Math. Comput. Appl., Volume 29, Issue 4 (August 2024) – 24 articles

Cover Story (view full-size image): Finite element modeling has become one of the main tools necessary for understanding cardiovascular homeostasis and lesion progression. The accuracy of such simulations significantly depends on the precision of material parameters, which are obtained via the mechanical characterization process, i.e., experimental testing and material parameter estimation using an optimization process. The process of mounting specimens on the machine often introduces slight preloading to avoid sagging and to ensure perpendicular orientation with respect to the loading axes. As such, the reference configuration exhibits non-zero forces at zero displacement. This error propagates into the material parameter estimation, where the initial loading is usually annulled manually. In this work, we have developed a new computational procedure that includes prestretches during mechanical characterization. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
28 pages, 73507 KiB  
Article
Numerical Modelling of Corrugated Paperboard Boxes
by Rhoda Ngira Aduke, Martin P. Venter and Corné J. Coetzee
Math. Comput. Appl. 2024, 29(4), 70; https://doi.org/10.3390/mca29040070 - 22 Aug 2024
Viewed by 305
Abstract
Numerical modelling of corrugated paperboard is quite challenging due to its waved geometry and material non-linearity, which is affected by the material properties of the individual paper sheets. Because of the complex geometry and material behaviour of the board, there is still scope to enhance the accuracy of current modelling techniques as well as to gain a better understanding of the structural performance of corrugated paperboard packaging for improved packaging design. In this study, four-point bending tests were carried out to determine the bending stiffness of un-creased samples in the machine direction (MD) and cross direction (CD). Bending tests were also carried out on creased samples with the fluting oriented in the CD and the crease at the centre. Inverse analysis was applied using the results from the bending tests to determine the material properties that accurately predict the bending stiffness of the horizontal creases, vertical creases, and panels of a box under compression loading. The finite element model of the box was divided into three sections: the horizontal creases, the vertical creases, and the box panels. Each of these sections is described using different material properties. The box edges/corners are described using the optimal material properties from bending and compression tests conducted on creased samples, while the box panels are described using the optimal material properties obtained from four-point bending tests conducted on samples without creases. A homogenised finite element (FE) model of a box was simulated using the obtained material properties and validated against experimental results. The developed FE model accurately predicted the failure load of a corrugated paperboard box under compression, with a deviation of 0.1% from the experimental results. Full article
22 pages, 1834 KiB  
Article
Analyzing Bifurcations and Optimal Control Strategies in SIRS Epidemic Models: Insights from Theory and COVID-19 Data
by Mohamed Cherif Belili, Mohamed Lamine Sahari, Omar Kebiri and Halim Zeghdoudi
Math. Comput. Appl. 2024, 29(4), 69; https://doi.org/10.3390/mca29040069 - 21 Aug 2024
Viewed by 346
Abstract
This study investigates the dynamic behavior of an SIRS epidemic model in discrete time, focusing primarily on mathematical analysis. We identify two equilibrium points, disease-free and endemic, with our main focus on the stability of the endemic state. Using data from the US Department of Health and optimizing the SIRS model, we estimate model parameters and analyze two types of bifurcations: Flip and Transcritical. Bifurcation diagrams and curves are presented, employing the Carcasses method for the Flip bifurcation and an implicit function approach for the Transcritical bifurcation. Finally, we apply constrained optimal control to the infection and recruitment rates in the discrete SIRS model. Pontryagin's maximum principle is employed to determine the optimal controls. Utilizing COVID-19 data from the USA, we showcase the effectiveness of the proposed control strategy in mitigating the pandemic's spread. Full article
(This article belongs to the Collection Mathematical Modelling of COVID-19)
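As a rough illustration of the discrete-time SIRS setting studied here, the sketch below iterates a generic SIRS map with recruitment, infection, recovery, and loss of immunity; the functional form, parameter names, and values are assumptions for illustration only and are not the formulation or the US-data estimates of the paper.

```python
def sirs_step(S, I, R, beta, gamma, xi, Lam, mu):
    """One step of a generic discrete-time SIRS map: recruitment Lam, infection rate beta,
    recovery rate gamma, loss of immunity xi, natural removal mu (illustrative form only)."""
    N = S + I + R
    new_inf = beta * S * I / N
    S_next = S + Lam - new_inf + xi * R - mu * S
    I_next = I + new_inf - gamma * I - mu * I
    R_next = R + gamma * I - xi * R - mu * R
    return S_next, I_next, R_next

# short trajectory with illustrative parameter values
S, I, R = 990.0, 10.0, 0.0
for t in range(200):
    S, I, R = sirs_step(S, I, R, beta=0.35, gamma=0.10, xi=0.05, Lam=1.0, mu=0.001)
print(f"S={S:.1f}, I={I:.1f}, R={R:.1f}")
```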
15 pages, 4012 KiB  
Article
Estimation of Anthocyanins in Heterogeneous and Homogeneous Bean Landraces Using Probabilistic Colorimetric Representation with a Neuroevolutionary Approach
by José-Luis Morales-Reyes, Elia-Nora Aquino-Bolaños, Héctor-Gabriel Acosta-Mesa and Aldo Márquez-Grajales
Math. Comput. Appl. 2024, 29(4), 68; https://doi.org/10.3390/mca29040068 - 19 Aug 2024
Viewed by 393
Abstract
The concentration of anthocyanins in common beans indicates their nutritional value. Understanding this concentration makes it possible to identify the functional compounds present. Previous studies have presented color characterization as two-dimensional histograms, based on the probability mass function. In this work, we proposed a new type of color characterization represented by three two-dimensional histograms that consider chromaticity and luminosity channels in order to verify the robustness of the information. Using a neuroevolutionary approach, we also found a convolutional neural network (CNN) for the regression task. The results demonstrate that using three two-dimensional histograms increases the accuracy compared to the color characterization represented by one two-dimensional histogram. As a result, the precision was 93.00 ± 5.26 for the HSI color space and 94.30 ± 8.61 for CIE L*a*b*. Our procedure is suitable for estimating anthocyanins in homogeneous and heterogeneous colored bean landraces. Full article
(This article belongs to the Special Issue New Trends in Computational Intelligence and Applications 2023)
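A minimal sketch of the colour characterization described above: three two-dimensional histograms built from chromaticity and luminosity channel pairs of CIE L*a*b* pixels and normalized to probability mass functions, ready to be stacked as CNN input channels. The bin count, axis ranges, and random pixels are assumptions and not the settings used in the paper.

```python
import numpy as np

def three_2d_histograms(lab_pixels, bins=32):
    """Three 2D probability-mass-function histograms over the channel pairs
    (a*, b*), (L*, a*), and (L*, b*) of CIE L*a*b* pixel values."""
    L, a, b = lab_pixels[:, 0], lab_pixels[:, 1], lab_pixels[:, 2]
    pairs = [(a, b, [(-128, 127), (-128, 127)]),
             (L, a, [(0, 100), (-128, 127)]),
             (L, b, [(0, 100), (-128, 127)])]
    hists = []
    for x, y, axis_range in pairs:
        h, _, _ = np.histogram2d(x, y, bins=bins, range=axis_range)
        hists.append(h / h.sum())        # normalize to a probability mass function
    return np.stack(hists)               # shape (3, bins, bins): CNN input channels

# random pixels standing in for a segmented bean-seed image
rng = np.random.default_rng(0)
pixels = np.column_stack([rng.uniform(0, 100, 5000),      # L*
                          rng.uniform(-128, 127, 5000),   # a*
                          rng.uniform(-128, 127, 5000)])  # b*
print(three_2d_histograms(pixels).shape)
```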
20 pages, 2538 KiB  
Article
Comparison of Interval Type-3 Mamdani and Sugeno Models for Fuzzy Aggregation Applied to Ensemble Neural Networks for Mexican Stock Exchange Time Series Prediction
by Martha Pulido, Patricia Melin, Oscar Castillo and Juan R. Castro
Math. Comput. Appl. 2024, 29(4), 67; https://doi.org/10.3390/mca29040067 - 19 Aug 2024
Viewed by 774
Abstract
In this work, interval type-2 and type-3 fuzzy systems of the Mamdani and Sugeno types were designed for time series prediction. The aggregation performed by the type-2 and type-3 fuzzy systems was carried out using the outputs of an ensemble neural network (ENN) optimized with the particle swarm optimization algorithm. The time series data used were from the Mexican Stock Exchange. The method aims to minimize the prediction error by aggregating the responses of the ENN with type-2 and type-3 fuzzy systems. In this case, the systems consist of five inputs and one output. Each input is made up of two membership functions, giving 32 possible fuzzy if-then rules. The simulation results show that the approach with type-2 and type-3 fuzzy systems provides a good prediction of the Mexican Stock Exchange. Statistical tests comparing type-1, type-2, and type-3 fuzzy systems are also presented. Full article
(This article belongs to the Section Engineering)
19 pages, 2402 KiB  
Article
Dynamical Properties of Perturbed Hill’s System
by Mohammed K. Ibrahim, Taha Rabeh and Elbaz I. Abouelmagd
Math. Comput. Appl. 2024, 29(4), 66; https://doi.org/10.3390/mca29040066 - 19 Aug 2024
Viewed by 692
Abstract
In this work, some dynamical properties of Hill's system are studied under the effect of a continued fraction perturbation. The locations and kinds of equilibrium points are identified, and it is demonstrated that these points are saddle points and that the general motion in their proximity is unstable. Furthermore, the curves of zero velocity and the regions of possible motion are defined at different Jacobian constant values. It is shown that the regions of forbidden motion grow with increasing Jacobian constant values, with a noticeable decrease in the permissible regions of motion, so that the body may take a path far from the primary body and escape along an unknown trajectory. Furthermore, the stability of the perturbed motion is analyzed in the linear sense, and it is observed that the linear motion is also unstable. Full article
26 pages, 8222 KiB  
Article
Enhancing LS-PIE’s Optimal Latent Dimensional Identification: Latent Expansion and Latent Condensation
by Jesse Stevens, Daniel N. Wilke and Isaac I. Setshedi
Math. Comput. Appl. 2024, 29(4), 65; https://doi.org/10.3390/mca29040065 - 16 Aug 2024
Viewed by 446
Abstract
The Latent Space Perspicacity and Interpretation Enhancement (LS-PIE) framework enhances dimensionality reduction methods for linear latent variable models (LVMs). This paper extends LS-PIE by introducing an optimal latent discovery strategy to automate the identification of optimal latent dimensions and projections based on user-defined metrics. The latent condensing (LCON) method clusters and condenses an extensive latent space into a compact form. A new approach, latent expansion (LEXP), incrementally increases latent dimensions using a linear LVM to find an optimal compact space. This study compares these methods across multiple datasets, including a simple toy problem, mixed signals, ECG data, and simulated vibrational data. LEXP can accelerate the discovery of optimal latent spaces and may yield different compact spaces from LCON, depending on the LVM. This paper highlights the LS-PIE algorithm's applications and compares LCON and LEXP in organising, ranking, and scoring latent components akin to principal component analysis or singular value decomposition. This paper shows clear improvements in the interpretability of the resulting latent representations, allowing for clearer and more focused analysis. Full article
13 pages, 2386 KiB  
Article
Evaluation of the Impact of Stress Distribution on Polyurethane Trileaflet Heart Valve Leaflets in the Open Configuration by Employing Numerical Simulation
by Lebohang Reginald Masheane, Willie du Preez and Jacques Combrinck
Math. Comput. Appl. 2024, 29(4), 64; https://doi.org/10.3390/mca29040064 - 10 Aug 2024
Viewed by 402
Abstract
It is costly and time-consuming to design and manufacture functional polyurethane heart valve prototypes in order to evaluate and comprehend their hemodynamic behaviour. To enhance the rapid and effective design of replacement heart valves, to meet the minimum criteria of FDA and ISO regulations and specifications, and to reduce the length of required clinical testing, computational fluid dynamics (CFD) and finite element analysis (FEA) were used. The results revealed that when the flexibility of the stent was taken into consideration with a uniform leaflet thickness, the stress concentration regions present close to the commissural attachment were greatly diminished. Furthermore, it was found that the stress on the leaflets was directly affected by reducing the post height on both rigid and flexible stents. When varying the leaflet thickness was considered, the high stress distribution close to the commissures appeared to reduce in thicker leaflet regions. However, thicker leaflets may result in a stiffer valve with a corresponding increase in pressure drop. It was concluded that a leaflet with a predefined varying thickness may be a better option. Full article
8 pages, 437 KiB  
Article
Accurate Analytical Approximation for the Bessel Function J2(x)
by Pablo Martin, Juan Pablo Ramos-Andrade, Fabián Caro-Pérez and Freddy Lastra
Math. Comput. Appl. 2024, 29(4), 63; https://doi.org/10.3390/mca29040063 - 9 Aug 2024
Viewed by 487
Abstract
We obtain an accurate analytic approximation for the Bessel function J2(x) using an improved multipoint quasirational approximation technique (MPQA). This new approximation is valid for all real values of the variable x, with a maximum absolute error of approximately 0.009. These errors have been analyzed in the interval from x=0 to x=1000, and we have found that the absolute errors for large x decrease logarithmically. The values of x at which the zeros of the exact function J2(x) and the approximated function J˜2(x) occur are also provided, exhibiting very small relative errors. The largest relative error is for the second zero, with εrel=0.0004, and the relative errors continuously decrease, reaching 0.0001 for the eleventh zero. The procedure to obtain this analytic approximation involves constructing a bridge function that connects the power series with the asymptotic approximation. This is achieved by using rational functions combined with other elementary functions, such as trigonometric and fractional power functions. Full article
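The paper's multipoint quasirational bridge function is not reproduced here, but the type of error analysis it reports can be sketched by comparing any candidate approximation against a reference implementation. The sketch below uses the leading-order large-x asymptotic form of J2(x) as a stand-in candidate and SciPy as the reference; it illustrates only the error-checking procedure, not the MPQA approximation itself.

```python
import numpy as np
from scipy.special import jv

def j2_candidate(x):
    """Leading-order asymptotic form J2(x) ~ sqrt(2/(pi*x)) * cos(x - 5*pi/4);
    a stand-in for the paper's multipoint quasirational approximation."""
    return np.sqrt(2.0 / (np.pi * x)) * np.cos(x - 5.0 * np.pi / 4.0)

x = np.linspace(1.0, 1000.0, 200_000)
abs_err = np.abs(jv(2, x) - j2_candidate(x))
print(f"max |J2(x) - candidate(x)| on [1, 1000]: {abs_err.max():.4f}")
```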
20 pages, 3009 KiB  
Article
Mathematical Structure of RelB Dynamics in the NF-κB Non-Canonical Pathway
by Toshihito Umegaki, Naoya Hatanaka and Takashi Suzuki
Math. Comput. Appl. 2024, 29(4), 62; https://doi.org/10.3390/mca29040062 - 5 Aug 2024
Viewed by 465
Abstract
This study analyzed the non-canonical NF-κB pathway, which controls functions distinct from those of the canonical pathway. Although oscillations of NF-κB have been observed in the non-canonical pathway, a detailed mechanism explaining the observed behavior remains elusive, owing to the different behaviors observed across cell types. This study demonstrated that oscillations cannot be produced by the experimentally observed pathway alone, thereby suggesting the existence of an unknown reaction pathway. Assuming this pathway, it became evident that the oscillatory structure of the non-canonical pathway was caused by stable periodic orbits. In addition, we demonstrated that altering the expression levels of specific proteins reproduced various behaviors. By fitting 14 parameters, excluding those measured in previous studies, this study successfully reproduces nuclear retention (saturation), oscillation, and singular events that had been experimentally confirmed. The analysis also provided a comprehensive understanding of the dynamics of the RelB protein and suggested a potential inhibitory role for the unknown factor. These findings indicate that the unknown factor may be an isoform of IκB, contributing to the regulation of NF-κB signaling. Based on these models, we gained valuable insight into biological systems, paving the way for the development of new strategies to manipulate specific biological processes. Full article
26 pages, 6480 KiB  
Article
Linear and Non-Linear Regression Methods for the Prediction of Lower Facial Measurements from Upper Facial Measurements
by Jacques Terblanche, Johan van der Merwe and Ryno Laubscher
Math. Comput. Appl. 2024, 29(4), 61; https://doi.org/10.3390/mca29040061 - 31 Jul 2024
Viewed by 503
Abstract
Accurate assessment and prediction of mandible shape are fundamental prerequisites for successful orthognathic surgery. Previous studies have predominantly used linear models to predict lower facial structures from facial landmarks or measurements, but the resulting prediction errors did not meet clinical tolerances. This paper compared non-linear models, namely a Multilayer Perceptron (MLP), a Mixture Density Network (MDN), and a Random Forest (RF) model, with a Linear Regression (LR) model in an attempt to improve prediction accuracy. The models were fitted to a dataset of measurements from 155 subjects. The test-set mean absolute errors (MAEs) for distance-based target features for the MLP, MDN, RF, and LR models were 2.77 mm, 2.79 mm, 2.95 mm, and 2.91 mm, respectively. Similarly, the MAEs for angle-based features were 3.09°, 3.11°, 3.07°, and 3.12°, respectively. All models had comparable performance, with the neural network-based methods having marginally fewer errors outside of clinical specifications. Therefore, while non-linear methods have the potential to outperform linear models in the prediction of lower facial measurements from upper facial measurements, the current results suggest that further refinement is necessary prior to clinical use. Full article
10 pages, 244 KiB  
Article
Noether Symmetries of the Triple Degenerate DNLS Equations
by Ugur Camci
Math. Comput. Appl. 2024, 29(4), 60; https://doi.org/10.3390/mca29040060 - 30 Jul 2024
Viewed by 450
Abstract
In this paper, Lie symmetries and Noether symmetries along with the corresponding conservation laws are derived for weakly nonlinear dispersive magnetohydrodynamic wave equations, also known as the triple degenerate derivative nonlinear Schrödinger equations. The main goal of this study is to obtain Noether symmetries of the second-order Lagrangian density for these equations using the Noether symmetry approach with a gauge term. For this Lagrangian density, we compute the conserved densities and fluxes corresponding to the Noether symmetries with a gauge term, which differ from the conserved densities obtained using Lie symmetries in Webb et al. (J. Plasma Phys. 1995, 54, 201–244; J. Phys. A Math. Gen. 1996, 29, 5209–5240). Furthermore, we find some new Lie symmetries of the dispersive triple degenerate derivative nonlinear Schrödinger equations for non-vanishing integration functions Ki(t) (i=1,2,3). Full article
(This article belongs to the Special Issue Symmetry Methods for Solving Differential Equations)
16 pages, 2822 KiB  
Article
FutureCite: Predicting Research Articles’ Impact Using Machine Learning and Text and Graph Mining Techniques
by Maha A. Thafar, Mashael M. Alsulami and Somayah Albaradei
Math. Comput. Appl. 2024, 29(4), 59; https://doi.org/10.3390/mca29040059 - 21 Jul 2024
Viewed by 698
Abstract
The number of academic and scientific publications has grown very rapidly. Researchers must choose representative and significant literature for their research, which has become a worldwide challenge. Usually, a paper's citation count indicates its potential influence and importance. However, this standard metric is not suitable for assessing the popularity and significance of recently published papers. To address this challenge, this study presents an effective prediction method called FutureCite to predict the future citation level of research articles. FutureCite integrates machine learning with text and graph mining techniques, leveraging their strengths in classification, in-depth dataset analysis, and feature extraction. FutureCite predicts the future citation levels of research articles using a multilabel classification approach. During feature extraction, it captures significant semantic features and the interconnection relationships found in scientific articles, using textual content, citation networks, and metadata as feature sources. The objective of this study is to contribute to the advancement of effective approaches for assessing the impact of scientific publications by enhancing the precision of future citation prediction. We conducted several experiments using a comprehensive publication dataset to evaluate our method and to determine the impact of using a variety of machine learning algorithms. FutureCite demonstrated its robustness and efficiency and showed promising results based on different evaluation metrics. The FutureCite model has significant implications for improving researchers' ability to identify relevant literature for their research and to better understand the potential impact of research publications. Full article
(This article belongs to the Topic New Advances in Granular Computing and Data Mining)
12 pages, 5151 KiB  
Article
Unraveling Time Series Dynamics: Evaluating Partial Autocorrelation Function Distribution and Its Implications
by Hossein Hassani, Leila Marvian, Masoud Yarmohammadi and Mohammad Reza Yeganegi
Math. Comput. Appl. 2024, 29(4), 58; https://doi.org/10.3390/mca29040058 - 19 Jul 2024
Viewed by 742
Abstract
The objective of this paper is to assess the distribution of the Partial Autocorrelation Function (PACF), both theoretically and empirically, emphasizing its crucial role in modeling and forecasting time series data. Additionally, it evaluates the deviation of the sum of the sample PACF from normality, identifying the lag at which the departure occurs. Our investigation reveals that the sum of the sample PACF, and consequently its components, diverges from the expected normal distribution beyond a certain lag. This observation challenges conventional assumptions in time series modeling and forecasting, indicating a need to reassess existing methodologies. Through our analysis, we illustrate the practical implications of our findings using real-world scenarios, highlighting their significance in unraveling complex data patterns. This study examines 185 years of monthly Bank of England Rate data, utilizing this extensive dataset to conduct an empirical analysis. Furthermore, our research paves the way for future exploration, offering insights into the complexities and potential revisions in time series analysis, modeling, and forecasting. Full article
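A small sketch of the kind of empirical check discussed above: compute the sample PACF of a long series and compare the running sum across lags with the naive band implied by independent N(0, 1/n) terms; that independence assumption is precisely what the paper puts under scrutiny. The AR(1) series below merely stands in for the Bank of England rate data.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

# AR(1) series standing in for roughly 185 years of monthly observations
rng = np.random.default_rng(1)
n, phi = 2220, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

p = pacf(x, nlags=60)            # sample PACF; p[0] equals 1 by convention
cum = np.cumsum(p[1:])           # running sum of the sample PACF over lags 1..60
k = np.arange(1, 61)
band = 1.96 * np.sqrt(k / n)     # naive band assuming independent N(0, 1/n) terms
print("lags where the cumulative sum leaves the naive band:", k[np.abs(cum) > band][:5])
```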
16 pages, 2099 KiB  
Article
Modeling of the Human Cardiovascular System: Implementing a Sliding Mode Observer for Fault Detection and Isolation
by Dulce A. Serrano-Cruz, Latifa Boutat-Baddas, Mohamed Darouach, Carlos M. Astorga-Zaragoza and Gerardo V. Guerrero Ramírez
Math. Comput. Appl. 2024, 29(4), 57; https://doi.org/10.3390/mca29040057 - 17 Jul 2024
Viewed by 645
Abstract
This paper presents a mathematical model of the cardiovascular system (CVS) designed to simulate both normal and pathological conditions within the systemic circulation. The model introduces a novel representation of the CVS through a change of coordinates, transforming it into the “quadratic normal form”. This model facilitates the implementation of a sliding mode observer (SMO), allowing for the estimation of system states and the detection of anomalies, even though the system is linearly unobservable. The primary focus is on identifying valvular heart diseases, which are significant risk factors for cardiovascular diseases. The model’s validity is confirmed through simulations that replicate hemodynamic parameters, aligning with existing literature and experimental data. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2024)
18 pages, 625 KiB  
Article
Computational Cost Reduction in Multi-Objective Feature Selection Using Permutational-Based Differential Evolution
by Jesús-Arnulfo Barradas-Palmeros, Efrén Mezura-Montes, Rafael Rivera-López, Hector-Gabriel Acosta-Mesa and Aldo Márquez-Grajales
Math. Comput. Appl. 2024, 29(4), 56; https://doi.org/10.3390/mca29040056 - 13 Jul 2024
Viewed by 607
Abstract
Feature selection is a preprocessing step in machine learning that aims to reduce dimensionality and improve performance. Feature selection approaches are often classified, according to how a subset of features is evaluated, into filter, wrapper, and embedded approaches. The high performance of wrapper approaches comes with the disadvantage of high computational cost. Cost-reduction mechanisms for feature selection have been proposed in the literature, achieving competitive performance more efficiently. This work applies the simple and effective resource-saving mechanisms of the fixed and incremental sampling fraction strategies with memory to avoid repeated evaluations in multi-objective permutational-based differential evolution for feature selection. The selected multi-objective approach is an extension of the DE-FSPM algorithm with the selection mechanism of the GDE3 algorithm. The results showed high resource savings, especially in computational time and the number of evaluations required for the search process. Nonetheless, it was also observed that the algorithm's performance was diminished. Therefore, the results reported in the literature on the effectiveness of the strategies for cost reduction in single-objective feature selection were only partially sustained in multi-objective feature selection. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2024)
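A toy sketch of the memory mechanism for avoiding repeated evaluations: candidate feature subsets are cached by their binary mask, so a subset encountered again during the search costs nothing to re-evaluate. The random-search loop and the nearest-centroid objective are placeholders for the permutational differential evolution and the wrapper classifier used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 8
X = rng.standard_normal((200, n_features))
w = np.zeros(n_features)
w[:3] = 1.0
y = (X @ w + 0.1 * rng.standard_normal(200) > 0).astype(int)

cache = {}            # memory: feature mask -> objective value
expensive_calls = 0

def subset_error(mask):
    """Wrapper-style objective (nearest-centroid training error) on the selected features."""
    global expensive_calls
    key = tuple(int(m) for m in mask)
    if key in cache:
        return cache[key]
    expensive_calls += 1
    cols = [i for i, m in enumerate(key) if m]
    if not cols:
        err = 1.0
    else:
        Xs = X[:, cols]
        c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
        pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
        err = float((pred != y).mean())
    cache[key] = err
    return err

# random search standing in for the evolutionary loop
for _ in range(500):
    subset_error(rng.integers(0, 2, n_features))
print(f"candidates seen: 500, expensive evaluations: {expensive_calls}")
```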
21 pages, 6809 KiB  
Article
Improved Mechanical Characterization of Soft Tissues Including Mounting Stretches
by Toni Škugor, Lana Virag, Gerhard Sommer and Igor Karšaj
Math. Comput. Appl. 2024, 29(4), 55; https://doi.org/10.3390/mca29040055 - 12 Jul 2024
Viewed by 611
Abstract
Finite element modeling has become one of the main tools necessary for understanding cardiovascular homeostasis and lesion progression. The accuracy of such simulations significantly depends on the precision of material parameters, which are obtained via the mechanical characterization process, i.e., experimental testing and material parameter estimation using an optimization process. The process of mounting specimens on the machine often introduces slight preloading to avoid sagging and to ensure perpendicular orientation with respect to the loading axes. As such, the reference configuration exhibits non-zero forces at zero displacement. This error propagates into the material parameter estimation, where the initial loading is usually annulled manually. In this work, we have developed a new computational procedure that includes prestretches during mechanical characterization. The verification of the procedure was performed on a series of simulated virtual planar biaxial experiments using the Gasser–Ogden–Holzapfel material model, for which the exact material parameters could be set and compared to the obtained ones. Furthermore, we have applied our procedure to data gathered from biaxial experiments on aortic tissue and compared it with the results obtained through the standard optimization procedure. The analysis has shown a significant difference between the material parameters obtained. The error increases with the prestretches and decreases with an increase in the maximal experimental stretches. Full article
17 pages, 360 KiB  
Article
Novel Results on Legendre Polynomials in the Sense of a Generalized Fractional Derivative
by Francisco Martínez, Mohammed K. A. Kaabar and Inmaculada Martínez
Math. Comput. Appl. 2024, 29(4), 54; https://doi.org/10.3390/mca29040054 - 12 Jul 2024
Viewed by 931
Abstract
In this article, new results are investigated in the context of the recently introduced Abu-Shady–Kaabar fractional derivative. First, we solve the generalized Legendre fractional differential equation. As in the classical case, the generalized Legendre polynomials constitute notable solutions to the aforementioned fractional differential equation. In the sense of the Abu-Shady–Kaabar fractional derivative, we establish important properties of the generalized Legendre polynomials, such as the Rodrigues formula and recurrence relations. Special attention is also devoted to another very important property of Legendre polynomials: their orthogonality. Finally, the representation of a function f ∈ Lα2([−1,1]) in a series of generalized Legendre polynomials is addressed. Full article
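For reference, the classical Legendre polynomials satisfy the Bonnet three-term recurrence (k+1) P_{k+1}(x) = (2k+1) x P_k(x) − k P_{k−1}(x) and are orthogonal on [−1,1]. The sketch below evaluates them with this recurrence and checks orthogonality numerically; it covers only the classical case and not the generalized fractional construction of the paper.

```python
from scipy.integrate import quad

def legendre(n, x):
    """Classical Legendre polynomial P_n(x) via the Bonnet three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p_curr = 1.0, x
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

# orthogonality on [-1, 1]: <P_2, P_3> should vanish and <P_3, P_3> should equal 2/7
print(quad(lambda x: legendre(2, x) * legendre(3, x), -1, 1)[0])
print(quad(lambda x: legendre(3, x) ** 2, -1, 1)[0], 2 / 7)
```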
1 page, 148 KiB  
Correction
Correction: Angelova et al. Estimating Surface EMG Activity of Human Upper Arm Muscles Using InterCriteria Analysis. Math. Comput. Appl. 2024, 29, 8
by Silvija Angelova, Maria Angelova and Rositsa Raikova
Math. Comput. Appl. 2024, 29(4), 53; https://doi.org/10.3390/mca29040053 - 11 Jul 2024
Viewed by 326
Abstract
Due to imprecise meaning in the original publication [...] Full article
32 pages, 8070 KiB  
Article
A Condition-Monitoring Methodology Using Deep Learning-Based Surrogate Models and Parameter Identification Applied to Heat Pumps
by Pieter Rousseau and Ryno Laubscher
Math. Comput. Appl. 2024, 29(4), 52; https://doi.org/10.3390/mca29040052 - 5 Jul 2024
Viewed by 890
Abstract
Online condition-monitoring techniques that are used to reveal incipient faults before breakdowns occur are typically data-driven or model-based. We propose the use of a fundamental physics-based thermofluid model of a heat pump cycle combined with deep learning-based surrogate models and parameter identification in order to simultaneously detect, locate, and quantify degradation occurring in the different components. The methodology is demonstrated with the aid of synthetically generated data, which include the effect of measurement uncertainty. A “forward” neural network surrogate model is trained and then combined with parameter identification which minimizes the residuals between the surrogate model results and the measured plant data. For the forward approach using four measured performance parameters with 100 or more measured data points, very good prediction accuracy is achieved, even with as much as 20% noise imposed on the measured data. Very good accuracy is also achieved with as few as 10 measured data points with noise up to 5%. However, prediction accuracy is reduced with less data points and more measurement uncertainty. A “backward” neural network surrogate model can also be applied directly without parameter identification and is therefore much faster. However, it is more challenging to train and produce less accurate predictions. The forward approach is fast enough so that the calculation time does not impede its application in practice, and it can still be applied if some of the measured performance parameters are no longer available, due to sensor failure for instance, albeit with reduced accuracy. Full article
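A compressed sketch of the "forward" approach described above: a surrogate maps component degradation factors to measured performance parameters, and parameter identification minimizes the residuals between surrogate outputs and noisy plant measurements. The analytic surrogate, the four performance parameters, and all numerical values are illustrative stand-ins for the trained deep-learning surrogate and heat pump data of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def surrogate(health):
    """Stand-in forward surrogate: degradation factors -> four performance parameters."""
    h_comp, h_evap, h_cond = health
    return np.array([4.5 * h_comp * h_cond,         # COP
                     12.0 * h_comp * h_evap,        # heating capacity [kW]
                     35.0 + 5.0 * (1.0 - h_cond),   # condenser outlet temperature [deg C]
                     2.0 + 3.0 * (1.0 - h_evap)])   # evaporator approach [K]

rng = np.random.default_rng(0)
true_health = np.array([0.92, 0.85, 0.97])
measured = surrogate(true_health) * (1.0 + 0.02 * rng.standard_normal((100, 4)))  # noisy data

def residuals(health):
    return (measured - surrogate(health)).ravel()

fit = least_squares(residuals, x0=np.ones(3), bounds=(0.5, 1.1))
print("recovered degradation factors:", np.round(fit.x, 3))
```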
15 pages, 823 KiB  
Article
H∞ State and Parameter Estimation for Lipschitz Nonlinear Systems
by Pedro Eusebio Alvarado-Méndez, Carlos M. Astorga-Zaragoza, Gloria L. Osorio-Gordillo, Adriana Aguilera-González, Rodolfo Vargas-Méndez and Juan Reyes-Reyes
Math. Comput. Appl. 2024, 29(4), 51; https://doi.org/10.3390/mca29040051 - 4 Jul 2024
Viewed by 628
Abstract
An H∞ robust adaptive nonlinear observer for state and parameter estimation of a class of Lipschitz nonlinear systems with disturbances is presented in this work. The objective is to estimate parameters and monitor the performance of nonlinear processes with model uncertainties. The behavior of the observer in the presence of disturbances is analyzed using Lyapunov stability theory and by considering an H∞ performance criterion. Numerical simulations were carried out to demonstrate the applicability of this observer for a semi-active car suspension. The adaptive observer performed well in estimating the tire rigidity (as an unknown parameter) and induced disturbances representing damage to the damper. The main contribution is the proposal of an alternative methodology for simultaneous parameter and actuator disturbance estimation for a more general class of nonlinear systems. Full article
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2024)
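A minimal sketch of the observer structure for a Lipschitz nonlinear system: a copy of the plant model driven by a Luenberger-type output-injection term L(y − C x_hat), with the Lipschitz nonlinearity evaluated at the estimate. The second-order system, the fixed gain, and the disturbance below are illustrative assumptions, unrelated to the semi-active suspension model; the paper's H∞ design conditions and adaptive parameter-estimation law are not reproduced.

```python
import numpy as np

# plant: x_dot = A x + phi(x) + E d,  y = C x, with phi Lipschitz
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[8.0], [10.0]])        # fixed gain chosen so that A - L C is Hurwitz

def phi(x):
    return np.array([0.0, 0.1 * np.sin(x[0])])   # Lipschitz nonlinearity

dt, T = 1e-3, 10.0
x = np.array([1.0, 0.0])             # true state
x_hat = np.zeros(2)                  # observer estimate
for k in range(int(T / dt)):
    d = 0.05 * np.sin(2.0 * np.pi * k * dt)      # bounded disturbance on the plant
    y = C @ x
    x = x + dt * (A @ x + phi(x) + np.array([0.0, d]))
    x_hat = x_hat + dt * (A @ x_hat + phi(x_hat) + L @ (y - C @ x_hat))
print("final estimation error:", np.round(x - x_hat, 4))
```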
13 pages, 314 KiB  
Article
Fuzzy Bipolar Hypersoft Sets: A Novel Approach for Decision-Making Applications
by Baravan A. Asaad, Sagvan Y. Musa and Zanyar A. Ameen
Math. Comput. Appl. 2024, 29(4), 50; https://doi.org/10.3390/mca29040050 - 2 Jul 2024
Viewed by 680
Abstract
This article presents a pioneering mathematical model, fuzzy bipolar hypersoft (FBHS) sets, which combines the bipolarity of parameters with the fuzziness of data. Motivated by the need for a comprehensive framework capable of addressing uncertainty and variability in complex phenomena, our approach introduces a novel method for representing both the presence and absence of parameters through FBHS sets. By employing two mappings to estimate positive and negative fuzziness levels, we bridge the gap between bipolarity, fuzziness, and parameterization, allowing for more realistic simulations of multifaceted scenarios. Compared to existing models like bipolar fuzzy hypersoft (BFHS) sets, FBHS sets offer a more intuitive and user-friendly approach to modeling phenomena involving bipolarity, fuzziness, and parameterization. This advantage is underscored by a detailed comparison and a practical example illustrating FBHS sets’ superiority in modeling such phenomena. Additionally, this paper provides an in-depth exploration of fundamental FBHS set operations, highlighting their robustness and applicability in various contexts. Finally, we demonstrate the practical utility of FBHS sets in problem-solving and introduce an algorithm for optimal object selection based on available information sets, further emphasizing the advantages of our proposed framework. Full article
27 pages, 649 KiB  
Review
IoT-Driven Transformation of Circular Economy Efficiency: An Overview
by Zenonas Turskis and Violeta Šniokienė
Math. Comput. Appl. 2024, 29(4), 49; https://doi.org/10.3390/mca29040049 - 28 Jun 2024
Viewed by 1038
Abstract
The intersection of the Internet of Things (IoT) and the circular economy (CE) creates a revolutionary opportunity to redefine economic sustainability and resilience. This review article explores the intricate interplay between IoT technologies and CE economics, investigating how the IoT transforms supply chain management, optimises resources, and revolutionises business models. IoT applications boost efficiency, reduce waste, and prolong product lifecycles through data analytics, real-time tracking, and automation. The integration of the IoT also fosters the emergence of inventive circular business models, such as product-as-a-service and sharing economies, offering economic benefits and novel market opportunities. This amalgamation with the IoT holds substantial implications for sustainability, advancing environmental stewardship and propelling economic growth within emerging CE marketplaces. This comprehensive review unfolds a roadmap for comprehending and implementing the pivotal components propelling the IoT’s transformation toward CE economics, nurturing a sustainable and resilient future. Embracing IoT technologies, the authors embark on a journey transcending mere efficiency, heralding an era where economic progress harmonises with full environmental responsibility and the CE’s promise. Full article
24 pages, 3501 KiB  
Article
Induction of Convolutional Decision Trees with Success-History-Based Adaptive Differential Evolution for Semantic Segmentation
by Adriana-Laura López-Lobato, Héctor-Gabriel Acosta-Mesa and Efrén Mezura-Montes
Math. Comput. Appl. 2024, 29(4), 48; https://doi.org/10.3390/mca29040048 - 27 Jun 2024
Viewed by 630
Abstract
Semantic segmentation is an essential process in computer vision that allows users to differentiate objects of interest from the background of an image by assigning labels to the image pixels. While Convolutional Neural Networks have been widely used to solve the image segmentation problem, simpler approaches have recently been explored, especially in fields where explainability is essential, such as medicine. A Convolutional Decision Tree (CDT) is a machine learning model for image segmentation. Its graphical structure and simplicity make it easy to interpret, as it clearly shows how pixels in an image are classified in an image segmentation task. This paper proposes new approaches for inducing a CDT to solve the image segmentation problem using SHADE, an adaptive differential evolution algorithm that uses a historical memory of successful parameters to guide the optimization process. Experiments were performed on the Weizmann Horse dataset and on blood detection in dark-field microscopy images to compare the proposals in this article with previous results obtained through the traditional differential evolution process. Full article
(This article belongs to the Special Issue New Trends in Computational Intelligence and Applications 2023)
24 pages, 789 KiB  
Article
Partitioning Uncertainty in Model Predictions from Compartmental Modeling of Global Carbon Cycle
by Suzan Gazioğlu
Math. Comput. Appl. 2024, 29(4), 47; https://doi.org/10.3390/mca29040047 - 22 Jun 2024
Viewed by 528
Abstract
Our comprehension of the real world remains perpetually incomplete, compelling us to rely on models to decipher intricate real-world phenomena. However, these models, at their pinnacle, serve merely as close approximations of the systems they seek to emulate, inherently laden with uncertainty. Therefore, investigating the disparities between observed system behaviors and model-derived predictions is of paramount importance. Although achieving absolute quantification of uncertainty in the model-building process remains challenging, there are avenues for both mitigating and highlighting areas of uncertainty. Central to this study are three key sources of uncertainty, each exerting significant influence: (i) structural uncertainty arising from inadequacies in mathematical formulations within the conceptual models; (ii) scenario uncertainty stemming from our limited foresight or inability to forecast future conditions; and (iii) input factor uncertainty resulting from vaguely defined or estimated input factors. Through uncertainty analysis, this research endeavors to understand these uncertainty domains within compartmental models, which are instrumental in depicting the complexities of the global carbon cycle. The results indicate that parameter uncertainty has the most significant impact on model outputs, followed by structural and scenario uncertainties. Evident deviations between the observed atmospheric CO2 content and simulated data underscore the substantial contribution of certain uncertainties to the overall estimated uncertainty. The conclusions emphasize the need for comprehensive uncertainty quantification to enhance model reliability and the importance of addressing these uncertainties to improve predictions related to global carbon dynamics and inform policy decisions. This paper employs partitioning techniques to discern the contributions of the aforementioned primary sources of uncertainty to the overarching prediction uncertainty. Full article
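A toy sketch of partitioning prediction variance among structural, scenario, and parameter uncertainty: run a model over its structural variants, scenarios, and parameter draws, attribute variance from the conditional means of each source, and assign the remainder to parameters (a crude, first-order split). The stand-in model and all numbers are assumptions and bear no relation to the compartmental carbon-cycle models of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(structure, scenario, params):
    """Stand-in for one model run: structural variant + emission scenario + sampled
    parameters -> a scalar output (e.g., atmospheric CO2 at some horizon)."""
    base = {0: 400.0, 1: 410.0}[structure]
    growth = {"low": 1.5, "high": 3.0}[scenario]
    k, q = params
    return base + growth * 50.0 / (1.0 + k) + 10.0 * q

structures, scenarios, n = [0, 1], ["low", "high"], 2000
out = np.empty((2, 2, n))
for i, s in enumerate(structures):
    for j, sc in enumerate(scenarios):
        for m in range(n):
            out[i, j, m] = toy_model(s, sc, (rng.lognormal(0.0, 0.3), rng.normal()))

total = out.var()
v_structure = out.mean(axis=(1, 2)).var()     # variance of structure-conditional means
v_scenario = out.mean(axis=(0, 2)).var()      # variance of scenario-conditional means
v_params = total - v_structure - v_scenario   # remainder attributed to parameters
for name, v in [("structure", v_structure), ("scenario", v_scenario), ("parameters", v_params)]:
    print(f"{name}: {100 * v / total:.1f}% of total variance")
```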