Journal Description
Computation is a peer-reviewed journal of computational science and engineering published monthly online by MDPI.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), CAPlus / SciFinder, Inspec, dblp, and other databases.
- Journal Rank: JCR - Q2 (Mathematics, Interdisciplinary Applications) / CiteScore - Q1 (Applied Mathematics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 16.7 days after submission; acceptance to publication is undertaken in 5.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 1.9 (2024); 5-Year Impact Factor: 1.9 (2024)
Latest Articles
Trapped Modes Along Periodic Structures Submerged in a Three-Layer Fluid with a Background Steady Flow
Computation 2025, 13(8), 176; https://doi.org/10.3390/computation13080176 - 22 Jul 2025
Abstract
In this study, we investigate the trapping of linear water waves by infinite arrays of three-dimensional fixed periodic structures in a three-layer fluid. Each layer has an independent uniform velocity field with respect to the fixed ground in addition to the internal modes along the interfaces between layers. Dynamical stability between velocity shear and gravitational pull constrains the layer velocities to a neighbourhood of the diagonal in velocity space. A non-linear spectral problem results from the variational formulation. This problem can be linearized, resulting in a geometric condition (from energy minimization) that ensures the existence of trapped modes within the limits set by stability. These modes are solutions living in the discrete spectrum that do not radiate energy to infinity. Symmetries reduce the global problem to solutions in the first octant of the three-dimensional velocity space. Examples are shown of configurations of obstacles that satisfy the stability and geometric conditions, depending on the values of the layer velocities. The robustness of the result of the vertical column from previous studies is confirmed in the new configurations. This allows comparison principles (Cavalieri’s principle, etc.) to be used in determining whether trapped modes are generated.
Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
Open Access Article
Ionic and Electrotonic Contributions to Short-Term Ventricular Action Potential Memory: An In Silico Study
by
Massimiliano Zaniboni
Computation 2025, 13(7), 175; https://doi.org/10.3390/computation13070175 - 20 Jul 2025
Abstract
Electrical restitution (ER) is a determinant of cardiac repolarization stability and can be measured as steady action potential (AP) duration (APD) at different pacing rates—the so-called dynamic restitution (ERdyn) curve—or as APD changes after pre- or post-mature stimulations—the so-called standard restitution (ERs1s2) curve. Short-term AP memory (Ms) has been described as the slope difference between the ERdyn and ERs1s2 curves, and represents the information stored in repolarization dynamics due to previous pacing conditions. Although previous studies have shown its dependence on ion currents and calcium cycling, a systematic picture of these features is lacking. By means of simulations with a human ventricular AP model, I show that APD restitution can be described under randomly changing pacing conditions (ERrand) and Ms derived as the slope difference between ERdyn and ERrand. Thus measured, Ms values correlate with those measured using ERs1s2. I investigate the effect on Ms of modulating the conductance of ion channels involved in AP repolarization, and of abolishing intracellular calcium transient. I show that Ms is chiefly determined by ERdyn rather than ERrand, and that interventions that shorten/prolong APD tend to decrease/increase Ms.
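For readers unfamiliar with the quantity, a minimal sketch of how a slope-difference memory index could be computed from two restitution curves is shown below; the diastolic intervals, APD values, and the use of a single global linear fit are illustrative assumptions, not the model or data of the study.

```python
import numpy as np

# Hypothetical illustration: APD restitution curves sampled at the same
# diastolic intervals (DI, ms). Values are synthetic, not from the study.
di = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
apd_dyn = np.array([210.0, 240.0, 258.0, 268.0, 274.0])   # dynamic pacing (ERdyn)
apd_rand = np.array([222.0, 244.0, 256.0, 263.0, 267.0])  # random pacing (ERrand)

# Fit a straight line to each curve and take the slope difference as a
# simple surrogate of short-term memory Ms; local slopes could be used instead.
slope_dyn = np.polyfit(di, apd_dyn, 1)[0]
slope_rand = np.polyfit(di, apd_rand, 1)[0]
ms = slope_dyn - slope_rand
print(f"slope ERdyn = {slope_dyn:.3f}, slope ERrand = {slope_rand:.3f}, Ms = {ms:.3f}")
```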
Full article
Open Access Article
Decision-Level Multi-Sensor Fusion to Improve Limitations of Single-Camera-Based CNN Classification in Precision Farming: Application in Weed Detection
by
Md. Nazmuzzaman Khan, Adibuzzaman Rahi, Mohammad Al Hasan and Sohel Anwar
Computation 2025, 13(7), 174; https://doi.org/10.3390/computation13070174 - 18 Jul 2025
Abstract
The United States leads the world in corn production and consumption, with an estimated value of USD 50 billion per year. There is a pressing need for novel and efficient techniques that enhance the identification and eradication of weeds in a manner that is both environmentally sustainable and economically advantageous. Weed classification for autonomous agricultural robots is a challenging task for a single-camera-based system due to noise, vibration, and occlusion. To address this issue, we present a multi-camera-based system with decision-level sensor fusion to overcome the limitations of a single-camera-based system. This study uses a convolutional neural network (CNN) that was pre-trained on the ImageNet dataset and re-trained on a limited weed dataset to classify three weed species frequently encountered in corn fields: Xanthium strumarium (Common Cocklebur), Amaranthus retroflexus (Redroot Pigweed), and Ambrosia trifida (Giant Ragweed). The test results showed that the re-trained VGG16 with a transfer-learning-based classifier exhibited acceptable accuracy (99% training, 97% validation, 94% testing accuracy), and its inference time for weed classification from the video feed was suitable for real-time implementation. However, the accuracy of CNN-based classification from the video feed of a single camera was found to deteriorate due to noise, vibration, and partial occlusion of weeds, and single-camera test results show that classification is not always reliable enough for the spray system of an agricultural robot (AgBot). To improve the accuracy of the weed classification system and to overcome the shortcomings of single-sensor-based CNN classification, an improved Dempster–Shafer (DS)-based decision-level multi-sensor fusion algorithm was developed and implemented. The proposed algorithm improves CNN-based weed classification when the weed is partially occluded. It can also detect a faulty sensor within an array of sensors and improves the overall classification accuracy by penalizing the evidence from that sensor. Overall, the proposed fusion algorithm showed robust results in challenging scenarios, overcoming the limitations of a single-sensor-based system.
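As an illustration of the fusion step, the following sketch implements Dempster's classical rule of combination for two camera-level basic probability assignments; the weed classes, mass values, and the `combine` helper are hypothetical and do not reproduce the paper's improved DS algorithm or its faulty-sensor penalty.

```python
from itertools import product

# Minimal sketch of Dempster's rule of combination for two camera "sensors".
# The frame of discernment and mass values are illustrative, not the paper's.
FRAME = frozenset({"cocklebur", "pigweed", "ragweed"})

def combine(m1, m2):
    """Combine two basic probability assignments (dicts frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalise by the non-conflicting mass (classical Dempster rule).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Camera 1 is fairly sure it sees pigweed; camera 2 is partially occluded
# and spreads its belief, keeping some mass on the whole frame (ignorance).
m_cam1 = {frozenset({"pigweed"}): 0.7, FRAME: 0.3}
m_cam2 = {frozenset({"pigweed", "ragweed"}): 0.5, FRAME: 0.5}

print(combine(m_cam1, m_cam2))
```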
Full article
(This article belongs to the Special Issue Moving Object Detection Using Computational Methods and Modeling)
Open Access Review
Strategic Decision-Making in SMEs: A Review of Heuristics and Machine Learning for Multi-Objective Optimization
by
Gines Molina-Abril, Laura Calvet, Angel A. Juan and Daniel Riera
Computation 2025, 13(7), 173; https://doi.org/10.3390/computation13070173 - 18 Jul 2025
Abstract
Small- and medium-sized enterprises (SMEs) face dynamic and competitive environments where resilience and data-driven decision-making are critical. Despite the potential benefits of artificial intelligence (AI), machine learning (ML), and optimization techniques, SMEs often struggle to adopt these tools due to high costs, limited training, and restricted hardware access. This study reviews how SMEs can employ heuristics, metaheuristics, ML, and hybrid approaches to support strategic decisions under uncertainty and resource constraints. Using bibliometric mapping with UMAP and BERTopic, 82 key works are identified and clustered into 11 thematic areas. From this, the study develops a practical framework for implementing and evaluating optimization strategies tailored to SMEs’ limitations. The results highlight critical application areas, adoption barriers, and success factors, showing that heuristics and hybrid methods are especially effective for multi-objective optimization with lower computational demands. The study also outlines research gaps and proposes future directions to foster digital transformation in SMEs. Unlike prior reviews focused on specific industries or methods, this work offers a cross-sectoral perspective, emphasizing how these technologies can strengthen SME resilience and strategic planning.
Full article
(This article belongs to the Section Computational Social Science)
Open Access Article
Mapping the Intellectual Structure of Computational Risk Analytics in Banking and Finance: A Bibliometric and Thematic Evolution Study
by
Sotirios J. Trigkas, Kanellos Toudas and Ioannis Chasiotis
Computation 2025, 13(7), 172; https://doi.org/10.3390/computation13070172 - 17 Jul 2025
Abstract
Modern financial practices introduce complex risks, which in turn force financial institutions to rely increasingly on computational risk analytics (CRA). The purpose of our research is to systematically explore the evolution and intellectual structure of CRA in banking through a detailed bibliometric analysis of the literature sourced from Web of Science from 2000 to 2025. A comprehensive search in the Web of Science (WoS) Core Collection yielded 1083 peer-reviewed publications, which we analyzed using VOSviewer 1.6.20 and Bibliometrix (Biblioshiny 5.0) to examine the dataset and uncover bibliometric characteristics such as citation patterns, keyword occurrences, and thematic clustering. Our analysis uncovers key research clusters focusing on bankruptcy prediction, AI integration in financial services, and advanced deep learning applications. Furthermore, our findings indicate a transition of CRA from an emerging to an expanding domain, especially after 2019, with machine learning (ML), artificial intelligence (AI), and deep learning (DL) identified as prominent keywords and a recent shift towards blockchain, explainability, and financial stability. This study addresses the need for an updated mapping of CRA, providing valuable insights for future academic inquiry and practical financial risk management applications.
Full article
(This article belongs to the Special Issue Modern Applications for Computational Methods in Applied Economics and Business Engineering)
Open Access Article
Construction and Evaluation of a Domain-Related Risk Model for Prognosis Prediction in Colorectal Cancer
by
Xiangjun Cui, Yongqiang Xing, Guoqing Liu, Hongyu Zhao and Zhenhua Yang
Computation 2025, 13(7), 171; https://doi.org/10.3390/computation13070171 - 17 Jul 2025
Abstract
Background: Epigenomic instability accelerates mutations in tumor suppressor genes and oncogenes, contributing to malignant transformation. Histone modifications, particularly methylation and acetylation, significantly influence tumor biology, with chromo-, bromo-, and Tudor domain-containing proteins mediating these changes. This study investigates how genes encoding these domain-containing proteins affect colorectal cancer (CRC) prognosis. Methods: Using CRC data from the GSE39582 and TCGA datasets, we identified domain-related genes via GeneCards and developed a prognostic signature using LASSO-COX regression. Patients were classified into high- and low-risk groups, and comparisons were made across survival, clinical features, immune cell infiltration, immunotherapy responses, and drug sensitivity predictions. Single-cell analysis assessed gene expression in different cell subsets. Results: Four domain-related genes (AKAP1, ORC1, CHAF1A, and UHRF2) were identified as a prognostic signature. Validation confirmed their prognostic value, with significant differences in survival, clinical features, immune patterns, and immunotherapy responses between the high- and low-risk groups. Drug sensitivity analysis revealed top candidates for CRC treatment. Single-cell analysis showed varied expression of these genes across cell subsets. Conclusions: This study presents a novel prognostic signature based on domain-related genes that can predict CRC severity and offer insights into immune dynamics, providing a promising tool for personalized risk assessment in CRC.
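A minimal sketch of how such a signature is typically turned into a risk stratification (a linear risk score followed by a median split) is given below; the coefficients, expression matrix, and threshold are made-up placeholders, not the fitted LASSO-Cox model of the study.

```python
import numpy as np

# Illustrative sketch of risk-score stratification from a gene signature.
# Coefficients and expression values are made up; in the study they come
# from LASSO-Cox regression on AKAP1, ORC1, CHAF1A and UHRF2.
genes = ["AKAP1", "ORC1", "CHAF1A", "UHRF2"]
coef = np.array([0.42, -0.31, 0.18, 0.27])               # hypothetical Cox coefficients
expr = np.random.default_rng(0).normal(size=(100, 4))    # 100 patients x 4 genes (z-scored)

risk_score = expr @ coef                         # linear predictor per patient
high_risk = risk_score > np.median(risk_score)   # median split into two groups
print(f"high-risk patients: {high_risk.sum()}, low-risk patients: {(~high_risk).sum()}")
```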
Full article
(This article belongs to the Special Issue Integrative Computational Methods for Second-and Third-Generation Sequencing Data)
Open Access Article
First-Principles Insights into Mo and Chalcogen Dopant Positions in Anatase, TiO2
by
W. A. Chapa Pamodani Wanniarachchi, Ponniah Vajeeston, Talal Rahman and Dhayalan Velauthapillai
Computation 2025, 13(7), 170; https://doi.org/10.3390/computation13070170 - 14 Jul 2025
Abstract
This study employs density functional theory (DFT) to investigate the electronic and optical properties of molybdenum (Mo) and chalcogen (S, Se, Te) co-doped anatase TiO2. Two co-doping configurations were examined: Model 1, where the dopants are adjacent, and Model 2, where the dopants are farther apart. The incorporation of Mo into anatase TiO2 resulted in a significant bandgap reduction, lowering it from 3.22 eV (pure TiO2) to a range of 2.52–0.68 eV, depending on the specific doping model. The introduction of Mo-4d states below the conduction band shifted the Fermi level from the top of the valence band to the bottom of the conduction band, confirming the n-type doping character of Mo in TiO2. Chalcogen doping introduced isolated electronic states from Te-5p, S-3p, and Se-4p above the valence band maximum, further reducing the bandgap. Among the examined configurations, Mo–S co-doping in Model 1 exhibited the most stable structure with the fewest impurity states, enhancing photocatalytic efficiency by reducing charge recombination. With the exception of Mo–Te co-doping, all co-doped systems demonstrated strong oxidation power under visible light, making Mo–S and Mo–Se co-doped TiO2 promising candidates for oxidation-driven photocatalysis. However, their limited reduction ability suggests they may be less suitable for water-splitting applications. The study also revealed that dopant positioning significantly influences charge transfer and optoelectronic properties: Model 1 favored localized electron density and weaker magnetization, while Model 2 exhibited delocalized charge density and stronger magnetization. These findings underscore the critical role of dopant arrangement in optimizing TiO2-based photocatalysts for solar energy applications.
Full article
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)
Open Access Review
Mathematical Optimization in Machine Learning for Computational Chemistry
by
Ana Zekić
Computation 2025, 13(7), 169; https://doi.org/10.3390/computation13070169 - 11 Jul 2025
Abstract
Machine learning (ML) is transforming computational chemistry by accelerating molecular simulations, property prediction, and inverse design. Central to this transformation is mathematical optimization, which underpins nearly every stage of model development, from training neural networks and tuning hyperparameters to navigating chemical space for molecular discovery. This review presents a structured overview of optimization techniques used in ML for computational chemistry, including gradient-based methods (e.g., SGD and Adam), probabilistic approaches (e.g., Monte Carlo sampling and Bayesian optimization), and spectral methods. We classify optimization targets into model parameter optimization, hyperparameter selection, and molecular optimization and analyze their application across supervised, unsupervised, and reinforcement learning frameworks. Additionally, we examine key challenges such as data scarcity, limited generalization, and computational cost, outlining how mathematical strategies like active learning, meta-learning, and hybrid physics-informed models can address these issues. By bridging optimization methodology with domain-specific challenges, this review highlights how tailored optimization strategies enhance the accuracy, efficiency, and scalability of ML models in computational chemistry.
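As a concrete instance of the gradient-based methods discussed in the review, the following sketch implements the standard Adam update rule on a toy quadratic loss; the loss, learning rate, and iteration count are illustrative choices, not tied to any model in the review.

```python
import numpy as np

# Minimal numpy sketch of the Adam update rule on a toy quadratic loss
# f(w) = ||w - w*||^2; hyperparameters follow the common defaults.
rng = np.random.default_rng(1)
w_star = rng.normal(size=5)
w = np.zeros(5)
m, v = np.zeros(5), np.zeros(5)
lr, beta1, beta2, eps = 1e-1, 0.9, 0.999, 1e-8

for t in range(1, 201):
    grad = 2.0 * (w - w_star)                 # gradient of the toy loss
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad**2     # second-moment estimate
    m_hat = m / (1 - beta1**t)                # bias correction
    v_hat = v / (1 - beta2**t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)  # Adam step

print("final distance to optimum:", np.linalg.norm(w - w_star))
```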
Full article
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)
Open Access Article
Simultaneous Multi-Objective and Topology Optimization: Effect of Mesh Refinement and Number of Iterations on Computational Cost
by
Daniel Miler, Matija Hoić, Rudolf Tomić, Andrej Jokić and Robert Mašović
Computation 2025, 13(7), 168; https://doi.org/10.3390/computation13070168 - 11 Jul 2025
Abstract
In this study, a multi-objective optimization procedure with embedded topology optimization was presented. The procedure simultaneously optimizes the spatial arrangement and topology of bodies in a multi-body system. The multi-objective algorithm determines the locations of supports, joints, active loads, reactions, and load magnitudes, which serve as inputs for the topology optimization of each body. The multi-objective algorithm dynamically adjusts domain size, support locations, and load magnitudes during optimization. Due to repeated topology optimization calls within the genetic algorithm, the computational cost is significant. To address this, two reduction strategies are proposed: (I) using a coarser mesh and (II) reducing the number of iterations during the initial generations. As optimization progresses, Strategy I gradually refines the mesh, while Strategy II increases the maximum allowable iteration count. The effectiveness of both strategies is evaluated against a baseline (Reference) without reductions. By the 25th generation, all approaches achieve similar hypervolume values (Reference: 2.181; I: 2.112; II: 2.133). The computation time is substantially reduced (Reference: 42,226 s; I: 16,814 s; II: 21,674 s), demonstrating that both strategies effectively accelerate optimization without compromising solution quality.
Full article
(This article belongs to the Special Issue Advanced Topology Optimization: Methods and Applications)
Open Access Article
Useful Results for the Qualitative Analysis of Generalized Hattaf Mixed Fractional Differential Equations with Applications to Medicine
by
Khalid Hattaf
Computation 2025, 13(7), 167; https://doi.org/10.3390/computation13070167 - 10 Jul 2025
Abstract
Most fractional differential equations (FDEs) that model real-world phenomena in various fields of science, industry, and engineering are complex and cannot be solved analytically. This paper mainly aims to present some useful results for studying the qualitative properties of solutions of FDEs involving the new generalized Hattaf mixed (GHM) fractional derivative, which encompasses many types of fractional operators with both singular and non-singular kernels. In addition, this study also aims to unify and generalize existing results under a broader operator. Furthermore, the obtained results are applied to some linear systems arising from medicine.
Full article
(This article belongs to the Section Computational Biology)
Open Access Article
Some Secret Sharing Based on Hyperplanes
by
Guohui Wang and Yucheng Chen
Computation 2025, 13(7), 166; https://doi.org/10.3390/computation13070166 - 10 Jul 2025
Abstract
Secret sharing schemes (SSS) are widely used in secure multi-party computation and distributed computing, and the access structure is the key to constructing them. In this paper, we propose a method for constructing access structures based on hyperplane combinatorial structures over finite fields. For a given access structure, a corresponding secret sharing scheme that can identify cheaters is given. This scheme enables the secret to be correctly restored provided the number of cheaters does not exceed the threshold, and the cheating behavior can be detected and located.
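For orientation, the sketch below shows a plain (t, n) threshold scheme over a prime field (Shamir-style sharing and Lagrange reconstruction); it is only a baseline illustration under stated assumptions and does not implement the paper's hyperplane-based access structures or cheater identification.

```python
import random

# Minimal sketch of a (t, n) threshold secret sharing scheme over a prime
# field. The paper builds access structures from hyperplane combinatorics
# and adds cheater identification; neither is reproduced here.
P = 2**127 - 1  # a Mersenne prime used as the field modulus

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0 modulo P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover the secret
```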
Full article
Open Access Article
A Segmented Linear Regression Study of Seasonal Profiles of COVID-19 Deaths in Italy: September 2021–September 2024
by
Marco Roccetti and Eugenio Maria De Rosa
Computation 2025, 13(7), 165; https://doi.org/10.3390/computation13070165 - 9 Jul 2025
Abstract
Using a segmented linear regression model, we examined the seasonal profiles of weekly COVID-19 death data in Italy over a three-year period during which the SARS-CoV-2 Omicron and post-Omicron variants were predominant (September 2021–September 2024). Comparing the slopes of the regression segments, we were able to discuss the variation in steepness of the Italian COVID-19 mortality trend, identifying the corresponding growth/decline profile for each considered season. Our findings show that, although weekly COVID-19 mortality followed a declining trend in Italy from the end of 2021 until the end of 2024, mortality rose during all winters and summers of that period. These rises were more pronounced in winters than in summers, with an average progressive increase of 55.75 and 22.90 COVID-19 deaths per week in winters and summers, respectively. COVID-19 deaths were, instead, less frequent in the intermediate periods between winters and summers, with an average decrease of 38.01 deaths per week. Our segmented regression model fitted the observed COVID-19 deaths well, as confirmed by the average determination coefficients: 0.74, 0.63 and 0.70 for winters, summers and intermediate periods, respectively. In conclusion, against a generally declining COVID-19 mortality trend in Italy over the period of interest, transient rises in mortality occurred both in winters and in summers but received little attention because they were always compensated by consistent downward drifts during the intermediate periods.
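A minimal sketch of the segmented fitting idea, one linear fit with slope and R² per season, is shown below on synthetic weekly counts; the breakpoints and numbers are invented for illustration and are not the Italian surveillance data used in the study.

```python
import numpy as np

# Illustrative sketch: fit one linear segment per season to weekly death
# counts and report its slope and determination coefficient R^2. Break
# points and data are synthetic; the study uses the observed Italian series.
rng = np.random.default_rng(42)
weeks = np.arange(52)
deaths = np.concatenate([            # a synthetic year: rise, decline, rise, decline
    500 + 55 * np.arange(13),
    1200 - 38 * np.arange(13),
    700 + 23 * np.arange(13),
    1000 - 38 * np.arange(13),
]) + rng.normal(0, 40, 52)
breakpoints = [0, 13, 26, 39, 52]    # season boundaries (week indices)

for start, end in zip(breakpoints[:-1], breakpoints[1:]):
    x, y = weeks[start:end], deaths[start:end]
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"weeks {start}-{end - 1}: slope = {slope:+.1f} deaths/week, R^2 = {r2:.2f}")
```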
Full article
Open Access Article
Lattice Boltzmann Framework for Multiphase Flows by Eulerian–Eulerian Navier–Stokes Equations
by
Matteo Maria Piredda and Pietro Asinari
Computation 2025, 13(7), 164; https://doi.org/10.3390/computation13070164 - 9 Jul 2025
Abstract
Although the lattice Boltzmann method (LBM) is relatively straightforward, it demands a well-crafted framework to handle the complex partial differential equations involved in multiphase flow simulations. For the first time to our knowledge, this work proposes a novel LBM framework to solve Eulerian–Eulerian multiphase flow equations without any finite difference correction, including very-large-density ratios and a realistic relation for the drag coefficient. The proposed methodology and all reported LBM formulas can be applied in any dimension. This opens a promising avenue for simulating multiphase flows on large High Performance Computing (HPC) facilities and on novel parallel hardware. The LBM framework consists of six coupled LBM schemes—running on the same lattice—ensuring an efficient implementation in large codes with minimum effort. The preliminary numerical results agree excellently with the reference numerical solution obtained by a traditional finite difference solver.
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Numerical Simulation of Cytokinesis Hydrodynamics
by
Andriy A. Avramenko, Igor V. Shevchuk, Andrii I. Tyrinov and Iryna V. Dzevulska
Computation 2025, 13(7), 163; https://doi.org/10.3390/computation13070163 - 8 Jul 2025
Abstract
A hydrodynamic homogeneous model has been developed for the motion of mutually impenetrable viscoelastic non-Newtonian fluids, taking into account surface tension forces. Based on this model, numerical simulations of cytokinesis hydrodynamics were performed. The cytoplasm is treated as a non-Newtonian viscoelastic fluid. The model allows for the calculation of the formation and rupture of the intercellular bridge. Analytical results shed light on the influence of the viscoelastic fluid’s relaxation time on cytokinesis dynamics. A comparison of numerical simulation results and experimental data showed satisfactory agreement.
Full article
(This article belongs to the Section Computational Biology)
Open Access Article
Robust Trajectory Tracking Fault-Tolerant Control for Quadrotor UAVs Based on Adaptive Sliding Mode and Fault Estimation
by
Yukai Wu, Guobi Ling and Yaoke Shi
Computation 2025, 13(7), 162; https://doi.org/10.3390/computation13070162 - 7 Jul 2025
Abstract
This paper presents a composite disturbance-tolerant control framework for quadrotor unmanned aerial vehicles (UAVs). By constructing an enhanced dynamic model that incorporates parameter uncertainties, external disturbances, and actuator faults, and considering the inherent underactuated and highly coupled characteristics of the UAV, a novel robust adaptive sliding mode controller (RASMC) is designed. The controller adopts a hierarchical adaptive mechanism and uses a dual-loop composite adaptive law to achieve online estimation of system parameters and fault information. Using the Lyapunov method, the asymptotic stability of the closed-loop system is rigorously proven. Simulation results demonstrate that, under the combined effects of external disturbances and actuator faults, the RASMC effectively suppresses position errors (<0.05 m) and attitude errors (<0.02 radians), significantly outperforming traditional ADRC and LQR control methods. Further analysis shows that the proposed adaptive law enables precise online estimation of aerodynamic coefficients and disturbance boundaries during actual flights, with estimation errors kept within ±10%. Moreover, compared to ADRC and LQR, RASMC reduces the settling time by more than 50% and the tracking overshoot by over 70%, while using the ( ) approximation to eliminate chattering. Prototype experiments confirm that the method achieves centimeter-level trajectory tracking under real uncertainties, demonstrating the superior performance and robustness of the control framework in complex flight missions.
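To illustrate the general mechanism of a sliding mode controller with smoothed switching, the sketch below simulates a toy first-order plant under a bounded disturbance; the plant, gains, and tanh boundary layer are illustrative assumptions and not the quadrotor RASMC of the paper.

```python
import numpy as np

# Generic illustration of a first-order sliding mode controller with a
# smooth switching function to reduce chattering. The plant, gains and
# disturbance are toy values, not the quadrotor model of the paper.
dt, T = 0.001, 5.0
x, x_ref = 2.0, 0.0           # state and constant reference
k, lam, phi = 3.0, 2.0, 0.05  # switching gain, surface slope, boundary-layer width

history = []
for step in range(int(T / dt)):
    t = step * dt
    e = x - x_ref
    s = e                                 # sliding surface for a first-order plant
    d = 0.5 * np.sin(5 * t)               # bounded unknown disturbance
    u = -lam * e - k * np.tanh(s / phi)   # equivalent term + smoothed switching term
    x += (u + d) * dt                     # plant: dx/dt = u + d
    history.append(abs(e))

print("final |error|:", history[-1])
```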
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
POTMEC: A Novel Power Optimization Technique for Mobile Edge Computing Networks
by
Tamilarasan Ananth Kumar, Rajendirane Rajmohan, Sunday Adeola Ajagbe, Oluwatobi Akinlade and Matthew Olusegun Adigun
Computation 2025, 13(7), 161; https://doi.org/10.3390/computation13070161 - 7 Jul 2025
Abstract
The rapid growth of ultra-dense mobile edge computing (UDEC) in 5G IoT networks has intensified energy inefficiencies and latency bottlenecks, exacerbated by dynamic channel conditions and imperfect CSI in real-world deployments. This paper introduces POTMEC, a power optimization framework that combines a channel-aware adaptive power allocator using real-time SNR measurements, a MATLAB-trained RL model for joint offloading decisions, and a decaying step-size algorithm that guarantees convergence. Computational offloading is a productive technique for mitigating mobile battery life issues by processing parts of a mobile application in the cloud, and we investigate how multi-access edge computing can reduce latency and energy usage. The experiments demonstrate that the proposed model reduces transmission energy consumption by 27.5% compared to baseline methods while keeping latency below 15 ms in ultra-dense scenarios. The simulation results confirm a 92% accuracy in near-optimal offloading decisions under dynamic channel conditions. This work advances sustainable edge computing by enabling energy-efficient IoT deployments in 5G ultra-dense networks without compromising QoS.
Full article
Open Access Article
Successful Management of Public Health Projects Driven by AI in a BANI Environment
by
Sergiy Bushuyev, Natalia Bushuyeva, Ivan Nekrasov and Igor Chumachenko
Computation 2025, 13(7), 160; https://doi.org/10.3390/computation13070160 - 4 Jul 2025
Abstract
The management of public health projects in a BANI (brittle, anxious, non-linear, incomprehensible) environment, exemplified by the ongoing war in Ukraine, presents unprecedented challenges due to fragile systems, heightened uncertainty, and complex socio-political dynamics. This study proposes an AI-driven framework to enhance the resilience and effectiveness of public health interventions under such conditions. By integrating a coupled SEIR–Infodemic–Panicdemic Model with war-specific factors, we simulate the interplay of infectious disease spread, misinformation dissemination, and panic dynamics over 1500 days in a Ukrainian city (Kharkiv). The model incorporates time-varying parameters to account for population displacement, healthcare disruptions, and periodic war events, reflecting the evolving conflict context. Sensitivity and risk–opportunity analyses reveal that disease transmission, misinformation, and infrastructure damage significantly exacerbate epidemic peaks, while AI-enabled interventions, such as fact-checking, mental health support, and infrastructure recovery, offer substantial mitigation potential. Qualitative assessments identify technical, organisational, ethical, regulatory, and military risks, alongside opportunities for predictive analytics, automation, and equitable healthcare access. Quantitative simulations demonstrate that risks, like increased displacement, can amplify infectious peaks by up to 28.3%, whereas opportunities, like enhanced fact-checking, can reduce misinformation by 18.2%. These findings provide a roadmap for leveraging AI to navigate BANI environments, offering actionable insights for public health practitioners in Ukraine and other crisis settings. The study underscores AI’s transformative role in fostering adaptive, data-driven strategies to achieve sustainable health outcomes amidst volatility and uncertainty.
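For context, the sketch below integrates only the SEIR core on which such coupled models build; the parameters are illustrative and the infodemic, panicdemic, and time-varying war factors of the study are deliberately omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal SEIR core of the kind the coupled SEIR-Infodemic-Panicdemic model
# builds on; parameters are illustrative and the misinformation/panic layers
# and time-varying war factors are omitted here.
N = 1_400_000                                   # rough population of a large city
beta, sigma, gamma = 0.35, 1 / 5.0, 1 / 10.0    # transmission, incubation, recovery rates

def seir(t, y):
    s, e, i, r = y
    new_inf = beta * s * i / N
    return [-new_inf, new_inf - sigma * e, sigma * e - gamma * i, gamma * i]

y0 = [N - 100, 0, 100, 0]                       # start with 100 infectious individuals
sol = solve_ivp(seir, (0, 1500), y0, t_eval=np.linspace(0, 1500, 1501))
peak_day = sol.t[np.argmax(sol.y[2])]
print(f"epidemic peak around day {peak_day:.0f} with {sol.y[2].max():.0f} infectious")
```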
Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
Open Access Article
An Application of Deep Learning Models for the Detection of Cocoa Pods at Different Ripening Stages: An Approach with Faster R-CNN and Mask R-CNN
by
Juan Felipe Restrepo-Arias, María José Montoya-Castaño, María Fernanda Moreno-De La Espriella and John W. Branch-Bedoya
Computation 2025, 13(7), 159; https://doi.org/10.3390/computation13070159 - 2 Jul 2025
Abstract
The accurate classification of cocoa pod ripeness is critical for optimizing harvest timing, improving post-harvest processing, and ensuring consistent quality in chocolate production. Traditional ripeness assessment methods are often subjective, labor-intensive, or destructive, highlighting the need for automated, non-invasive solutions. This study evaluates the performance of R-CNN-based deep learning models—Faster R-CNN and Mask R-CNN—for the detection and segmentation of cocoa pods across four ripening stages (0–2 months, 2–4 months, 4–6 months, and >6 months) using the RipSetCocoaCNCH12 dataset, which is publicly accessible, comprising 4116 labeled images collected under real-world field conditions, in the context of precision agriculture. Initial experiments using pretrained weights and standard configurations on a custom COCO-format dataset yielded promising baseline results. Faster R-CNN achieved a mean average precision (mAP) of 64.15%, while Mask R-CNN reached 60.81%, with the highest per-class precision in mature pods (C4) but weaker detection in early stages (C1). To improve model robustness, the dataset was subsequently augmented and balanced, followed by targeted hyperparameter optimization for both architectures. The refined models were then benchmarked against state-of-the-art YOLOv8 networks (YOLOv8x and YOLOv8l-seg). Results showed that YOLOv8x achieved the highest mAP of 86.36%, outperforming YOLOv8l-seg (83.85%), Mask R-CNN (73.20%), and Faster R-CNN (67.75%) in overall detection accuracy. However, the R-CNN models offered valuable instance-level segmentation insights, particularly in complex backgrounds. Furthermore, a qualitative evaluation using confidence heatmaps and error analysis revealed that R-CNN architectures occasionally missed small or partially occluded pods. These findings highlight the complementary strengths of region-based and real-time detectors in precision agriculture and emphasize the need for class-specific enhancements and interpretability tools in real-world deployments.
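As a minimal reference point for the detection pipeline, the sketch below runs the generic COCO-pretrained Faster R-CNN shipped with torchvision (assuming torchvision >= 0.13 for the `weights` argument); the image path is hypothetical and the model is not the one fine-tuned on RipSetCocoaCNCH12.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Sketch of Faster R-CNN inference with torchvision. This is the generic
# COCO-pretrained detector, not the model fine-tuned on cocoa pod classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("cocoa_pod.jpg").convert("RGB")   # hypothetical field image
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]        # dict with boxes, labels, scores

keep = prediction["scores"] > 0.5                    # confidence threshold
print(prediction["boxes"][keep], prediction["labels"][keep])
```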
Full article
Open Access Article
Enhancing DDoS Attacks Mitigation Using Machine Learning and Blockchain-Based Mobile Edge Computing in IoT
by
Mahmoud Chaira, Abdelkader Belhenniche and Roman Chertovskih
Computation 2025, 13(7), 158; https://doi.org/10.3390/computation13070158 - 1 Jul 2025
Abstract
The widespread adoption of Internet of Things (IoT) devices has been accompanied by a remarkable rise in both the frequency and intensity of Distributed Denial of Service (DDoS) attacks, which aim to overwhelm and disrupt the availability of networked systems and connected infrastructures. In this paper, we present a novel approach to DDoS attack detection and mitigation that integrates state-of-the-art machine learning techniques with Blockchain-based Mobile Edge Computing (MEC) in IoT environments. Our solution leverages the decentralized and tamper-resistant nature of Blockchain technology to enable secure and efficient data collection and processing at the network edge. We evaluate multiple machine learning models, including K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Transformer architectures, and LightGBM, using the CICDDoS2019 dataset. Our results demonstrate that Transformer models achieve a superior detection accuracy of 99.78%, while RF follows closely with 99.62%, and LightGBM offers optimal efficiency for real-time detection. This integrated approach significantly enhances detection accuracy and mitigation effectiveness compared to existing methods, providing a robust and adaptive mechanism for identifying and mitigating malicious traffic patterns in IoT environments.
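As a simplified stand-in for the flow-classification step, the sketch below trains a Random Forest on synthetic flow features with scikit-learn; the feature matrix and labels are generated artificially, so loading and preprocessing CICDDoS2019 is left out.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Sketch of the flow-classification step with a Random Forest. Real use
# would load CICDDoS2019 flow features; synthetic data stands in here.
rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(5000, 20))
X_attack = rng.normal(1.5, 1.0, size=(5000, 20))      # attack flows shifted in feature space
X = np.vstack([X_benign, X_attack])
y = np.array([0] * 5000 + [1] * 5000)                 # 0 = benign, 1 = DDoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```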
Full article
(This article belongs to the Section Computational Engineering)
Open Access Article
Numerical Modeling of Electromagnetic Modes in a Planar Stratified Medium with a Graphene Interface
by
Eugen Smolkin
Computation 2025, 13(7), 157; https://doi.org/10.3390/computation13070157 - 1 Jul 2025
Abstract
Graphene interfaces in layered dielectrics can support unique electromagnetic modes, but analyzing these modes requires robust computational techniques. This work presents a numerical method for computing TE-polarized eigenmodes in a planar stratified dielectric slab with an infinitesimally thin graphene sheet at its interface. The governing boundary-value problem is reformulated as coupled initial-value problems and solved via a customized shooting method, enabling accurate calculation of complex propagation constants and field profiles despite the discontinuity at the graphene layer. We demonstrate that the graphene significantly alters the modal spectrum, introducing complex leaky and surface waves with attenuation due to graphene’s conductivity. Numerical results illustrate how the layers’ inhomogeneity and the graphene’s surface conductivity influence mode confinement and loss. These findings confirm the robustness of the proposed computational approach and provide insights relevant to the design and analysis of graphene-based waveguiding devices.
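To show the numerical idea in its simplest form, the sketch below applies a shooting method to the textbook eigenvalue problem u'' + λu = 0 with u(0) = u(1) = 0, whose eigenvalues are (nπ)²; the stratified coefficients, graphene jump condition, and complex propagation constants of the paper are not included.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Generic shooting-method sketch on a textbook eigenvalue problem. The
# paper's TE-mode problem adds stratified coefficients and a graphene jump
# condition, but the numerical idea (integrate, then enforce the far
# boundary condition with a root finder) is the same.
def shoot(lam):
    """Integrate from x = 0 with u(0) = 0, u'(0) = 1 and return u(1)."""
    sol = solve_ivp(lambda x, y: [y[1], -lam * y[0]], (0.0, 1.0), [0.0, 1.0],
                    rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

# Bracket the first eigenvalue and refine it with a root finder.
lam1 = brentq(shoot, 5.0, 15.0)
print(lam1, np.pi**2)   # the two values should agree closely
```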
Full article
(This article belongs to the Section Computational Engineering)
Topics
Topic in
Applied Sciences, Computation, Entropy, J. Imaging, Optics
Color Image Processing: Models and Methods (CIP: MM)
Topic Editors: Giuliana Ramella, Isabella Torcicollo
Deadline: 30 July 2025
Topic in
Algorithms, Computation, Mathematics, Molecules, Symmetry, Nanomaterials, Materials
Advances in Computational Materials Sciences
Topic Editors: Cuiying Jian, Aleksander Czekanski
Deadline: 30 September 2025
Topic in
AppliedMath, Axioms, Computation, Mathematics, Symmetry
A Real-World Application of Chaos Theory
Topic Editors: Adil Jhangeer, Mudassar Imran
Deadline: 28 February 2026
Topic in
Axioms, Computation, Fractal Fract, Mathematics, Symmetry
Fractional Calculus: Theory and Applications, 2nd Edition
Topic Editors: António Lopes, Liping Chen, Sergio Adriani David, Alireza Alfi
Deadline: 30 May 2026
Special Issues
Special Issue in
Computation
Application of Biomechanical Modeling and Simulation
Guest Editor: Luis Pastor Sánchez-Fernández
Deadline: 31 July 2025
Special Issue in
Computation
Computational Approaches for Manufacturing
Guest Editor: Lichao Fang
Deadline: 30 September 2025
Special Issue in
Computation
Applications of Intelligent Computing and Modeling in Construction Engineering
Guest Editors: Jerzy Rosłon, Michał Podolski, Bartłomiej Sroka
Deadline: 30 September 2025
Special Issue in
Computation
Feature Papers in Computational Chemistry
Guest Editors: Alexander Novikov, Felipe Fantuzzi
Deadline: 30 September 2025