Computation, Volume 13, Issue 8 (August 2025) – 27 articles

Cover Story: This study presents a novel computational approach for analyzing the internal flow behavior of TSC using SolidWorks Flow Simulation. Unlike specialized CFD tools, SolidWorks provides an accessible CAD-integrated environment that enables both design and detailed transient flow analysis. A 5/6 lobe model was developed to simulate the working phases, capturing fluid trajectories, pressure and temperature variations, leakage, and torque fluctuations, all visually represented with high clarity. Results reveal critical insights into leakage flows, reverse flow phenomena, and thermal effects, aligning well with studied literature. By combining accuracy with simplicity, this approach not only aids TSC design and optimization but also offers a powerful tool for education and engineering practice.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
84 pages, 1806 KB  
Article
A Method for the Solution of Certain Non-Linear Problems of Combined Seagoing Main Engine Performance and Fixed-Pitch Propeller Hydrodynamics with Imperative Assignment Statements and Streamlined Computational Sequences
by Eleutherios Christos Andritsakis
Computation 2025, 13(8), 202; https://doi.org/10.3390/computation13080202 - 21 Aug 2025
Viewed by 502
Abstract
Seagoing marine propulsion analysis in terms of main engine performance and fixed-pitch propeller hydrodynamics is an engineering problem that has not been exactly defined to date. This study utilizes an original and comprehensive mathematical approach—involving the approximate representation of one function by another—to define this problem in mathematical terms and solve it. This is achieved by imperatively applying an original and sophisticated hybrid combination of an existing, formidable and ingenious, mathematical methodology with different original comprehensive functional systems. These original functional systems approximately represent the operations of vessels under seagoing conditions, including the thermo-fluid and frictional processes of vessels’ main engines in terms of fuel oil consumption, as well as the hydrodynamic performance of the respective vessels in terms of the shaft propulsion power and the rotational speed of the fixed-pitch propellers driven by these engines. Based on the least-squares criterion, this original and sophisticated hybrid combination systematically attains remarkably close approximate representations under seagoing conditions. Apart from this novel exact definition in mathematical terms and the significance of the above original representations, this combination is also applicable for the approximation of the baselines demarcating the standard engineering context representing the ideal reference (sea trials) conditions from the seagoing conditions. Full article
Show Figures

Figure 1

36 pages, 1871 KB  
Article
Sentiment-Driven Statistical Modelling of Stock Returns over Weekends
by Pablo Kowalski Kutz and Roman N. Makarov
Computation 2025, 13(8), 201; https://doi.org/10.3390/computation13080201 - 21 Aug 2025
Viewed by 801
Abstract
We propose a two-stage statistical learning framework to investigate how financial news headlines posted over weekends affect stock returns. In the first stage, Natural Language Processing (NLP) techniques are used to extract sentiment features from news headlines, including FinBERT sentiment scores and Impact Probabilities derived from Logistic Regression models (Binomial, Multinomial, and Bayesian). These Impact Probabilities estimate the likelihood that a given headline influences the stock’s opening price on the following trading day. In the second stage, we predict over-weekend log returns using various sets of covariates: sentiment-based features, traditional financial indicators (e.g., trading volumes, past returns), and headline counts. We evaluate multiple statistical learning algorithms—including Linear Regression, Polynomial Regression, Random Forests, and Support Vector Machines—using cross-validation and two performance metrics. Our framework is demonstrated using financial news from MarketWatch and stock data for Apple Inc. (AAPL) from 2014 to 2023. The results show that incorporating sentiment features, particularly Impact Probabilities, improves predictive accuracy. This approach offers a robust way to quantify and model the influence of qualitative financial information on stock performance, especially in contexts where markets are closed but news continues to develop. Full article
(This article belongs to the Section Computational Social Science)
Show Figures

Figure 1
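As a rough illustration of the two-stage idea described in the abstract, the sketch below uses a tiny hand-made lexicon as a stand-in for FinBERT (stage one) and ordinary least squares as the return model (stage two). All headlines, returns, and lexicon words are hypothetical toy data, not the MarketWatch/AAPL dataset used in the paper.

```python
# Stage 1: score weekend headlines; Stage 2: regress log returns on the score.
POS = {"beats", "record", "growth", "upgrade"}
NEG = {"lawsuit", "recall", "miss", "downgrade"}

def headline_sentiment(headline):
    """Stage 1: crude lexicon score in [-1, 1] (toy stand-in for FinBERT)."""
    words = headline.lower().split()
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return max(-1.0, min(1.0, score / max(len(words), 1) * 5))

def fit_ols(xs, ys):
    """Stage 2: simple linear regression of over-weekend returns on sentiment."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical (headline, over-weekend log return) pairs:
weekend = [("Apple beats revenue record", 0.012),
           ("Apple faces lawsuit over recall", -0.009),
           ("Analysts upgrade Apple on growth", 0.008),
           ("Apple to miss shipment targets", -0.005)]
xs = [headline_sentiment(h) for h, _ in weekend]
ys = [r for _, r in weekend]
slope, intercept = fit_ols(xs, ys)
print(slope > 0)  # positive sentiment associates with positive returns here
```

In the paper this pipeline is far richer (Impact Probabilities, multiple learners, cross-validation); the sketch only shows how the two stages chain together.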

9 pages, 250 KB  
Communication
Kirchhoff’s Current Law: A Derivation from Maxwell’s Equations
by Robert S. Eisenberg
Computation 2025, 13(8), 200; https://doi.org/10.3390/computation13080200 - 19 Aug 2025
Viewed by 462
Abstract
Kirchhoff’s current law was originally derived for systems such as telegraphs that switch in 0.1 s. It is used widely today to design circuits in computers that switch in ~0.1 nanoseconds, one billion times faster. Current behaves differently in one second and one-tenth of a nanosecond. A derivation of a current law from the fundamental equations of electrodynamics—the Maxwell equations—is needed. Here is a derivation in one line: div(curl B/μ₀) = 0 = div[J + (ε_r − 1)ε₀ ∂E/∂t + ε₀ ∂E/∂t] = div J_total. Maxwell’s ‘true’ current is defined as J_total. The universal displacement current found everywhere is ε₀ ∂E/∂t. The conduction current J is carried by any charge with mass, no matter how small, brief, or transient, driven by any source, e.g., diffusion. The second term (ε_r − 1)ε₀ ∂E/∂t is the usual approximation to the polarization currents of ideal dielectrics. The dielectric constant ε_r is a dimensionless real number. Real dielectrics can be very complicated. They require a complete theory of polarization to replace the (ε_r − 1)ε₀ ∂E/∂t term. The Maxwell current law div J_total = 0 defines the solenoidal field of total current that has zero divergence, typically characterized in two dimensions by streamlines that end where they begin, flowing in loops that form circuits. Note that the conduction current J is not solenoidal. Conduction current J accumulates significantly in many chemical and biological applications. Total current J_total does not accumulate in any time interval or in any circumstance where the Maxwell equations are valid. J_total does not accumulate during the transitions of electrons from orbital to orbital within a chemical reaction, for example. J_total should be included in chemical reaction kinetics. The classical Kirchhoff current law div J = 0 is an approximation used to analyze idealized topological circuits found in textbooks. The classical Kirchhoff current law is shown here by mathematics to be valid only when J ≫ ε₀ ∂E/∂t, typically in the steady state.
The Kirchhoff current law is often extended to much shorter times to help topological circuits approximate some of the displacement currents not found in the classical Kirchhoff current law. The original circuit is modified. Circuit elements—invented or redefined—are added to the topological circuit for that purpose. Full article
(This article belongs to the Section Computational Engineering)
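The one-line derivation rests on the vector identity div(curl B) = 0. A quick numerical sanity check of that identity, using nested central differences on an arbitrary hand-picked smooth field (purely an illustration, not code from the paper):

```python
from math import sin, cos, exp

def B(x, y, z):
    """An arbitrary smooth vector field (hypothetical test case)."""
    return (sin(y) * z, cos(z) * x ** 2, exp(0.1 * x) * y)

def partial(f, i, p, h=1e-4):
    """Central-difference partial derivative of f along axis i at point p."""
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (f(*q1) - f(*q2)) / (2 * h)

def curl(p):
    Bx = lambda x, y, z: B(x, y, z)[0]
    By = lambda x, y, z: B(x, y, z)[1]
    Bz = lambda x, y, z: B(x, y, z)[2]
    return (partial(Bz, 1, p) - partial(By, 2, p),
            partial(Bx, 2, p) - partial(Bz, 0, p),
            partial(By, 0, p) - partial(Bx, 1, p))

def div_curl(p, h=1e-3):
    """Divergence of the curl, by differencing the curl components."""
    total = 0.0
    for i in range(3):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        total += (curl(q1)[i] - curl(q2)[i]) / (2 * h)
    return total

print(abs(div_curl((0.7, -1.2, 0.4))))  # ~0 up to discretization error
```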
17 pages, 2931 KB  
Article
Comparative Analysis of Wavelet Bases for Solving First-Kind Fredholm Integral Equations
by Nurlan Temirbekov, Dinara Tamabay, Aigerim Tleulesova and Tomiris Mukhanova
Computation 2025, 13(8), 199; https://doi.org/10.3390/computation13080199 - 18 Aug 2025
Viewed by 281
Abstract
This research presents a comparative analysis of numerical methods for solving first-kind Fredholm integral equations using the Bubnov–Galerkin method with various wavelet and orthogonal polynomial bases. The bases considered are constructed from Legendre, Laguerre, Chebyshev, and Hermite wavelets, as well as Alpert multiwavelets and CAS wavelets. The effectiveness of these bases is evaluated by measuring errors relative to known analytical solutions at different discretization levels. Results show that global orthogonal systems—particularly the Chebyshev and Hermite—achieve the lowest error norms for smooth target functions. CAS wavelets, due to their localized and oscillatory nature, produce higher errors, though their accuracy improves with finer discretization. The analysis has been extended to incorporate perturbations in the form of additive noise, enabling a rigorous assessment of the method’s stability with respect to different wavelet bases. This approach provides insight into the robustness of the numerical scheme under data uncertainty and highlights the sensitivity of each basis to noise-induced errors. Full article
Show Figures

Figure 1
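The Bubnov–Galerkin discretization underlying the comparison can be shown on a tiny first-kind Fredholm problem ∫₀¹ K(x,t) f(t) dt = g(x). The sketch below uses a monomial basis {1, x} in place of the wavelet bases studied in the paper, with a kernel and right-hand side chosen (as illustrative assumptions) so the exact solution is f(t) = 1:

```python
from math import sqrt

NODES = [0.5 - 1 / (2 * sqrt(3)), 0.5 + 1 / (2 * sqrt(3))]  # 2-pt Gauss on [0,1]
W = [0.5, 0.5]

def quad2(f):
    """Double integral over [0,1]², exact for polynomials of degree ≤ 3 per variable."""
    return sum(wx * wt * f(x, t) for x, wx in zip(NODES, W) for t, wt in zip(NODES, W))

def quad1(f):
    return sum(w * f(x) for x, w in zip(NODES, W))

K = lambda x, t: x + t        # illustrative kernel
g = lambda x: x + 0.5         # RHS whose exact solution is f(t) = 1
basis = [lambda x: 1.0, lambda x: x]

# Galerkin system: A[i][j] = ∫∫ φ_i(x) K(x,t) φ_j(t) dt dx, b[i] = ∫ φ_i(x) g(x) dx
A = [[quad2(lambda x, t, i=i, j=j: basis[i](x) * K(x, t) * basis[j](t))
      for j in range(2)] for i in range(2)]
b = [quad1(lambda x, i=i: basis[i](x) * g(x)) for i in range(2)]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
c0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det  # Cramer's rule for the 2x2 system
c1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
print(round(c0, 6), round(c1, 6))  # coefficients of f ≈ c0·1 + c1·x
```

Swapping the monomials for a wavelet family changes only the `basis` list; the assembly and solve steps are the same, which is what makes the basis comparison in the paper clean.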

47 pages, 2068 KB  
Review
Factors, Prediction, Explainability, and Simulating University Dropout Through Machine Learning: A Systematic Review, 2012–2024
by Mauricio Quimiz-Moreira, Rosa Delgadillo, Jorge Parraga-Alava, Nelson Maculan and David Mauricio
Computation 2025, 13(8), 198; https://doi.org/10.3390/computation13080198 - 12 Aug 2025
Viewed by 1364
Abstract
College dropout represents a significant challenge for universities, and despite advances in machine learning technologies, predicting dropout remains a complex task. This literature review, conducted using the PRISMA methodology, investigates the factors that influence college dropout, examines the models used to predict it, and highlights the most significant advances in explainability and simulation over the period 2012 to 2024. The review identified 520 factors in five categories (demographic, socioeconomic, institutional, personal, and academic), with the most studied factors in each category being, respectively, gender, scholarships, infrastructure, student identification, and grades. It also identified 83 machine learning models, with the most studied being the decision tree, logistic regression, and random forest. In addition, eight explanatory models were identified, with SHAP and LIME being the most widely used. Finally, no simulation models related to university dropout were identified. This study groups factors related to university dropout into key models for prediction and analyzes the methods used to explain the causal factors that influence university student dropout. Full article
Show Figures

Figure 1

21 pages, 1640 KB  
Article
Cross-View Heterogeneous Graph Contrastive Learning Method for Healthy Food Recommendation
by Huacheng Zhao, Hao Chen, Jianxin Wang and Yeru Wang
Computation 2025, 13(8), 197; https://doi.org/10.3390/computation13080197 - 12 Aug 2025
Viewed by 457
Abstract
Exploring food’s rich composition and nutritional information is crucial for understanding and improving people’s dietary preferences and health habits. However, most existing food recommendation models tend to overlook the impact of food choices on health. Moreover, due to the high sparsity of food-related data, most existing methods fail to effectively leverage the multi-dimensional information of food, resulting in poorly learned node embeddings. Considering these factors, we propose a cross-view contrastive heterogeneous-graph learning method for healthy food recommendation (CGHF). Specifically, CGHF constructs feature relation graphs and heterogeneous information connection graphs by integrating user–food interaction data and multi-dimensional information about food. We then design a cross-view contrastive learning task to learn node embeddings from multiple views collaboratively. Additionally, we introduce a meta-path-based local aggregation mechanism to aggregate node information in local subgraphs, thus allowing for the efficient capturing of users’ dietary preferences. Experimental comparisons with various advanced models demonstrate the effectiveness of the proposed model. Full article
Show Figures

Figure 1
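The cross-view contrastive task can be pictured with a small InfoNCE-style objective, where a node's embeddings from two views form a positive pair and the other nodes in the batch serve as negatives. The embeddings below are hand-made toy vectors, not output of the CGHF model:

```python
from math import exp, log, sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def info_nce(view_a, view_b, tau=0.5):
    """Average -log softmax probability assigned to each positive pair."""
    loss = 0.0
    for i, anchor in enumerate(view_a):
        sims = [exp(cosine(anchor, cand) / tau) for cand in view_b]
        loss += -log(sims[i] / sum(sims))
    return loss / len(view_a)

feat_view = [[1.0, 0.1], [0.1, 1.0], [-1.0, 0.2]]   # e.g., feature-relation view
graph_view = [[0.9, 0.0], [0.0, 1.1], [-0.8, 0.1]]  # e.g., heterogeneous-graph view
aligned = info_nce(feat_view, graph_view)
shuffled = info_nce(feat_view, graph_view[::-1])
print(aligned < shuffled)  # aligned cross-view pairs give the lower loss
```

Minimizing such a loss pulls the two views of the same node together, which is the mechanism the abstract credits for better node embeddings under sparse data.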

19 pages, 302 KB  
Article
Beyond Traditional Classifiers: Evaluating Large Language Models for Robust Hate Speech Detection
by Basel Barakat and Sardar Jaf
Computation 2025, 13(8), 196; https://doi.org/10.3390/computation13080196 - 10 Aug 2025
Viewed by 877
Abstract
Hate speech detection remains a significant challenge due to the nuanced and context-dependent nature of hateful language. Traditional classifiers, trained on specialized corpora, often struggle to accurately identify subtle or manipulated hate speech. This paper explores the potential of utilizing large language models (LLMs) to address these limitations. By leveraging their extensive training on diverse texts, LLMs demonstrate a superior ability to understand context, which is crucial for effective hate speech detection. We conduct a comprehensive evaluation of various LLMs on both binary and multi-label hate speech datasets to assess their performance. Our findings aim to clarify the extent to which LLMs can enhance hate speech classification accuracy, particularly in complex and challenging cases. Full article
Show Figures

Graphical abstract
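Comparisons of this kind are typically reported with per-class and macro-averaged F1, which penalizes classifiers that ignore the minority (hateful) class. A minimal sketch on hypothetical binary labels, not the paper's datasets:

```python
def f1(labels, preds, positive):
    """F1 score treating `positive` as the positive class."""
    tp = sum(l == positive and p == positive for l, p in zip(labels, preds))
    fp = sum(l != positive and p == positive for l, p in zip(labels, preds))
    fn = sum(l == positive and p != positive for l, p in zip(labels, preds))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(labels, preds):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(labels))
    return sum(f1(labels, preds, c) for c in classes) / len(classes)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = hateful, 0 = not (toy labels)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(macro_f1(y_true, y_pred))  # 0.75
```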

14 pages, 1127 KB  
Article
A Quantitative Structure–Activity Relationship Study of the Anabolic Activity of Ecdysteroids
by Durbek Usmanov, Ugiloy Yusupova, Vladimir Syrov, Gerardo M. Casanola-Martin and Bakhtiyor Rasulev
Computation 2025, 13(8), 195; https://doi.org/10.3390/computation13080195 - 10 Aug 2025
Viewed by 605
Abstract
Phytoecdysteroids represent a class of naturally occurring substances known for their diverse biological functions, particularly their strong ability to stimulate protein anabolism. In this study, a computational machine learning-driven quantitative structure–activity relationship (QSAR) approach was applied to analyze the anabolic potential of 23 ecdysteroid compounds. The ML-based QSAR modeling was conducted using a combined approach that integrates Genetic Algorithm-based feature selection with Multiple Linear Regression Analysis (GA-MLRA). Additionally, structure optimization by a semi-empirical quantum-chemical method was employed to determine the most stable molecular conformations and to calculate an additional set of structural and electronic descriptors. The most effective QSAR models for describing the anabolic activity of the investigated ecdysteroids were developed and validated. The proposed best model demonstrates both strong statistical relevance and high predictive performance. The predictive performance of the resulting models was confirmed by an external test set based on R²_test values, which were within the range of 0.89 to 0.97. Full article
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)
Show Figures

Figure 1
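The final step of such a QSAR workflow, fitting a linear model on training compounds and reporting R² on an external test set, can be sketched as follows. A single descriptor stands in for the GA-selected feature subset, and all descriptor values and activities are hypothetical:

```python
def fit(xs, ys):
    """Least-squares line through the training compounds."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return b, my - b * mx

def r2(xs, ys, slope, intercept):
    """Coefficient of determination on a held-out set."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical descriptor (e.g., a computed electronic parameter) vs. activity:
train_x, train_y = [0.1, 0.4, 0.7, 1.0, 1.3], [2.1, 3.0, 3.8, 5.1, 5.9]
test_x, test_y = [0.2, 0.8, 1.2], [2.4, 4.3, 5.6]
slope, intercept = fit(train_x, train_y)
print(r2(test_x, test_y, slope, intercept))  # high R² on the external test set
```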

19 pages, 821 KB  
Article
Multimodal Multisource Neural Machine Translation: Building Resources for Image Caption Translation from European Languages into Arabic
by Roweida Mohammed, Inad Aljarrah, Mahmoud Al-Ayyoub and Ali Fadel
Computation 2025, 13(8), 194; https://doi.org/10.3390/computation13080194 - 8 Aug 2025
Viewed by 594
Abstract
Neural machine translation (NMT) models combining textual and visual inputs generate more accurate translations compared with unimodal models. Moreover, translation models with an under-resourced target language benefit from multisource inputs (source sentences are provided in different languages). Building MultiModal MultiSource NMT (M3S-NMT) systems requires significant efforts to curate datasets suitable for such a multifaceted task. This work uses image caption translation as an example of multimodal translation and presents a novel public dataset for translating captions from multiple European languages (viz., English, German, French, and Czech) into the distant and under-resourced Arabic language. Moreover, it presents multitask learning models trained and tested on this dataset to serve as solid baselines to help further research in this area. These models involve two parts: one for learning the visual representations of the input images, and the other for translating the textual input based on these representations. The translations are produced from a framework of attention-based encoder–decoder architectures. The visual features are learned from a pretrained convolutional neural network (CNN). These features are then integrated with textual features learned through the very basic yet well-known recurrent neural networks (RNNs) with GloVe or BERT word embeddings. Despite the challenges associated with the task at hand, the results of these systems are very promising, reaching 34.57 and 42.52 METEOR scores. Full article
(This article belongs to the Section Computational Social Science)
Show Figures

Figure 1

17 pages, 550 KB  
Article
Modeling Strategies for Conducting Wave Surveillance Using a Swarm of Security Drones
by Oleg Fedorovich, Mikhail Lukhanin, Dmytro Krytskyi and Oleksandr Prokhorov
Computation 2025, 13(8), 193; https://doi.org/10.3390/computation13080193 - 8 Aug 2025
Viewed by 444
Abstract
This work formulates and solves the actual problem of studying the logistics of unmanned aerial vehicle (UAV) operations in facility security planning. The study is related to security tasks, including perimeter control, infrastructure condition monitoring, prevention of unauthorized access, and analysis of potential threats. Thus, the topic of the proposed publication is relevant as it examines the sequence of logistical actions in the large-scale application of a swarm of drones for facility protection. The purpose of the research is to create a set of mathematical and simulation models that can be used to analyze the capabilities of a drone swarm when organizing security measures. The article analyzes modern problems of using a drone swarm: formation of the swarm, assessment of its potential capabilities, organization of patrols, development of monitoring scenarios, planning of drone routes and assessment of the effectiveness of the security system. Special attention is paid to the possibilities of wave patrols to provide continuous surveillance of the object. In order to form a drone swarm and possibly divide it into groups sent to different surveillance zones, the necessary UAV capacity to effectively perform security tasks is assessed. Possible security scenarios using drone waves are developed as follows: single patrolling with limited resources; two-wave patrolling; and multi-stage patrolling for complete coverage of the protected area with the required number of UAVs. To select priority monitoring areas, the functional potential of drones and current risks are taken into account. An optimization model of rational distribution of drones into groups to ensure effective control of the protected area is created. Possible variants of drone group formation are analyzed as follows: allocation of one priority surveillance zone, formation of a set of key zones, or even distribution of swarm resources along the entire perimeter. 
Possible scenarios for dividing the drone swarm in flight are developed as follows: dividing the swarm into groups at the launch stage, dividing the swarm at a given navigation point on the route, and repeatedly dividing the swarm at different patrol points. An original algorithm for the formation of drone flight routes for object surveillance based on the simulation modeling of the movement of virtual objects simulating drones has been developed. An agent-based model on the AnyLogic platform was created to study the logistics of security operations. The scientific novelty of the study is related to the actual task of forming possible strategies for using a swarm of drones to provide integrated security of objects, which contributes to improving the efficiency of security and monitoring systems. The results of the study can be used by specialists in security, logistics, infrastructure monitoring and other areas related to the use of drone swarms for effective control and protection of facilities. Full article
Show Figures

Figure 1
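One way to picture the rational-distribution idea is a greedy allocation that sends each additional drone to the zone with the highest remaining risk per assigned drone. The zone weights below are hypothetical, and this greedy rule is only an illustration, not the optimization model developed in the paper:

```python
def allocate(risks, n_drones):
    """Assign drones one at a time to the zone with the best marginal value."""
    counts = [0] * len(risks)
    for _ in range(n_drones):
        # Marginal value of one more drone in zone i: risk spread over counts+1
        best = max(range(len(risks)), key=lambda i: risks[i] / (counts[i] + 1))
        counts[best] += 1
    return counts

zone_risks = [8.0, 4.0, 2.0, 2.0]  # perimeter sectors, riskiest first (assumed)
print(allocate(zone_risks, 8))     # [4, 2, 1, 1]: more drones to riskier zones
```

The same loop covers the three swarm-splitting variants in the abstract by changing `zone_risks`: one dominant zone, a few key zones, or equal weights for an even perimeter spread.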

20 pages, 2854 KB  
Article
Features of Three-Dimensional Calculation of Gas Coolers of Turbogenerators
by Oleksii Tretiak, Mariia Arefieva, Dmytro Krytskyi, Stanislav Kravchenko, Bogdan Shestak, Serhii Smakhtin, Anton Kovryga and Serhii Serhiienko
Computation 2025, 13(8), 192; https://doi.org/10.3390/computation13080192 - 8 Aug 2025
Viewed by 357
Abstract
Gas coolers are critical elements of turbogenerator cooling systems, which ensure the reliability and stability of the thermal mode of high-power electric machines. The aim of this research is to improve the accuracy of thermal calculations of gas coolers by combining analytical methods with numerical CFD-modeling (Computational Fluid Dynamics). The cooler’s total cooling capacity is approximately 3.8 MW, distributed across three identical sections. An analytical calculation of heat transfer for a hydrogen-water gas cooler with finned tubes was performed, using classical dependencies to determine the heat transfer coefficients and pressure losses. The results were verified using three-dimensional CFD-modeling of the hydrogen flow through the cooler using the standard k-ε (k-epsilon) turbulence model. The discrepancy between the results of analytical and numerical calculations is less than 10%. The temperature of the cooled hydrogen at the outlet meets the design requirements (+40 °C); however, areas of uneven temperature distribution were identified that require further design optimization. The study introduces, for the first time, a combined approach using analytical calculations and CFD by thoroughly evaluating the heat exchange between the cooling tube fins and hydrogen. This scientific solution enabled the simulation of hydrogen flow within the multi-stage cooler system. The proposed method has proven to be reliable and can be applied both at the design stage and for the analysis of upgraded cooling systems of turbogenerators. Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
Show Figures

Figure 1
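Analytical sizing of such a cooler typically rests on Q = U·A·ΔT_lm with the log-mean temperature difference. In the back-of-envelope sketch below, the 3.8 MW capacity and three sections come from the abstract, but the counter-flow arrangement, terminal temperatures, and overall coefficient U are assumed purely for illustration:

```python
from math import log

def lmtd(hot_in, hot_out, cold_in, cold_out):
    """Log-mean temperature difference for a counter-flow exchanger."""
    dt1 = hot_in - cold_out
    dt2 = hot_out - cold_in
    return (dt1 - dt2) / log(dt1 / dt2)

Q = 3.8e6 / 3  # one of three identical sections, W (from the abstract)
U = 50.0       # overall heat-transfer coefficient, W/(m²·K) — assumed value
# Hydrogen cooled to the +40 °C design outlet; water-side temps assumed:
dT = lmtd(hot_in=75.0, hot_out=40.0, cold_in=30.0, cold_out=36.0)
area = Q / (U * dT)  # required heat-transfer (finned) area, m²
print(round(dT, 1), round(area, 1))
```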

24 pages, 5391 KB  
Article
Advanced Linearization Methods for Efficient and Accurate Compositional Reservoir Simulations
by Ali Asif, Abdul Salam Abd and Ahmad Abushaikha
Computation 2025, 13(8), 191; https://doi.org/10.3390/computation13080191 - 8 Aug 2025
Viewed by 1587
Abstract
Efficient simulation of multiphase, multicomponent fluid flow in heterogeneous reservoirs is critical for optimizing hydrocarbon recovery. In this study, we investigate advanced linearization techniques for fully implicit compositional reservoir simulations, a problem characterized by highly nonlinear governing equations that challenge both accuracy and computational efficiency. We implement four methods—finite backward difference (FDB), finite central difference (FDC), operator-based linearization (OBL), and residual accelerated Jacobian (RAJ)—within an MPI-based parallel framework and benchmark their performance against a legacy simulator across three test cases: (i) a five-component hydrocarbon gas field with CO2 injection, (ii) a ten-component gas field with CO2 injection, and (iii) a ten-component gas field case without injection. Key quantitative findings include: in the five-component case, OBL achieved convergence with only 770 nonlinear iterations (compared to 841–843 for other methods) and reduced operator computation time to 9.6% of total simulation time, highlighting its speed for simpler systems; in contrast, for the more complex ten-component injection case, FDB proved most robust with 706 nonlinear iterations versus 723 for RAJ, while OBL failed to converge; in noninjection scenarios, RAJ effectively captured nonlinear dynamics with comparable iteration counts but lower overall computational expense. These results demonstrate that the optimal linearization strategy is context-dependent—OBL is advantageous for simpler problems requiring rapid solutions, whereas FDB and RAJ are preferable for complex systems demanding higher accuracy. The novelty of this work lies in integrating these advanced linearization schemes into a scalable, parallel simulation framework and providing a comprehensive, quantitative comparison that extends beyond previous efforts in reservoir simulation literature. Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
Show Figures

Figure 1
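The trade-off between two of the compared options, backward (FDB) and central (FDC) differencing of the residual to build Jacobian entries, can be seen on a scalar example. The residual below is a hypothetical stand-in for a conservation equation, not one from the simulator:

```python
def residual(x):
    """Hypothetical nonlinear residual, stand-in for a mass-balance equation."""
    return x ** 3 - 2.0 * x

def jac_backward(x, h=1e-5):
    """FDB-style entry: one extra residual evaluation, O(h) error."""
    return (residual(x) - residual(x - h)) / h

def jac_central(x, h=1e-5):
    """FDC-style entry: two extra residual evaluations, O(h²) error."""
    return (residual(x + h) - residual(x - h)) / (2 * h)

x = 1.5
exact = 3 * x ** 2 - 2.0  # analytic derivative for comparison
err_b = abs(jac_backward(x) - exact)
err_c = abs(jac_central(x) - exact)
print(err_c < err_b)  # central difference is closer to the analytic Jacobian
```

Per entry, FDC buys accuracy at the cost of an extra residual evaluation; across millions of cells that cost is exactly what the paper's timing comparison weighs.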

15 pages, 3066 KB  
Article
Adaptive Working Set Model for Memory Management and Epidemic Control: A Unified Approach
by Gaukhar Borankulova, Aslanbek Murzakhmetov, Aigul Tungatarova and Zhazira Taszhurekova
Computation 2025, 13(8), 190; https://doi.org/10.3390/computation13080190 - 7 Aug 2025
Viewed by 383
Abstract
The Working Set concept, originally introduced by P. Denning for memory management, defines a dynamic subset of system elements actively in use. Designed to reduce page faults and prevent thrashing, it has proven effective in optimizing memory performance. This study explores the interdisciplinary potential of the Working Set by applying it to two distinct domains: virtual memory systems and epidemiological modeling. We demonstrate that focusing on the active subset of a system enables optimization in both contexts—minimizing page faults and containing epidemics via dynamic isolation. The effectiveness of this approach is validated through memory access simulations and agent-based epidemic modeling. The advantages of using the Working Set as a general framework for describing the behavior of dynamic systems are discussed, along with its applicability across a wide range of scientific and engineering problems. Full article
Show Figures

Figure 1
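Denning's Working Set W(t, τ), the set of distinct pages referenced in the last τ references, can be computed directly; the same windowed-activity idea is what the abstract transfers to epidemic contacts. The reference string below is an arbitrary illustration:

```python
def working_set(refs, t, tau):
    """Distinct pages referenced in the window (t - tau, t]."""
    return set(refs[max(0, t - tau):t])

refs = [1, 2, 1, 3, 2, 4, 4, 4, 5, 1]  # toy page-reference string
# As the window slides, the working-set size tracks locality of reference:
sizes = [len(working_set(refs, t, tau=4)) for t in range(1, len(refs) + 1)]
print(sizes)  # [1, 2, 2, 3, 3, 4, 3, 2, 2, 3]
```

Keeping only W(t, τ) resident is what bounds page faults; isolating only the currently "active" contact set is the analogous epidemic-control move the paper explores.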

20 pages, 4789 KB  
Article
A Computerized Analysis of Flow Parameters for a Twin-Screw Compressor Using SolidWorks Flow Simulation
by Ildiko Brinas, Florin Dumitru Popescu, Andrei Andras, Sorin Mihai Radu and Laura Cojanu
Computation 2025, 13(8), 189; https://doi.org/10.3390/computation13080189 - 6 Aug 2025
Viewed by 504
Abstract
Twin-screw compressors (TSCs) are widely used in various industries. Their performance is influenced by several parameters, such as rotor profiles, clearance gaps, operating speed, and thermal effects. Traditionally, optimizing these parameters relied on experimental methods, which are costly and time-consuming. However, advancements in computational tools, such as Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA), have revolutionized compressor analysis. This study presents a CFD analysis of a specific model of a TSC in a 5 male/6 female lobe configuration using the SolidWorks Flow Simulation environment—an approach not traditionally applied to such positive displacement machines. The results visually present internal flow trajectories, fluid velocities, pressure distributions, temperature gradients, and leakage behaviors with high spatial and temporal resolution. Additionally, torque fluctuations and isosurface visualizations revealed insights into mechanical loads and flow behavior. The proposed method allows for relatively easy adaptation to different TSC configurations and can also be a useful tool for engineering and educational purposes. Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)

21 pages, 4968 KB  
Article
EQResNet: Real-Time Simulation and Resilience Assessment of Post-Earthquake Emergency Highway Transportation Networks
by Zhenliang Liu and Chuxuan Guo
Computation 2025, 13(8), 188; https://doi.org/10.3390/computation13080188 - 6 Aug 2025
Viewed by 459
Abstract
Multiple uncertainties in traffic demand fluctuations and infrastructure vulnerability during seismic events pose significant challenges for the resilience assessment of highway transportation networks (HTNs). While Monte Carlo simulation remains the dominant approach for uncertainty propagation, its high computational cost limits its scalability, particularly in metropolitan-scale networks. This study proposes an EQResNet framework for accelerated post-earthquake resilience assessment of HTNs. The model integrates network topology, interregional traffic demand, and roadway characteristics into a streamlined deep neural network architecture. A comprehensive surrogate modeling strategy is developed to replace conventional traffic simulation modules, including highway status realization, shortest path computation, and traffic flow assignment. Combined with seismic fragility models and recovery functions for regional bridges, the framework captures the dynamic evolution of HTN functionality following seismic events. A multi-dimensional resilience evaluation system is also established to quantify network performance from emergency response and recovery perspectives. A case study on the Sioux Falls network under probabilistic earthquake scenarios demonstrates the effectiveness of the proposed method, achieving 95% prediction accuracy while reducing computational time by 90% compared to traditional numerical simulations. The results highlight the framework’s potential as a scalable, efficient, and reliable tool for large-scale post-disaster transportation system analysis. Full article
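One of the conventional simulation modules the surrogate replaces is shortest-path computation over the network. As a reference point, a standard Dijkstra solver on a hypothetical toy network (node names and travel times are invented; the paper's Sioux Falls case is far larger):

```python
import heapq

def dijkstra(graph, source):
    """Shortest travel times from source over a directed weighted adjacency dict."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical 4-node network; weights are link travel times in minutes.
net = {"A": [("B", 5), ("C", 2)], "C": [("B", 1), ("D", 7)], "B": [("D", 3)]}
times = dijkstra(net, "A")
```

A surrogate network learns to approximate exactly this input-to-output map, trading exactness for the throughput needed in Monte Carlo resilience sweeps.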
(This article belongs to the Section Computational Engineering)

15 pages, 1216 KB  
Article
Mathematical Modeling of Regional Infectious Disease Dynamics Based on Extended Compartmental Models
by Olena Kiseleva, Sergiy Yakovlev, Olga Prytomanova and Oleksandr Kuzenkov
Computation 2025, 13(8), 187; https://doi.org/10.3390/computation13080187 - 4 Aug 2025
Viewed by 844
Abstract
This study presents an extended approach to compartmental modeling of infectious disease spread, focusing on regional heterogeneity within affected areas. Using classical SIS, SIR, and SEIR frameworks, we simulate the dynamics of COVID-19 across two major regions of Ukraine—Dnipropetrovsk and Kharkiv—during the period 2020–2024. The proposed mathematical model incorporates regionally distributed subpopulations and applies a system of differential equations solved using the classical fourth-order Runge–Kutta method. The simulations are validated against real-world epidemiological data from national and international sources. The SEIR model demonstrated superior performance, achieving maximum relative errors of 4.81% and 5.60% in the Kharkiv and Dnipropetrovsk regions, respectively, outperforming the SIS and SIR models. Despite limited mobility and social contact data, the regionally adapted models achieved acceptable accuracy for medium-term forecasting. This validates the practical applicability of extended compartmental models in public health planning, particularly in settings with constrained data availability. The results further support the use of these models for estimating critical epidemiological indicators such as infection peaks and hospital resource demands. The proposed framework offers a scalable and computationally efficient tool for regional epidemic forecasting, with potential applications to future outbreaks in geographically heterogeneous environments. Full article
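The numerical core described above, an SEIR system advanced with the classical fourth-order Runge-Kutta method, can be sketched as follows. The parameter values are illustrative placeholders, not the paper's fitted regional values:

```python
def seir_rhs(state, beta, sigma, gamma, N):
    """SEIR right-hand side: infection, incubation, and recovery flows."""
    S, E, I, R = state
    return (-beta * S * I / N,
            beta * S * I / N - sigma * E,
            sigma * E - gamma * I,
            gamma * I)

def rk4_step(state, h, *params):
    """One classical fourth-order Runge-Kutta step."""
    k1 = seir_rhs(state, *params)
    k2 = seir_rhs([s + h / 2 * k for s, k in zip(state, k1)], *params)
    k3 = seir_rhs([s + h / 2 * k for s, k in zip(state, k2)], *params)
    k4 = seir_rhs([s + h * k for s, k in zip(state, k3)], *params)
    return [s + h / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Placeholder parameters: contact rate, 1/incubation period, 1/infectious period.
N = 1_000_000
state = [N - 100, 50, 50, 0]              # S, E, I, R
for _ in range(200):                       # 200 daily steps, h = 1 day
    state = rk4_step(state, 1.0, 0.35, 1 / 5.2, 1 / 10, N)
```

Regional heterogeneity enters by running one such subsystem per region and coupling the force-of-infection terms.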

22 pages, 409 KB  
Article
Employing Machine Learning and Deep Learning Models for Mental Illness Detection
by Yeyubei Zhang, Zhongyan Wang, Zhanyi Ding, Yexin Tian, Jianglai Dai, Xiaorui Shen, Yunchong Liu and Yuchen Cao
Computation 2025, 13(8), 186; https://doi.org/10.3390/computation13080186 - 4 Aug 2025
Viewed by 742
Abstract
Social media platforms have emerged as valuable sources for mental health research, enabling the detection of conditions such as depression through analyses of user-generated posts. This manuscript offers practical, step-by-step guidance for applying machine learning and deep learning methods to mental health detection on social media. Key topics include strategies for handling heterogeneous and imbalanced datasets, advanced text preprocessing, robust model evaluation, and the use of appropriate metrics beyond accuracy. Real-world examples illustrate each stage of the process, and an emphasis is placed on transparency, reproducibility, and ethical best practices. While the present work focuses on text-based analysis, we discuss the limitations of this approach—including label inconsistency and a lack of clinical validation—and highlight the need for future research to integrate multimodal signals and gold-standard psychometric assessments. By sharing these frameworks and lessons, this manuscript aims to support the development of more reliable, generalizable, and ethically responsible models for mental health detection and early intervention. Full article
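One of the points above, using metrics beyond accuracy on imbalanced data, can be made concrete with a small sketch; the labels below are synthetic:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Minority-class metrics from paired label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Synthetic 90/10 imbalanced labels: always predicting the majority class
# would score 90% accuracy yet detect no positive cases at all.
y_true = [1] * 10 + [0] * 90
y_pred = [1] * 6 + [0] * 4 + [1] * 4 + [0] * 86
p, r, f = precision_recall_f1(y_true, y_pred)
```

Here accuracy is 92%, yet precision, recall, and F1 are all 0.6, which is the honest picture of minority-class performance.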

24 pages, 1681 KB  
Article
A Hybrid Quantum–Classical Architecture with Data Re-Uploading and Genetic Algorithm Optimization for Enhanced Image Classification
by Aksultan Mukhanbet and Beimbet Daribayev
Computation 2025, 13(8), 185; https://doi.org/10.3390/computation13080185 - 1 Aug 2025
Viewed by 1055
Abstract
Quantum machine learning (QML) has emerged as a promising approach for enhancing image classification by exploiting quantum computational principles such as superposition and entanglement. However, practical applications on complex datasets like CIFAR-100 remain limited due to the low expressivity of shallow circuits and challenges in circuit optimization. In this study, we propose HQCNN–REGA—a novel hybrid quantum–classical convolutional neural network architecture that integrates data re-uploading and genetic algorithm optimization for improved performance. The data re-uploading mechanism allows classical inputs to be encoded multiple times into quantum states, enhancing the model’s capacity to learn complex visual features. In parallel, a genetic algorithm is employed to evolve the quantum circuit architecture by optimizing gate sequences, entanglement patterns, and layer configurations. This combination enables automatic discovery of efficient parameterized quantum circuits without manual tuning. Experiments on the MNIST and CIFAR-100 datasets demonstrate state-of-the-art performance for quantum models, with HQCNN–REGA outperforming existing quantum neural networks and approaching the accuracy of advanced classical architectures. In particular, we compare our model with classical convolutional baselines such as ResNet-18 to validate its effectiveness in real-world image classification tasks. Our results demonstrate the feasibility of scalable, high-performing quantum–classical systems and offer a viable path toward practical deployment of QML in computer vision applications, especially on noisy intermediate-scale quantum (NISQ) hardware. Full article
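Data re-uploading can be illustrated on a single qubit with plain complex arithmetic: the classical input x is re-encoded before every trainable rotation, so extra depth adds expressivity instead of collapsing into one rotation. A hedged sketch only; the angles are arbitrary and the paper's multi-qubit, GA-evolved circuits are far richer:

```python
import cmath

def mat_vec(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def ry(t):                                # rotation about the Y axis
    c, s = cmath.cos(t / 2), cmath.sin(t / 2)
    return ((c, -s), (s, c))

def rz(t):                                # rotation about the Z axis
    return ((cmath.exp(-1j * t / 2), 0), (0, cmath.exp(1j * t / 2)))

def reupload_circuit(x, weights):
    """Re-encode the input x (RY) before every trainable rotation (RZ)."""
    state = (1 + 0j, 0j)                  # start in |0>
    for w in weights:
        state = mat_vec(rz(w), mat_vec(ry(x), state))
    return abs(state[0]) ** 2             # probability of measuring |0>

p0 = reupload_circuit(x=0.7, weights=[0.1, -0.4, 0.9])
```

Training then adjusts the weights so that P(|0⟩) separates the classes; the genetic algorithm in the paper additionally evolves which gates and entanglers appear at each layer.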

27 pages, 1081 KB  
Article
Effect of Monomer Mixture Composition on TiCl4-Al(i-C4H9)3 Catalytic System Activity in Butadiene–Isoprene Copolymerization: A Theoretical Study
by Konstantin A. Tereshchenko, Rustem T. Ismagilov, Nikolai V. Ulitin, Yana L. Lyulinskaya and Alexander S. Novikov
Computation 2025, 13(8), 184; https://doi.org/10.3390/computation13080184 - 1 Aug 2025
Viewed by 353
Abstract
Divinylisoprene rubber, a copolymer of butadiene and isoprene, is used as raw material for rubber technical products, combining isoprene rubber’s elasticity and butadiene rubber’s wear resistance. These properties depend quantitatively on the copolymer composition, which depends on the kinetics of its synthesis. This work aims to theoretically describe how the monomer mixture composition in the butadiene–isoprene copolymerization affects the activity of the TiCl4-Al(i-C4H9)3 catalytic system (expressed by active sites concentration) via kinetic modeling. This enables development of a reliable kinetic model for divinylisoprene rubber synthesis, predicting reaction rate, molecular weight, and composition, applicable to reactor design and process intensification. Active sites concentrations were calculated from experimental copolymerization rates and known chain propagation constants for various monomer compositions. Kinetic equations for active sites formation were based on mass-action law and Langmuir monomolecular adsorption theory. An analytical equation relating active sites concentration to monomer composition was derived, analyzed, and optimized with experimental data. The results show that monomer composition’s influence on active sites concentration is well described by a two-step kinetic model (physical adsorption followed by Ti–C bond formation), accounting for competitive adsorption: isoprene adsorbs more readily, while butadiene forms more stable active sites. Full article
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)

14 pages, 3219 KB  
Article
Research on the Branch Road Traffic Flow Estimation and Main Road Traffic Flow Monitoring Optimization Problem
by Bingxian Wang and Sunxiang Zhu
Computation 2025, 13(8), 183; https://doi.org/10.3390/computation13080183 - 1 Aug 2025
Viewed by 431
Abstract
In road networks, main roads are usually equipped with traffic flow monitoring devices that record main-road traffic data in real time. Three complex scenarios, i.e., Y-junctions, multi-lane merging, and signalized intersections, are considered in this paper by developing a novel modeling system that leverages only historical main-road data to reconstruct branch-road volumes and identify pivotal time points where instantaneous observations enable robust inference of period-aggregate traffic volumes. Four mathematical models (I–IV) are built using the data given in the appendix, with performance quantified via error metrics (RMSE, MAE, MAPE) and stability indices (perturbation sensitivity index, structure similarity score). Finally, significant traffic flow change points are identified by the PELT algorithm. Full article
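The error metrics named above are standard; for concreteness, a small sketch with invented flow values (not the paper's appendix data):

```python
import math

def error_metrics(actual, predicted):
    """RMSE, MAE, and MAPE (in percent) for paired observation/estimate lists."""
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    mape = 100.0 / n * sum(abs((a - p) / a) for a, p in zip(actual, predicted))
    return rmse, mae, mape

# Invented branch-road volumes (veh/h): observed vs. model estimate.
flows_obs = [120, 150, 90, 200]
flows_est = [110, 160, 95, 190]
rmse, mae, mape = error_metrics(flows_obs, flows_est)
```

RMSE penalizes large misses quadratically, MAE weighs all misses equally, and MAPE normalizes by the observed volume, which is why all three are reported together.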

16 pages, 3006 KB  
Article
A New Type of High-Order Mapped Unequal-Sized WENO Scheme for Nonlinear Degenerate Parabolic Equations
by Zhengwei Hou and Liang Li
Computation 2025, 13(8), 182; https://doi.org/10.3390/computation13080182 - 1 Aug 2025
Viewed by 230
Abstract
In this paper, we propose the MUSWENO scheme, a novel mapped weighted essentially non-oscillatory (WENO) method that employs unequal-sized stencils, for solving nonlinear degenerate parabolic equations. A new mapping function and nonlinear weights are proposed to reduce the difference between the linear and nonlinear weights, yielding smaller numerical errors and fifth-order accuracy. Compared with traditional WENO schemes, this new scheme offers the advantage that the linear weights can be any positive numbers provided that they sum to one, eliminating the need to handle cases with negative linear weights. Another advantage is that we can reconstruct a polynomial over the large stencil, while many classical high-order WENO reconstructions only reconstruct the values at the boundary points or discrete quadrature points. Extensive numerical examples verify the good performance of this scheme. Full article
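The paper's mapping function is new, but the underlying idea of mapped WENO weights can be illustrated with the classic Henrick et al. WENO-M mapping, which pulls each nonlinear weight toward its linear target d (shown for illustration only; the weight values are invented):

```python
def weno_m_map(omega, d):
    """Henrick et al. WENO-M mapping: g(d) = d with g'(d) = g''(d) = 0,
    so nonlinear weights near the linear weight d are pulled onto it."""
    return (omega * (d + d * d - 3 * d * omega + omega * omega)
            / (d * d + omega * (1 - 2 * d)))

# Linear weights and distorted nonlinear weights (values invented).
linear = [0.1, 0.6, 0.3]
raw = [0.05, 0.75, 0.20]
mapped = [weno_m_map(w, d) for w, d in zip(raw, linear)]
total = sum(mapped)
weights = [w / total for w in mapped]      # renormalize to sum to one
```

In smooth regions the mapped weights sit closer to the linear weights, recovering the design order of accuracy; near discontinuities the distortion survives and keeps the scheme non-oscillatory.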

17 pages, 2522 KB  
Article
Organization of the Optimal Shift Start in an Automotive Environment
by Gábor Lakatos, Bence Zoltán Vámos, István Aupek and Mátyás Andó
Computation 2025, 13(8), 181; https://doi.org/10.3390/computation13080181 - 1 Aug 2025
Viewed by 369
Abstract
Shift organizations in automotive manufacturing often rely on manual task allocation, resulting in inefficiencies, human error, and increased workload for supervisors. This research introduces an automated solution using the Kuhn-Munkres algorithm, integrated with the Moodle learning management system, to optimize task assignments based on operator qualifications and task complexity. Simulations conducted with real industrial data demonstrate that the proposed method meets operational requirements, both logically and mathematically. The system improves the start of shifts by assigning simpler tasks initially, enhancing operator confidence and reducing the need for assistance. It also ensures that task assignments align with required training levels, improving quality and process reliability. For industrial practitioners, the approach provides a practical tool to reduce planning time, human error, and supervisory burden, while increasing shift productivity. From an academic perspective, the study contributes to applied operations research and workforce optimization, offering a replicable model grounded in real-world applications. The integration of algorithmic task allocation with training systems enables a more accurate matching of workforce capabilities to production demands. This study aims to support data-driven decision-making in shift management, with the potential to enhance operational efficiency and encourage timely start of work, thereby possibly contributing to smoother production flow and improved organizational performance. Full article
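The assignment problem the Kuhn-Munkres algorithm solves can be stated concretely. For a roster this small, exhaustive search finds the same optimum (the cost values are invented; Kuhn-Munkres reaches the identical answer in O(n³), which is what makes real shift rosters tractable):

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustive optimal assignment on a small square cost matrix."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

# Invented costs: rows = operators, columns = tasks; lower = better skill fit.
cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
assignment, total = best_assignment(cost)  # one operator per task, minimal total cost
```

In the paper's setting, the cost entries would be derived from Moodle qualification records and task complexity rather than written by hand.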
(This article belongs to the Special Issue Computational Approaches for Manufacturing)

17 pages, 2016 KB  
Article
DFT-Guided Next-Generation Na-Ion Batteries Powered by Halogen-Tuned C12 Nanorings
by Riaz Muhammad, Anam Gulzar, Naveen Kosar and Tariq Mahmood
Computation 2025, 13(8), 180; https://doi.org/10.3390/computation13080180 - 1 Aug 2025
Viewed by 619
Abstract
Recent research on the design and synthesis of new and upgraded materials for secondary batteries is growing to fulfill future energy demands around the globe. Herein, by using DFT calculations, the thermodynamic and electrochemical properties of Na/Na+@C12 complexes, and then of halogens (X = Br, Cl, and F) as counter anions, are studied for the enhancement of Na-ion battery cell voltage and overall performance. Isolated C12 nanorings showed a lower cell voltage (−1.32 V), which was significantly increased after adsorption of halide anions as counter anions. Adsorption of halides increased the Gibbs free energy, which in turn resulted in a higher cell voltage. The cell voltage increased with the increasing electronegativity of the halide anion. The Gibbs free energy of Br@C12 was −52.36 kcal·mol⁻¹, corresponding to a desirable cell voltage of 2.27 V, making it suitable for use as an anode in sodium-ion batteries. The estimated cell voltages of the considered complexes support their effective use in sodium-ion secondary batteries. Full article
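The reported 2.27 V follows directly from the thermodynamic relation V = −ΔG/(nF). A quick check, assuming a one-electron process and the ΔG quoted in the abstract:

```python
def cell_voltage(delta_g_kcal_per_mol, n_electrons=1):
    """V = -dG / (n F), with dG converted from kcal/mol to J/mol."""
    FARADAY = 96485.0        # Faraday constant, C/mol
    KCAL_TO_J = 4184.0       # thermochemical calorie conversion
    return -delta_g_kcal_per_mol * KCAL_TO_J / (n_electrons * FARADAY)

v = cell_voltage(-52.36)     # Gibbs free energy of the Br@C12 complex
print(round(v, 2))           # ≈ 2.27 V, matching the reported value
```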
(This article belongs to the Special Issue Feature Papers in Computational Chemistry)

19 pages, 2547 KB  
Article
Artificial Intelligence Optimization of Polyaluminum Chloride (PAC) Dosage in Drinking Water Treatment: A Hybrid Genetic Algorithm–Neural Network Approach
by Darío Fernando Guamán-Lozada, Lenin Santiago Orozco Cantos, Guido Patricio Santillán Lima and Fabian Arias Arias
Computation 2025, 13(8), 179; https://doi.org/10.3390/computation13080179 - 1 Aug 2025
Viewed by 806
Abstract
The accurate dosing of polyaluminum chloride (PAC) is essential for achieving effective coagulation in drinking water treatment, yet conventional methods such as jar tests are limited in their responsiveness and operational efficiency. This study proposes a hybrid modeling framework that integrates artificial neural networks (ANN) with genetic algorithms (GA) to optimize PAC dosage under variable raw water conditions. Operational data from 400 jar test experiments, collected between 2022 and 2024 at the Yanahurco water treatment plant (Ecuador), were used to train an ANN model capable of predicting six post-treatment water quality indicators, including turbidity, color, and pH. The ANN achieved excellent predictive accuracy (R2 > 0.95 for turbidity and color), supporting its use as a surrogate model within a GA-based optimization scheme. The genetic algorithm evaluated dosage strategies by minimizing treatment costs while enforcing compliance with national water quality standards. The results revealed a bimodal dosing pattern, favoring low PAC dosages (~4 ppm) during routine conditions and higher dosages (~12 ppm) when influent quality declined. Optimization yielded a 49% reduction in median chemical costs and improved color compliance from 52% to 63%, while maintaining pH compliance above 97%. Turbidity remained a challenge under some conditions, indicating the potential benefit of complementary coagulants. The proposed ANN–GA approach offers a scalable and adaptive solution for enhancing chemical dosing efficiency in water treatment operations. Full article
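The GA side of the ANN–GA loop can be sketched in miniature. Here an analytic toy cost stands in for the trained ANN surrogate, and all GA settings (population, generations, operators) are invented rather than taken from the paper:

```python
import random

def genetic_minimize(fitness, bounds, pop=30, gens=60, seed=0):
    """Bare-bones real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with children clamped to the search bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        children = []
        for _ in range(pop):
            a = min(rng.sample(population, 3), key=fitness)   # tournament pick
            b = min(rng.sample(population, 3), key=fitness)
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.02 * (hi - lo))
            children.append(min(max(child, lo), hi))
        population = children
    return min(population, key=fitness)

# Analytic toy cost standing in for the ANN surrogate: optimum at 4 ppm.
dose = genetic_minimize(lambda d: (d - 4.0) ** 2, bounds=(0.0, 20.0))
```

In the real framework, `fitness` would call the trained ANN to predict post-treatment turbidity, color, and pH, and add penalty terms for any predicted violation of the national quality standards.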
(This article belongs to the Section Computational Engineering)

23 pages, 3301 KB  
Article
An Image-Based Water Turbidity Classification Scheme Using a Convolutional Neural Network
by Itzel Luviano Soto, Yajaira Concha-Sánchez and Alfredo Raya
Computation 2025, 13(8), 178; https://doi.org/10.3390/computation13080178 - 23 Jul 2025
Viewed by 712
Abstract
Given the importance of turbidity as a key indicator of water quality, this study investigates the use of a convolutional neural network (CNN) to classify water samples into five turbidity-based categories. These classes were defined using ranges inspired by Mexican environmental regulations and generated from 33 laboratory-prepared mixtures with varying concentrations of suspended clay particles. Red, green, and blue (RGB) images of each sample were captured under controlled optical conditions, and turbidity was measured using a calibrated turbidimeter. A transfer learning (TL) approach was applied using EfficientNet-B0, a deep yet computationally efficient CNN architecture. The model achieved an average accuracy of 99% across ten independent training runs, with minimal misclassifications. The use of a lightweight deep learning model, combined with a standardized image acquisition protocol, represents a novel and scalable alternative for rapid, low-cost water quality assessment in future environmental monitoring systems. Full article
(This article belongs to the Section Computational Engineering)

23 pages, 3741 KB  
Article
Multi-Corpus Benchmarking of CNN and LSTM Models for Speaker Gender and Age Profiling
by Jorge Jorrin-Coz, Mariko Nakano, Hector Perez-Meana and Leobardo Hernandez-Gonzalez
Computation 2025, 13(8), 177; https://doi.org/10.3390/computation13080177 - 23 Jul 2025
Viewed by 577
Abstract
Speaker profiling systems are often evaluated on a single corpus, which complicates reliable comparison. We present a fully reproducible evaluation pipeline that trains Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) models independently on three speech corpora representing distinct recording conditions—studio-quality TIMIT, crowdsourced Mozilla Common Voice, and in-the-wild VoxCeleb1. All models share the same architecture, optimizer, and data preprocessing; no corpus-specific hyperparameter tuning is applied. We perform a detailed preprocessing and feature extraction procedure, evaluating multiple configurations and validating their applicability and effectiveness in improving the obtained results. A feature analysis shows that Mel spectrograms benefit CNNs, whereas Mel Frequency Cepstral Coefficients (MFCCs) suit LSTMs, and that the optimal Mel-bin count grows with the corpus signal-to-noise ratio (SNR). With this fixed recipe, EfficientNet achieves 99.82% gender accuracy on Common Voice (+1.25 pp over the previous best) and 98.86% on VoxCeleb1 (+0.57 pp). MobileNet attains 99.86% age-group accuracy on Common Voice (+2.86 pp) and a 5.35-year MAE for age estimation on TIMIT using a lightweight configuration. The consistent, near-state-of-the-art results across three acoustically diverse datasets substantiate the robustness and versatility of the proposed pipeline. Code and pre-trained weights are released to facilitate downstream research. Full article
(This article belongs to the Section Computational Engineering)

33 pages, 2542 KB  
Article
Trapped Modes Along Periodic Structures Submerged in a Three-Layer Fluid with a Background Steady Flow
by Gonçalo A. S. Dias and Bruno M. M. Pereira
Computation 2025, 13(8), 176; https://doi.org/10.3390/computation13080176 - 22 Jul 2025
Viewed by 246
Abstract
In this study, we examine the trapping of linear water waves by infinite arrays of three-dimensional fixed periodic structures in a three-layer fluid. Each layer has an independent uniform velocity field with respect to the fixed ground in addition to the internal modes along the interfaces between layers. Dynamical stability between velocity shear and gravitational pull constrains the layer velocities to a neighbourhood of the diagonal U1=U2=U3 in velocity space. A non-linear spectral problem results from the variational formulation. This problem can be linearized, resulting in a geometric condition (from energy minimization) that ensures the existence of trapped modes within the limits set by stability. These modes are solutions living in the discrete spectrum that do not radiate energy to infinity. Symmetries reduce the global problem to solutions in the first octant of the three-dimensional velocity space. Examples are shown of configurations of obstacles which satisfy the stability and geometric conditions, depending on the values of the layer velocities. The robustness of the result of the vertical column from previous studies is confirmed in the new configurations. This allows for comparison principles (Cavalieri’s principle, etc.) to be used in determining whether trapped modes are generated. Full article
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
