Computation, Volume 13, Issue 4 (April 2025) – 20 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
36 pages, 2380 KiB  
Article
Enhanced Efficient 3D Poisson Solver Supporting Dirichlet, Neumann, and Periodic Boundary Conditions
by Chieh-Hsun Wu
Computation 2025, 13(4), 99; https://doi.org/10.3390/computation13040099 - 18 Apr 2025
Abstract
This paper generalizes the efficient matrix decomposition method for solving the finite-difference (FD) discretized three-dimensional (3D) Poisson’s equation using symmetric 27-point, 4th-order accurate stencils to accommodate more boundary conditions (BCs), i.e., Dirichlet, Neumann, and Periodic BCs. It employs equivalent Dirichlet nodes to streamline source term computation due to BCs. A generalized eigenvalue formulation is presented to accommodate the flexible 4th-order stencil weights. The proposed method significantly enhances computational speed by reducing the 3D problem to a set of independent 1D problems. Compared to the typical matrix inversion technique, it yields a speed-up ratio that grows as a power of n, where n is the number of nodes along one side of the cubic domain. Accuracy is validated using Gaussian and sinusoidal source fields, showing 4th-order convergence for Dirichlet and Periodic boundaries, and 2nd-order convergence for Neumann boundaries due to extrapolation limitations—though with lower errors than traditional 2nd-order schemes. The method is also applied to vortex-in-cell flow simulations, demonstrating its capability to handle outer boundaries efficiently and its compatibility with immersed boundary techniques for internal solid obstacles.
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
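The speed-up idea here, diagonalizing the discrete operator so a 3D solve factors into independent 1D problems, can be illustrated by a minimal second-order sketch for homogeneous Dirichlet BCs using the type-I discrete sine transform; the paper's 27-point fourth-order stencils and its Neumann/Periodic treatments generalize this. Names and parameters below are illustrative, not the author's code.

```python
import numpy as np
from scipy.fft import dstn, idstn

def poisson3d_dirichlet(f, h):
    """Solve lap(u) = f on a cube with zero Dirichlet BCs (2nd-order 7-point stencil).

    f : (n, n, n) source term at the interior nodes; h : grid spacing.
    DST-I diagonalizes the 1D second-difference operator, so the 3D system
    splits into independent scalar equations in transform space.
    """
    n = f.shape[0]
    k = np.arange(1, n + 1)
    lam = (2.0 * np.cos(np.pi * k / (n + 1)) - 2.0) / h**2  # 1D eigenvalues
    fhat = dstn(f, type=1)
    uhat = fhat / (lam[:, None, None] + lam[None, :, None] + lam[None, None, :])
    return idstn(uhat, type=1)

# Quick check against u = sin(pi x) sin(pi y) sin(pi z), lap(u) = -3 pi^2 u.
n = 31; h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y) * np.sin(np.pi * Z)
f = -3.0 * np.pi**2 * u_exact
print(np.max(np.abs(poisson3d_dirichlet(f, h) - u_exact)))  # small, O(h^2)
```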
28 pages, 2783 KiB  
Article
Blockchain-Enhanced Security for 5G Edge Computing in IoT
by Manuel J. C. S. Reis
Computation 2025, 13(4), 98; https://doi.org/10.3390/computation13040098 - 18 Apr 2025
Abstract
The rapid expansion of 5G networks and edge computing has amplified security challenges in Internet of Things (IoT) environments, including unauthorized access, data tampering, and DDoS attacks. This paper introduces EdgeChainGuard, a hybrid blockchain-based authentication framework designed to secure 5G-enabled IoT systems through decentralized identity management, smart contract-based access control, and AI-driven anomaly detection. By combining permissioned and permissionless blockchain layers with Layer-2 scaling solutions and adaptive consensus mechanisms, the framework enhances both security and scalability while maintaining computational efficiency. Using synthetic datasets that simulate real-world adversarial behaviour, our evaluation shows an average authentication latency of 172.50 s and a 50% reduction in gas fees compared to traditional Ethereum-based implementations. The results demonstrate that EdgeChainGuard effectively enforces tamper-resistant authentication, reduces unauthorized access, and adapts to dynamic network conditions. Future research will focus on integrating zero-knowledge proofs (ZKPs) for privacy preservation, federated learning for decentralized AI retraining, and lightweight anomaly detection models to enable secure, low-latency authentication in resource-constrained IoT deployments.
9 pages, 243 KiB  
Communication
Pareto Efficiency in Euclidean Spaces and Its Applications in Economics
by Christos Kountzakis and Vasileia Tsachouridou-Papadatou
Computation 2025, 13(4), 97; https://doi.org/10.3390/computation13040097 - 14 Apr 2025
Abstract
The aim of the first part of this paper is to show whether the set of Proper Efficient Points and the set of Pareto Efficient Points coincide in Euclidean spaces. In the second part of the paper, we show that supporting prices, which are actually strictly positive, do exist for a large class of exchange economies. A consequence of this result is a generalized form of the Second Welfare Theorem. The properties of the cones’ bases are significant for this purpose.
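For a concrete illustration of Pareto efficiency in Euclidean spaces (not the paper's proofs), a small sketch that flags the Pareto-efficient points of a finite set under coordinatewise maximization:

```python
import numpy as np

def pareto_efficient(points):
    """Boolean mask of Pareto-efficient rows (maximization in every coordinate).

    A point is efficient if no other point is >= in all coordinates
    and strictly > in at least one.
    """
    pts = np.asarray(points, dtype=float)
    efficient = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        dominated = np.all(pts >= pts[i], axis=1) & np.any(pts > pts[i], axis=1)
        if dominated.any():
            efficient[i] = False
    return efficient

pts = [[1.0, 2.0], [2.0, 1.0], [1.5, 1.5], [1.0, 1.0]]
print(pareto_efficient(pts))  # [ True  True  True False]
```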
21 pages, 2589 KiB  
Article
Deep Learning-Based Short Text Summarization: An Integrated BERT and Transformer Encoder–Decoder Approach
by Fahd A. Ghanem, M. C. Padma, Hudhaifa M. Abdulwahab and Ramez Alkhatib
Computation 2025, 13(4), 96; https://doi.org/10.3390/computation13040096 - 12 Apr 2025
Abstract
The field of text summarization has evolved from basic extractive methods that identify key sentences to sophisticated abstractive techniques that generate contextually meaningful summaries. In today’s digital landscape, where an immense volume of textual data is produced every day, the need for concise and coherent summaries is more crucial than ever. However, summarizing short texts, particularly from platforms like Twitter, presents unique challenges due to character constraints, informal language, and noise from elements such as hashtags, mentions, and URLs. To overcome these challenges, this paper introduces a deep learning framework for automated short text summarization on Twitter. The proposed approach combines bidirectional encoder representations from transformers (BERT) with a transformer-based encoder–decoder architecture (TEDA), incorporating an attention mechanism to improve contextual understanding. Additionally, long short-term memory (LSTM) networks are integrated within BERT to effectively capture long-range dependencies in tweets and their summaries. This hybrid model ensures that generated summaries remain informative, concise, and contextually relevant while minimizing redundancy. The performance of the proposed framework was assessed using three benchmark Twitter datasets—Hagupit, SHShoot, and Hyderabad Blast—with ROUGE scores serving as the evaluation metric. Experimental results demonstrate that the model surpasses existing approaches in accurately capturing key information from tweets. These findings underscore the framework’s effectiveness in automated short text summarization, offering a robust solution for efficiently processing and summarizing large-scale social media content.
(This article belongs to the Section Computational Engineering)
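The noise sources named in the abstract (hashtags, mentions, URLs) are typically stripped before encoding; a minimal preprocessing step of the kind assumed here, with illustrative regexes not taken from the paper, might look like:

```python
import re

def clean_tweet(text: str) -> str:
    """Strip URLs, @mentions, and hashtag markers before feeding text to the encoder."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # URLs
    text = re.sub(r"@\w+", " ", text)                   # mentions
    text = re.sub(r"#(\w+)", r"\1", text)               # keep hashtag word, drop '#'
    return re.sub(r"\s+", " ", text).strip()            # collapse whitespace

print(clean_tweet("Typhoon #Hagupit update via @pagasa https://example.com stay safe"))
# -> "Typhoon Hagupit update via stay safe"
```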
14 pages, 2520 KiB  
Article
Non-Iterative Recovery Information Procedure with Database Inspired in Hopfield Neural Networks
by Cesar U. Solis, Jorge Morales and Carlos M. Montelongo
Computation 2025, 13(4), 95; https://doi.org/10.3390/computation13040095 - 10 Apr 2025
Abstract
This work establishes a simple algorithm to recover an information vector from a predefined database that is available at all times. The information analyzed may be incomplete, damaged, or corrupted. The algorithm is inspired by Hopfield Neural Networks (HNN), which allow the recursive reconstruction of an information vector through an energy-minimizing optimal process, but this paper presents a procedure that generates results in a single iteration. Images are used as the information vectors in the recovery application. In addition, a filter is added to the algorithm to focus on the most important information when reconstructing data, allowing it to work with damaged or incomplete vectors without losing the ability to be a non-iterative process. A brief theoretical introduction and a numerical validation of the recovery are shown with an example of a database containing 40 images.
(This article belongs to the Section Computational Engineering)
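A toy version of one-shot, non-iterative recovery against a stored database, matching a damaged probe to the closest stored vector using only its intact entries, is sketched below; it illustrates the idea rather than the authors' exact procedure.

```python
import numpy as np

def recover(probe, database):
    """Return the stored vector closest to `probe`, ignoring NaN (missing) entries."""
    mask = ~np.isnan(probe)                    # the 'filter': use only intact entries
    diffs = database[:, mask] - probe[mask]
    best = np.argmin(np.einsum("ij,ij->i", diffs, diffs))  # squared distances, one shot
    return database[best]

rng = np.random.default_rng(1)
db = rng.choice([-1.0, 1.0], size=(40, 100))   # e.g., 40 binarized images as vectors
damaged = db[7].copy()
damaged[:30] = np.nan                          # 30% of the vector lost
print(np.array_equal(recover(damaged, db), db[7]))  # True (with high probability)
```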
26 pages, 11071 KiB  
Article
Fault Diagnosis in Analog Circuits Using a Multi-Input Convolutional Neural Network with Feature Attention
by Hui Yuan, Yaoke Shi, Long Li, Guobi Ling, Jingxiao Zeng and Zhiwen Wang
Computation 2025, 13(4), 94; https://doi.org/10.3390/computation13040094 - 9 Apr 2025
Abstract
Accurate fault diagnosis in analog circuits faces significant challenges owing to the inherent complexity of fault data patterns and the limited feature representation capabilities of conventional methodologies. Addressing the limitations of current convolutional neural networks (CNN) in handling heterogeneous fault characteristics, this study presents an efficient channel attention-enhanced multi-input CNN framework (ECA-MI-CNN) with dual-domain feature fusion, demonstrating three key innovations. First, the proposed framework addresses multi-domain feature extraction through parallel CNN branches specifically designed for processing time-domain and frequency-domain features, effectively preserving their distinct characteristic information. Second, the incorporation of an efficient channel attention (ECA) module between convolutional layers enables adaptive feature response recalibration, significantly enhancing discriminative feature learning while maintaining computational efficiency. Third, a hierarchical fusion strategy systematically integrates time-frequency domain features through concatenation and fully connected layer transformations prior to classification. Comprehensive simulation experiments conducted on Butterworth low-pass filters and two-stage quad op-amp dual second-order low-pass filters demonstrate the framework’s superior diagnostic capabilities. Real-world validation on Butterworth low-pass filters further reveals substantial performance advantages over existing methods, establishing an effective solution for complex fault pattern recognition in electronic systems.
(This article belongs to the Section Computational Engineering)
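The ECA block itself is a published lightweight attention unit (Wang et al., ECA-Net); a standard 1D variant suited to signal branches is sketched below, though the paper's exact configuration may differ.

```python
import torch
import torch.nn as nn

class ECA1d(nn.Module):
    """Efficient channel attention for (batch, channels, length) feature maps."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):
        y = self.pool(x)                      # (B, C, 1): per-channel descriptor
        y = self.conv(y.transpose(1, 2))      # 1D conv across channels, no dim reduction
        y = torch.sigmoid(y.transpose(1, 2))  # (B, C, 1): channel weights
        return x * y                          # recalibrate feature responses

x = torch.randn(8, 64, 128)
print(ECA1d()(x).shape)  # torch.Size([8, 64, 128])
```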
17 pages, 2802 KiB  
Article
Deep Multi-Component Neural Network Architecture
by Chafik Boulealam, Hajar Filali, Jamal Riffi, Adnane Mohamed Mahraz and Hamid Tairi
Computation 2025, 13(4), 93; https://doi.org/10.3390/computation13040093 - 8 Apr 2025
Abstract
Existing neural network architectures often struggle with two critical limitations: (1) information loss during dataset length standardization, where variable-length samples are forced into fixed dimensions, and (2) inefficient feature selection in single-modal systems, which treats all features equally regardless of relevance. To address these issues, this paper introduces the Deep Multi-Components Neural Network (DMCNN), a novel architecture that processes variable-length data by regrouping samples into components of similar lengths, thereby preserving information that traditional methods discard. DMCNN dynamically prioritizes task-relevant features through a component-weighting mechanism, which calculates the importance of each component via loss functions and adjusts weights using a SoftMax function. This approach eliminates the need for dataset standardization while enhancing meaningful features and suppressing irrelevant ones. Additionally, DMCNN seamlessly integrates multimodal data (e.g., text, speech, and signals) as separate components, leveraging complementary information to improve accuracy without requiring dimension alignment. Evaluated on the Multimodal EmotionLines Dataset (MELD) and CIFAR-10, DMCNN achieves state-of-the-art accuracy of 99.22% on MELD and 97.78% on CIFAR-10, outperforming existing methods like MNN and McDFR. The architecture’s efficiency is further demonstrated by its reduced trainable parameters and robust handling of multimodal and variable-length inputs, making it a versatile solution for classification tasks.
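The component-weighting mechanism, importance derived from per-component losses and normalized with a SoftMax, can be sketched as follows; the sign convention (lower loss means higher weight) is an assumption, not taken from the paper.

```python
import numpy as np

def component_weights(losses, temperature=1.0):
    """SoftMax over negated losses: components with lower loss get larger weights."""
    z = -np.asarray(losses, dtype=float) / temperature
    z -= z.max()                  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

print(component_weights([0.2, 0.9, 0.4]))  # largest weight on the 0.2-loss component
```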
25 pages, 1443 KiB  
Article
Predicting Urban Traffic Congestion with VANET Data
by Wilson Chango, Pamela Buñay, Juan Erazo, Pedro Aguilar, Jaime Sayago, Angel Flores and Geovanny Silva
Computation 2025, 13(4), 92; https://doi.org/10.3390/computation13040092 - 7 Apr 2025
Abstract
The purpose of this study is to develop a comparison of neural network-based models for vehicular congestion prediction, with the aim of improving urban mobility and mitigating the negative effects associated with traffic, such as accidents and congestion. This research focuses on evaluating the effectiveness of different neural network architectures, specifically Transformer and LSTM, in order to achieve accurate and reliable predictions of vehicular congestion. To carry out this research, a rigorous methodology was employed that included a systematic literature review based on the PRISMA methodology, which allowed for the identification and synthesis of the most relevant advances in the field. Likewise, the Design Science Research (DSR) methodology was applied to guide the development and validation of the models, and the CRISP-DM (Cross-Industry Standard Process for Data Mining) methodology was used to structure the process, from understanding the problem to implementing the solutions. The dataset used in this study included key variables related to traffic, such as vehicle speed, vehicular flow, and weather conditions. These variables were processed and normalized to train and evaluate various neural network architectures, highlighting LSTM and Transformer networks. The results obtained demonstrated that the LSTM-based model outperformed the Transformer model in the task of congestion prediction. Specifically, the LSTM model achieved an accuracy of 0.9463, with additional metrics such as a loss of 0.21, an accuracy of 0.93, a precision of 0.29, a recall of 0.71, an F1-score of 0.42, an MSE of 0.07, and an RMSE of 0.26. In conclusion, this study demonstrates that the LSTM-based model is highly effective for predicting vehicular congestion, surpassing other architectures such as Transformer. The integration of this model into a simulation environment showed that real-time traffic information can significantly improve urban mobility management. These findings support the utility of neural network architectures in sustainable urban planning and intelligent traffic management, opening new perspectives for future research in this field.
(This article belongs to the Section Computational Engineering)
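A minimal PyTorch sketch of the LSTM branch described above, taking sequences of speed, flow, and weather features and emitting a congestion logit; the layer sizes are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CongestionLSTM(nn.Module):
    """Binary congestion classifier over sequences of traffic features."""
    def __init__(self, n_features: int = 3, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # logit from the last time step

model = CongestionLSTM()
x = torch.randn(32, 24, 3)            # e.g., 24 time steps of speed/flow/weather
print(model(x).shape)                 # torch.Size([32, 1])
```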
22 pages, 10018 KiB  
Article
Eye Care: Predicting Eye Diseases Using Deep Learning Based on Retinal Images
by Araek Tashkandi
Computation 2025, 13(4), 91; https://doi.org/10.3390/computation13040091 - 3 Apr 2025
Abstract
Eye illness detection is important, yet it can be difficult and error-prone. In order to effectively and promptly diagnose eye problems, doctors must use cutting-edge technologies. The goal of this research paper is to develop a sophisticated model that will help physicians detect different eye conditions early on. These conditions include age-related macular degeneration (AMD), diabetic retinopathy, cataracts, myopia, and glaucoma. Common eye conditions include cataracts, which cloud the lens and cause blurred vision, and glaucoma, which can cause vision loss due to damage to the optic nerve. The two conditions that could cause blindness if treatment is not received are age-related macular degeneration (AMD) and diabetic retinopathy, a side effect of diabetes that destroys the blood vessels in the retina. Problems such as myopic macular degeneration, glaucoma, and retinal detachment are also more likely to occur in people with high myopia, a severe form of nearsightedness typically defined as a refractive error of −5 diopters or greater. We intend to apply a user-friendly approach that will allow for faster and more efficient examinations. Our research attempts to streamline the eye examination procedure, making it simpler and more accessible than traditional hospital approaches. Our goal is to use deep learning and machine learning to develop an extremely accurate model that can assess medical images, such as eye retinal scans. This was accomplished by using a huge dataset to train the machine learning and deep learning models, as well as sophisticated image processing techniques to assist the algorithm in identifying patterns of various eye illnesses. Following training, we discovered that the CNN, VggNet, MobileNet, and hybrid deep learning models outperformed the SVM and Random Forest machine learning models in terms of accuracy, achieving above 98%. Therefore, our model could assist physicians in enhancing patient outcomes, raising survival rates, and creating more effective treatment plans for patients with these illnesses.
(This article belongs to the Special Issue Computational Medical Image Analysis—2nd Edition)
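One common way to build such a model, fine-tuning an ImageNet-pretrained MobileNet on retinal images, is sketched below with torchvision; the five-class head and the backbone-freezing warm-up are assumptions, not the paper's recipe.

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and replace the classifier head with one output
# per eye condition (here 5: AMD, diabetic retinopathy, cataract, myopia, glaucoma).
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.last_channel, 5)

# Freeze the convolutional backbone so only the new head trains at first.
for p in model.features.parameters():
    p.requires_grad = False
```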
17 pages, 6320 KiB  
Article
Oscillation Flow of Viscous Electron Fluids in Conductors of Rectangular Cross-Section
by Andriy A. Avramenko, Igor V. Shevchuk, Nataliia P. Dmitrenko, Andriy I. Tyrinov, Yiliia Y. Kovetska and Andriy S. Kobzar
Computation 2025, 13(4), 90; https://doi.org/10.3390/computation13040090 - 1 Apr 2025
Abstract
The article presents results of analytical and numerical modeling of electron fluid motion and heat generation in a rectangular conductor under an alternating electric potential. The analytical solution is based on a series expansion (Fourier method) and a double series solution (method of eigenfunction decomposition). The numerical solution is based on the lattice Boltzmann method (LBM). An analytical solution for the electric current was obtained. This enables estimating the heat generation in the conductor and determining the influence of the parameters characterizing the conductor dimensions, the parameter M (phenomenological transport time describing momentum-nonconserving collisions), the Knudsen number (mean free path for momentum-nonconserving collisions), and the Sh number (frequency) on the heat generation rate as an electron flow passes through a conductor.
9 pages, 224 KiB  
Article
Invariance of Stationary Distributions of Exponential Networks with Prohibitions and Determination of Maximum Prohibitions
by Gurami Tsitsiashvili and Marina Osipova
Computation 2025, 13(4), 89; https://doi.org/10.3390/computation13040089 - 1 Apr 2025
Abstract
The paper considers queuing networks with prohibitions on transitions between network nodes that determine the protocol of their operation. In the graph of transient network intensities, a set of base vertices is allocated (proportional to the number of edges), and we raise the question of whether some subset of it can be deleted such that the stationary distribution of the Markov process describing the functioning of the network is preserved. For this condition to be fulfilled, it is sufficient that the set of vertices of the graph of transient intensities, after the removal of a subset of the base vertices, coincide with the set of states of the Markov process and that this graph be connected. It is proved that the ratio of the number of remaining base vertices to their total number n converges to one-half as n → ∞. In this paper, we are looking for graphs of transient intensities with a minimum (in some sense) set of edges for open and closed service networks.
(This article belongs to the Section Computational Engineering)
15 pages, 766 KiB  
Article
MedMAE: A Self-Supervised Backbone for Medical Imaging Tasks
by Anubhav Gupta, Islam Osman, Mohamed S. Shehata, W. John Braun and Rebecca E. Feldman
Computation 2025, 13(4), 88; https://doi.org/10.3390/computation13040088 - 1 Apr 2025
Abstract
Medical imaging tasks are very challenging due to the lack of publicly available labeled datasets. Hence, it is difficult to achieve high performance with existing deep learning models as they require a massive labeled dataset to be trained effectively. An alternative solution is to use pre-trained models and fine-tune them using a medical imaging dataset. However, all existing models are pre-trained using natural images, which represent a different domain from that of medical imaging; this leads to poor performance due to domain shift. To overcome these problems, we propose a pre-trained backbone using a collected medical imaging dataset with a self-supervised learning tool called a masked autoencoder. This backbone can be used as a pre-trained model for any medical imaging task, as it is trained to learn a visual representation of different types of medical images. To evaluate the performance of the proposed backbone, we use four different medical imaging tasks. The results are compared with existing pre-trained models. These experiments show the superiority of our proposed backbone in medical imaging tasks.
(This article belongs to the Special Issue Computational Medical Image Analysis—2nd Edition)
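The core of a masked autoencoder is random patch masking followed by reconstruction of the hidden patches; a minimal masking step, with illustrative names and the usual 75% ratio assumed, looks like:

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random subset of patch embeddings, as in masked autoencoders.

    patches: (B, N, D) patch embeddings; returns the kept subset and its
    indices so a decoder can reconstruct the masked positions.
    """
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    ids_shuffle = torch.rand(B, N).argsort(dim=1)  # random permutation per sample
    ids_keep = ids_shuffle[:, :n_keep]
    kept = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return kept, ids_keep

x = torch.randn(2, 196, 768)  # 14x14 patches from a 224x224 scan
kept, ids = random_masking(x)
print(kept.shape)             # torch.Size([2, 49, 768])
```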
21 pages, 329 KiB  
Article
Subsequential Continuity in Neutrosophic Metric Space with Applications
by Vishal Gupta, Nitika Garg and Rahul Shukla
Computation 2025, 13(4), 87; https://doi.org/10.3390/computation13040087 - 25 Mar 2025
Abstract
This paper introduces two concepts, subcompatibility and subsequential continuity, which are, respectively, weaker than the existing concepts of occasionally weak compatibility and reciprocal continuity. These concepts are studied within the framework of neutrosophic metric spaces. Using these ideas, a common fixed point theorem is developed for a system involving four maps. Furthermore, the results are applied to solve the Volterra integral equation, demonstrating the practical use of these findings in neutrosophic metric spaces.
(This article belongs to the Special Issue Nonlinear System Modelling and Control)
22 pages, 1039 KiB  
Article
A Machine Learning-Based Computational Methodology for Predicting Acute Respiratory Infections Using Social Media Data
by Jose Manuel Ramos-Varela, Juan C. Cuevas-Tello and Daniel E. Noyola
Computation 2025, 13(4), 86; https://doi.org/10.3390/computation13040086 - 25 Mar 2025
Abstract
We study the relationship between tweets referencing Acute Respiratory Infections (ARI) or COVID-19 symptoms and confirmed cases of these diseases. Additionally, we propose a computational methodology for selecting and applying Machine Learning (ML) algorithms to predict public health indicators using social media data. To achieve this, a novel pipeline was developed, integrating three distinct models to predict confirmed cases of ARI and COVID-19. The dataset contains tweets related to respiratory diseases, published between 2020 and 2022 in the state of San Luis Potosí, Mexico, obtained via the Twitter API (now X). The methodology is composed of three stages and involves tools such as Dataiku and Python with ML libraries. The first two stages focus on identifying the best-performing predictive models, while the third stage includes Natural Language Processing (NLP) algorithms for tweet selection. One of our key findings is that tweets contributed to improved predictions of ARI confirmed cases but did not enhance COVID-19 time series predictions. The best-performing NLP approach is the combination of the Word2Vec algorithm with the KMeans model for tweet selection. Furthermore, predictions for both time series improved by 3% in the second half of 2020 when tweets were included as a feature, where the best prediction algorithm is DeepAR.
(This article belongs to the Special Issue Feature Papers in Computational Biology)
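The winning NLP combination, Word2Vec embeddings clustered with KMeans to select symptom-related tweets, can be sketched on a toy corpus; the tokens and hyperparameters below are illustrative, not the paper's.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

tokenized = [
    ["fiebre", "tos", "dolor", "garganta"],
    ["covid", "sintomas", "fiebre"],
    ["partido", "futbol", "estadio"],
    ["tos", "gripe", "hospital"],
]
w2v = Word2Vec(tokenized, vector_size=32, min_count=1, seed=0)

# One vector per tweet: the mean of its word vectors.
tweet_vecs = np.array([np.mean([w2v.wv[w] for w in t], axis=0) for t in tokenized])

# Cluster, then keep tweets from the symptom-heavy cluster as prediction features.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tweet_vecs)
print(labels)
```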
18 pages, 15002 KiB  
Article
Numerical Analysis of the Impact of Variable Borer Miner Operating Modes on the Microclimate in Potash Mine Working Areas
by Lev Levin, Mikhail Semin, Stanislav Maltsev, Roman Luzin and Andrey Sukhanov
Computation 2025, 13(4), 85; https://doi.org/10.3390/computation13040085 - 24 Mar 2025
Abstract
This paper addresses the numerical simulation of unsteady, non-isothermal ventilation in a dead-end mine working of a potash mine excavated using a borer miner. During its operations, airflow can become unsteady due to the variable operating modes of the borer miner, the switching on and off of its motor cooling fans, and the movement of a shuttle car transporting ore. While steady ventilation in a dead-end working with a borer miner has been previously studied, the specific features of air microclimate parameter distribution in more complex and realistic unsteady scenarios remain unexplored. Our experimental studies reveal that over time, air velocity and, particularly, air temperature experience significant fluctuations. In this study, we develop and parameterize a mathematical model and perform a series of numerical simulations of unsteady heat and mass transfer in a dead-end working. These simulations account for the switching on and off of the borer miner’s fans and the movement of the shuttle car. The numerical model is calibrated using data from our experiments conducted in a potash mine. The analysis of the first factor is carried out by examining two extreme scenarios under steady-state ventilation conditions, while the second factor is analyzed within a fully unsteady framework using a dynamic mesh approach in ANSYS Fluent 2021 R2. The numerical results demonstrate that the borer miner’s operating mode notably impacts the velocity and temperature fields, with a twofold decrease in maximum velocity near the cabin after the shuttle car departed and a temperature difference of about 1–1.5 °C between extreme scenarios in the case of forcing ventilation. The unsteady simulations using the dynamic mesh approach revealed that temperature variations were primarily caused by the borer miner’s cooling system, while the moving shuttle car generated short-term aerodynamic oscillations.
(This article belongs to the Special Issue Advances in Computational Methods for Fluid Flow)
9 pages, 915 KiB  
Article
Tree-Based Methods of Volatility Prediction for the S&P 500 Index
by Marin Lolic
Computation 2025, 13(4), 84; https://doi.org/10.3390/computation13040084 - 24 Mar 2025
Abstract
Predicting asset return volatility is one of the central problems in quantitative finance. These predictions are used for portfolio construction, calculation of value at risk (VaR), and pricing of derivatives such as options. Classical methods of volatility prediction utilize historical returns data and include the exponentially weighted moving average (EWMA) and generalized autoregressive conditional heteroskedasticity (GARCH). These approaches have shown significantly higher rates of predictive accuracy than corresponding methods of return forecasting, but they still have vast room for improvement. In this paper, we propose and test several methods of volatility forecasting on the S&P 500 Index using tree ensembles from machine learning, namely random forest and gradient boosting. We show that these methods generally outperform the classical approaches across a variety of metrics on out-of-sample data. Finally, we use the unique properties of tree-based ensembles to assess what data can be particularly useful in predicting asset return volatility.
(This article belongs to the Special Issue Quantitative Finance and Risk Management Research: 2nd Edition)
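A minimal sketch of the tree-ensemble setup, predicting next-day realized volatility from lagged volatility features with a random forest; synthetic returns stand in for S&P 500 data, and the window and lag choices are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=2000)  # stand-in for S&P 500 daily returns

# 21-day realized volatility (annualization omitted for simplicity).
window = 21
vol = np.sqrt(np.convolve(returns**2, np.ones(window) / window, mode="valid"))

# Features: the previous 5 days of realized vol; target: the next day's vol.
lags = 5
X = np.column_stack([vol[i : len(vol) - lags + i] for i in range(lags)])
y = vol[lags:]

split = int(0.8 * len(y))                   # chronological train/test split
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:split], y[:split])
print(rf.score(X[split:], y[split:]))       # out-of-sample R^2
print(rf.feature_importances_)              # which lags the ensemble relies on
```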
19 pages, 1891 KiB  
Article
A High-Order Hybrid Approach Integrating Neural Networks and Fast Poisson Solvers for Elliptic Interface Problems
by Yiming Ren and Shan Zhao
Computation 2025, 13(4), 83; https://doi.org/10.3390/computation13040083 - 23 Mar 2025
Abstract
A new high-order hybrid method integrating neural networks and corrected finite differences is developed for solving elliptic equations with irregular interfaces and discontinuous solutions. Standard fourth-order finite difference discretization becomes invalid near such interfaces due to the discontinuities and requires corrections based on Cartesian derivative jumps. In traditional numerical methods, such as the augmented matched interface and boundary (AMIB) method, these derivative jumps can be reconstructed via additional approximations and are solved together with the unknown solution in an iterative procedure. Nontrivial developments have been carried out in the AMIB method in treating sharply curved interfaces, which, however, may not work for interfaces with geometric singularities. In this work, machine learning techniques are utilized to directly predict these Cartesian derivative jumps without involving the unknown solution. To this end, physics-informed neural networks (PINNs) are trained to satisfy the jump conditions for both closed and open interfaces with possible geometric singularities. The predicted Cartesian derivative jumps can then be integrated in the corrected finite differences. The resulting discrete Laplacian can be efficiently solved by fast Poisson solvers, such as fast Fourier transform (FFT) and geometric multigrid methods, over a rectangular domain with Dirichlet boundary conditions. This hybrid method is both easy to implement and efficient. Numerical experiments in two and three dimensions demonstrate that the method achieves fourth-order accuracy for the solution and its derivatives.
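The structural point, a network that outputs Cartesian derivative jumps directly from interface data with no dependence on the unknown solution, can be caricatured as a small supervised fit; the paper's PINNs enforce the physical jump conditions rather than this toy regression, and all names and the jump profile below are assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-in: fit a small network mapping interface points to jump data.
# (The paper trains PINNs on the jump *conditions*; here we fit prescribed
# jump values just to show that the unknown solution never enters.)
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

theta = torch.linspace(0, 2 * torch.pi, 200).unsqueeze(1)
pts = torch.cat([torch.cos(theta), torch.sin(theta)], dim=1)  # unit-circle interface
jumps = torch.sin(3 * theta)                                  # assumed jump profile

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(pts), jumps)
    loss.backward()
    opt.step()
print(float(loss))  # small: the net now supplies jumps for the FD correction
```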
13 pages, 2116 KiB  
Article
Numerical Simulation of Capture of Diffusing Particles in Porous Media
by Valeriy E. Arkhincheev, Bair V. Khabituev and Stanislav P. Maltsev
Computation 2025, 13(4), 82; https://doi.org/10.3390/computation13040082 - 22 Mar 2025
Abstract
Numerical modeling was conducted to study the capture of particles diffusing in porous media with traps. The pores are cylindrical in shape, and the traps are randomly distributed along the cylindrical surfaces of the pores. The dynamics of particle capture by the traps, as well as the filling of the traps, were investigated. In general, the decrease in the number of particles follows an exponential trend, with a characteristic time determined by the trap concentration. However, at longer times, extended plateaus emerge in the particle distribution function. Additionally, the dynamics of the interface boundary corresponding to the median trap filling (M = 0.5) were examined. This interface separates regions where traps are filled with a probability greater than 0.5 from regions where traps are filled with a probability less than 0.5. The motion of the interface over time was found to follow a logarithmic dependence. The influence of the pore radius on capture by traps placed on the internal surface of the cylinders was investigated. For the first time, different dependencies of the extinction time on the number of traps were found for different pore radii.
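A toy lattice analogue of the capture dynamics, random walkers absorbed at randomly placed traps, reproduces the decay stage described above; the 1D periodic geometry and all parameters are illustrative, not the paper's cylindrical-pore setup.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_traps, n_particles, n_steps = 200, 20, 5000, 4000

traps = rng.choice(np.arange(L), size=n_traps, replace=False)
is_trap = np.zeros(L, dtype=bool)
is_trap[traps] = True

pos = rng.integers(0, L, size=n_particles)   # random initial positions
alive = ~is_trap[pos]
survivors = []
for _ in range(n_steps):
    pos = (pos + rng.choice([-1, 1], size=n_particles)) % L  # unbiased walk
    alive &= ~is_trap[pos]                   # a particle is captured on a trap site
    survivors.append(int(alive.sum()))
print(survivors[::1000])  # monotone decay set by the trap concentration
```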
16 pages, 347 KiB  
Article
Introducing Monotone Enriched Nonexpansive Mappings for Fixed Point Approximation in Ordered CAT(0) Spaces
by Safeer Hussain Khan, Rizwan Anjum and Nimra Ismail
Computation 2025, 13(4), 81; https://doi.org/10.3390/computation13040081 - 21 Mar 2025
Abstract
The aim of this paper is twofold: introducing the concept of monotone enriched nonexpansive mappings and a faster iterative process. Our examples illustrate the novelty of our newly introduced concepts. We investigate the iterative estimation of fixed points for such mappings for the first time within an ordered CAT(0) space. This is done by proving some strong and Δ-convergence theorems. Additionally, numerical experiments are included to demonstrate the validity of our theoretical results and to establish the superiority of the convergence behavior of our iterative process. As an application, we use our newly introduced concepts to find the solution of an integral equation. The outcomes of our study expand upon and enhance certain established findings in the current body of literature.
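Enriched nonexpansive mappings are typically handled through the averaged operator T_lambda = (1 - lambda)I + lambda T (a Krasnoselskii-type scheme); a one-dimensional numerical illustration, not the paper's new iterative process, is:

```python
def krasnoselskii(T, x0, lam=0.5, tol=1e-10, max_iter=10_000):
    """Iterate x_{n+1} = (1 - lam) * x_n + lam * T(x_n) until convergence."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - lam) * x + lam * T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# T(x) = -x is nonexpansive with fixed point 0; the plain Picard iteration
# x_{n+1} = T(x_n) oscillates forever, but the averaged scheme converges.
print(krasnoselskii(lambda x: -x, x0=1.0))  # 0.0
```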
17 pages, 1513 KiB  
Article
Cascade-Based Input-Doubling Classifier for Predicting Survival in Allogeneic Bone Marrow Transplants: Small Data Case
by Ivan Izonin, Roman Tkachenko, Nazarii Hovdysh, Oleh Berezsky, Kyrylo Yemets and Ivan Tsmots
Computation 2025, 13(4), 80; https://doi.org/10.3390/computation13040080 - 21 Mar 2025
Abstract
In the field of transplantology, where medical decisions are heavily dependent on complex data analysis, the challenge of small data has become increasingly prominent. Transplantology, which focuses on the transplantation of organs and tissues, requires exceptional accuracy and precision in predicting outcomes, assessing risks, and tailoring treatment plans. However, the inherent limitations of small datasets present significant obstacles. This paper introduces an advanced input-doubling classifier designed to improve survival predictions for allogeneic bone marrow transplants. The approach utilizes two artificial intelligence tools: the first Probabilistic Neural Network generates output signals that expand the independent attributes of an augmented dataset, while the second machine learning algorithm performs the final classification. This method, based on the cascading principle, facilitates the development of novel algorithms for preparing and applying the enhanced input-doubling technique to classification tasks. The proposed method was tested on a small dataset within transplantology, focusing on binary classification. Optimal parameters for the method were identified using the Dual Annealing algorithm. Comparative analysis of the improved method against several existing approaches revealed a substantial improvement in accuracy across various performance metrics, underscoring its practical benefits.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
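The cascade structure, a first probabilistic model whose outputs are appended to the original attributes before a second, final classifier, can be sketched with scikit-learn; GaussianNB stands in for the Probabilistic Neural Network, and the dataset and parameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=180, n_features=10, random_state=0)  # small-data regime
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: a probabilistic model emits class probabilities that are appended
# to the original attributes, "doubling" the input information.
stage1 = GaussianNB().fit(X_tr, y_tr)
X_tr_aug = np.hstack([X_tr, stage1.predict_proba(X_tr)])
X_te_aug = np.hstack([X_te, stage1.predict_proba(X_te)])

# Stage 2: the final classifier works on the augmented attributes.
stage2 = RandomForestClassifier(random_state=0).fit(X_tr_aug, y_tr)
print(stage2.score(X_te_aug, y_te))
```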