
Comput. Sci. Math. Forum, 2025, ITISE 2025

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Number of Papers: 28

11 pages, 727 KB  
Proceeding Paper
Evaluating Sales Forecasting Methods in Make-to-Order Environments: A Cross-Industry Benchmark Study
by Marius Syberg, Lucas Polley and Jochen Deuse
Comput. Sci. Math. Forum 2025, 11(1), 1; https://doi.org/10.3390/cmsf2025011001 - 25 Jul 2025
Abstract
Sales forecasting in make-to-order (MTO) production is particularly challenging for small- and medium-sized enterprises (SMEs) due to high product customization, volatile demand, and limited historical data. This study evaluates the practical feasibility and accuracy of statistical and machine learning (ML) forecasting methods in MTO settings across three manufacturing sectors: electrical equipment, steel, and office supplies. A cross-industry benchmark assesses models such as ARIMA, Holt–Winters, Random Forest, LSTM, and Facebook Prophet. The evaluation considers error metrics (MAE, RMSE, and sMAPE) as well as implementation aspects like computational demand and interpretability. Special attention is given to data sensitivity and technical limitations typical in SMEs. The findings show that ML models perform well under high volatility and when enriched with external indicators, but they require significant expertise and resources. In contrast, simpler statistical methods offer robust performance in more stable or seasonal demand contexts and are better suited in certain cases. The study emphasizes the importance of transparency, usability, and trust in forecasting tools and offers actionable recommendations for selecting a suitable forecasting configuration based on context. By aligning technical capabilities with operational needs, this research supports more effective decision-making in data-constrained MTO environments.
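The error metrics named in the benchmark (MAE, RMSE, sMAPE) follow standard definitions; a minimal Python sketch, not the authors' code, and note that the sMAPE denominator convention varies across studies:

```python
import math

def mae(actual, forecast):
    # Mean Absolute Error: average error magnitude
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    # Root Mean Squared Error: penalizes large errors more heavily
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def smape(actual, forecast):
    # Symmetric MAPE in percent (one common convention)
    return 100 / len(actual) * sum(
        2 * abs(f - a) / (abs(a) + abs(f)) for a, f in zip(actual, forecast)
    )

actual = [100, 120, 90, 110]
forecast = [95, 125, 100, 105]
print(mae(actual, forecast), rmse(actual, forecast), smape(actual, forecast))
```

Reporting several metrics side by side, as the paper does, matters because MAE and RMSE rank models differently when large errors are rare but severe.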

10 pages, 775 KB  
Proceeding Paper
An Estimation of Risk Measures: Analysis of a Method
by Marta Ferreira and Liliana Monteiro
Comput. Sci. Math. Forum 2025, 11(1), 2; https://doi.org/10.3390/cmsf2025011002 - 25 Jul 2025
Abstract
Extreme value theory comprises a set of techniques for inference at the tail of distributions, where data are scarce or non-existent. The tail index is the main parameter, with risk measures such as value at risk or expected shortfall depending on it. In this study, we analyze a method for estimating the tail index through a simulation study, followed by an application to real data that includes estimation of the aforementioned risk measures.
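The abstract does not name the estimator under analysis; assuming the classical Hill estimator for the tail index, a minimal sketch on a synthetic Pareto sample:

```python
import math
import random

def hill_estimator(data, k):
    # Hill estimator of the tail index from the k largest order statistics
    x = sorted(data, reverse=True)
    gamma = sum(math.log(x[i] / x[k]) for i in range(k)) / k  # extreme value index
    return 1.0 / gamma                                        # tail index alpha

# Synthetic Pareto sample with true tail index alpha = 2 (inverse transform)
random.seed(0)
alpha = 2.0
sample = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(5000)]
print(hill_estimator(sample, k=200))  # lands near the true value 2
```

The choice of k (how far into the tail to look) drives the bias–variance trade-off, which is exactly the kind of behavior a simulation study like this one probes.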

21 pages, 343 KB  
Proceeding Paper
Detecting Financial Bubbles with Tail-Weighted Entropy
by Omid M. Ardakani
Comput. Sci. Math. Forum 2025, 11(1), 3; https://doi.org/10.3390/cmsf2025011003 - 25 Jul 2025
Abstract
This paper develops a novel entropy-based framework to quantify tail risk and detect speculative bubbles in financial markets. By integrating extreme value theory with information theory, I introduce the Tail-Weighted Entropy (TWE) measure, which captures how information scales with extremeness in asset price distributions. I derive explicit bounds for TWE under heavy-tailed models and establish its connection to tail index parameters, revealing a phase transition in entropy decay rates during bubble formation. Empirically, I demonstrate that TWE-based signals detect crises in equities, commodities, and cryptocurrencies days earlier than traditional variance-ratio tests, with Bitcoin’s 2021 collapse identified weeks prior to the peak. The results show that entropy decay—not volatility explosions—serves as the primary precursor to systemic risk, offering policymakers a robust tool for preemptive crisis management.
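The TWE measure itself is defined in the paper; as generic background only, the Shannon entropy of a binned return distribution (the information-theoretic building block) can be sketched as:

```python
import math
from collections import Counter

def shannon_entropy(values, bins=10):
    # Shannon entropy (bits) of an empirical distribution over equal-width bins
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0          # guard against a constant sample
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform = [i / 100 for i in range(100)]      # near-uniform sample: high entropy
print(shannon_entropy(uniform))              # close to log2(10), about 3.32 bits
```

A concentrating (low-entropy) return distribution carries less information per observation; the paper's contribution is weighting this quantity toward the tail.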
10 pages, 1431 KB  
Proceeding Paper
Time Series Forecasting for Touristic Policies
by Konstantinos Mavrogiorgos, Athanasios Kiourtis, Argyro Mavrogiorgou, Dimitrios Apostolopoulos, Andreas Menychtas and Dimosthenis Kyriazis
Comput. Sci. Math. Forum 2025, 11(1), 4; https://doi.org/10.3390/cmsf2025011004 - 30 Jul 2025
Abstract
The formulation of touristic policies is a time-consuming process that consists of a wide range of steps and procedures. These policies are highly dependent on the number of tourists and visitors to an area to be as effective as possible. This number is not always easy to estimate, since the corresponding data (i.e., the number of visitors per day) are often lacking. Hence, the estimate must be derived from alternative data sources. To this end, in this paper, the authors propose a neural network architecture that is trained on waste management data to estimate the number of visitors and tourists in the highly touristic municipality of Vari-Voula-Vouliagmeni, Greece.

9 pages, 607 KB  
Proceeding Paper
Nonlinear Dynamic Inverse Solution of the Diffusion Problem Based on Krylov Subspace Methods with Spatiotemporal Constraints
by Luis Fernando Alvarez-Velasquez and Eduardo Giraldo
Comput. Sci. Math. Forum 2025, 11(1), 5; https://doi.org/10.3390/cmsf2025011005 - 30 Jul 2025
Abstract
In this work, we propose a nonlinear dynamic inverse solution to the diffusion problem based on Krylov subspace methods with spatiotemporal constraints. The forward problem is a 1D diffusion problem with a nonlinear diffusion model. The dynamic inverse solution is obtained by minimizing a cost function with spatiotemporal constraints, applying the Generalized Minimal Residual (GMRES) Krylov subspace method to a linearized diffusion model. In addition, a Jacobian-based preconditioner is used to improve the convergence of the inverse solution. The proposed approach is evaluated under noise conditions in terms of the reconstruction error and the relative residual error. For the nonlinear diffusion model under noise, the approach performs better with the preconditioner than without it.
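For orientation only, the forward model can be illustrated with an explicit finite-difference step for 1D nonlinear diffusion; the diffusivity function below is hypothetical, and the paper's inverse GMRES machinery is not reproduced here:

```python
def diffusion_step(u, dt, dx, diffusivity):
    # One explicit finite-difference step of u_t = (D(u) u_x)_x on a 1D grid,
    # boundary values held fixed; diffusivity is the nonlinear function D(u).
    new = u[:]
    for i in range(1, len(u) - 1):
        d_right = 0.5 * (diffusivity(u[i]) + diffusivity(u[i + 1]))
        d_left = 0.5 * (diffusivity(u[i - 1]) + diffusivity(u[i]))
        new[i] = u[i] + dt / dx ** 2 * (
            d_right * (u[i + 1] - u[i]) - d_left * (u[i] - u[i - 1])
        )
    return new

# Hypothetical nonlinear diffusivity D(u) = 0.1 * (1 + u^2); a spike spreads out
u = [0.0] * 10 + [1.0] + [0.0] * 10
for _ in range(100):
    u = diffusion_step(u, dt=0.01, dx=0.1, diffusivity=lambda v: 0.1 * (1 + v * v))
print(max(u))  # the initial peak has decayed below 1 as it diffuses
```

The inverse problem then asks for the source or initial state that best reproduces observed values of u, which is where the regularized Krylov solve enters.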

16 pages, 2538 KB  
Proceeding Paper
Comparative Analysis of Temperature Prediction Models: Simple Models vs. Deep Learning Models
by Zibo Wang, Weiqi Zhang and Eugene Pinsky
Comput. Sci. Math. Forum 2025, 11(1), 6; https://doi.org/10.3390/cmsf2025011006 - 30 Jul 2025
Abstract
Accurate and concise temperature prediction models have important applications in meteorological science, agriculture, energy, and electricity. This study compares the performance of simple models and deep learning models in temperature prediction and explores whether simple models can replace deep learning models in specific scenarios to save computing resources. Based on 37 years of daily temperature time series data from 10 cities from 1987 to 2024, the Simple Moving Average (SMA), Seasonal Average Method with Lookback Years (SAM-Lookback), and Long Short-Term Memory (LSTM) models are fitted to evaluate the accuracy of simple and deep learning models in temperature prediction. The performance of the models is compared directly by calculating the RMSE and percentage error for each city. The results show that LSTM achieves higher accuracy in most cities, although SMA produces similar predictions and performs comparably well, while SAM-Lookback is relatively weak.
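Of the three models, the Simple Moving Average is easy to sketch; a minimal version that forecasts the next value as the mean of the last k observations (illustrative, not the study's code):

```python
def sma_forecast(series, window):
    # Forecast the next value as the average of the last `window` observations
    recent = series[-window:]
    return sum(recent) / len(recent)

temps = [21.0, 22.5, 23.0, 22.0, 21.5, 22.8, 23.2]
print(sma_forecast(temps, window=3))  # mean of the last three readings
```

The appeal of such baselines is exactly the study's point: near-zero computational cost and full interpretability, at some loss of accuracy versus an LSTM.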

10 pages, 403 KB  
Proceeding Paper
Assessing the Oil Price–Exchange Rate Nexus: A Switching Regime Evidence Using Fractal Regression
by Sami Diaf and Rachid Toumache
Comput. Sci. Math. Forum 2025, 11(1), 7; https://doi.org/10.3390/cmsf2025011007 - 31 Jul 2025
Abstract
Oil, as a key commodity in international markets, is important to both producers and consumers. For oil-exporting countries, periodic fluctuations have a considerable impact on economic conditions and on how monetary and fiscal policies should be conducted in the future. While most academic efforts have tried to link the low-frequency real exchange rate with macroeconomic fundamentals for medium-/long-term inference, they have neglected the volatile and complex high-frequency linkage between oil prices and exchange rate fluctuations. The inherent non-linear characteristics of such time series preclude the use of traditional tools or aggregated schemes based on lower frequencies for inference purposes. This work investigates the scale-based volatile linkage between daily international oil price fluctuations and nominal exchange rate variations of an oil-exporting country, namely Algeria, by adopting a fractal regression approach to uncover the power-law, time-varying transmission and track its incidence in the short and long runs. Results show the absence of any short-term transmission mechanism from oil prices to the exchange rate, as the two variables remain decoupled but exhibit an increasing negative correlation when long scales are considered. Furthermore, the multiscale regression analysis confirms the existence of a scale-free, two-state Markov switching regime process generating short- and long-term impacts with sizeable amplitudes. The findings confirm the usefulness of monetary policy interventions to stabilize the local currency, as the source of Dollar–Dinar multifractality was found to be the probability distribution of observations rather than long-range correlations specific to oil prices.

10 pages, 621 KB  
Proceeding Paper
An Autoregressive Moving Average Model for Time Series with Irregular Time Intervals
by Diana Alejandra Godoy Pulecio and César Andrés Ojeda Echeverri
Comput. Sci. Math. Forum 2025, 11(1), 8; https://doi.org/10.3390/cmsf2025011008 - 31 Jul 2025
Abstract
This research studies stochastic processes with irregularly spaced time intervals, which arise in a wide range of fields such as climatology, astronomy, medicine, and economics. Previous studies have proposed irregular autoregressive (iAR) and irregular moving average (iMA) models separately, as well as irregular autoregressive moving average (iARMA) processes for positive autocorrelations. The objective of this work is to generalize the iARMA model to include negative correlations. A first-order autoregressive moving average model for irregular discrete time series is presented, which is an ergodic and strictly stationary Gaussian process. Parameter estimation is performed by maximum likelihood, and its performance is evaluated for finite samples through Monte Carlo simulations. The autocorrelation function (ACF) is estimated with the DCF (Discrete Correlation Function) estimator, whose performance is evaluated by varying the sample size and the average time interval. The model was applied to real data from two contexts: the first is a two-week measurement of stellar flares in the Orion Nebula from the COUP, and the second is the measurement of sunspot cycles from 1860 to 1990 and their relationship to temperature variation in the Northern Hemisphere.

12 pages, 1701 KB  
Proceeding Paper
Analyzing and Classifying Time-Series Trends in Medals
by Minfei Liang, Yu Gao and Eugene Pinsky
Comput. Sci. Math. Forum 2025, 11(1), 9; https://doi.org/10.3390/cmsf2025011009 - 31 Jul 2025
Abstract
Since the 19th century, the development of metallurgical technology has been influenced by various factors, such as materials, casting technology, political policies, and the economic development of different countries. This paper analyzes the time-series evolution of medal issues in different countries and explores their historical and commemorative significance. Taking the characteristics of medal production places, types, compositions, diameters, weights, shapes, and thicknesses between 1850 and 2025 as indicators, data analysis methods such as time-series analysis, hierarchical cluster analysis (HCA), logistic regression, and random forests are used to study the process of medal development and its influencing factors over the past 175 years. The results show that, compared with the pre-World War II period, the weight and diameter of the medals of major countries changed significantly across periods. Moreover, around World War II there was a shift from traditional materials to cost-effective and convenient alternatives.

8 pages, 481 KB  
Proceeding Paper
Monitoring Multidimensional Risk in the Economy
by Alexander Tyrsin, Michail Gerasimov and Michael Beer
Comput. Sci. Math. Forum 2025, 11(1), 10; https://doi.org/10.3390/cmsf2025011010 - 31 Jul 2025
Abstract
In economics, risk analysis is often associated with certain difficulties. These include the presence of several correlated risk factors, non-stationarity of economic processes, and small data samples. A mathematical model of multidimensional risk is described which satisfies the main features of processes in the economy. In the task of risk monitoring, we represent the analyzed factors as a set of correlated non-stationary time series. The method allows us to assess the risk at each moment using small data samples. For this purpose, risk factors are locally described in the form of parabolic or linear trends. An example of monitoring the risk of reducing the level of socio-economic development of Russia in 2000–2023 is considered. The monitoring results showed that the proposed multivariate risk model was generally sensitive to all the most significant economic shocks and adequately responded to them.
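The local trend description can be illustrated with an ordinary least-squares line fitted over a short sliding window; this is a generic sketch, not the authors' risk model:

```python
def fit_linear_trend(window):
    # Closed-form OLS fit of y = a + b * t over t = 0, 1, ..., n - 1
    n = len(window)
    t_mean = (n - 1) / 2
    y_mean = sum(window) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(window))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den                 # local slope: direction of the risk factor
    a = y_mean - b * t_mean       # intercept at the start of the window
    return a, b

a, b = fit_linear_trend([1.0, 3.0, 5.0, 7.0])  # points on the line y = 1 + 2t
print(a, b)
```

Fitting low-order trends on short windows is what makes the approach workable with the small samples the abstract emphasizes.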

13 pages, 3038 KB  
Proceeding Paper
Inclusive Turnout for Equitable Policies: Using Time Series Forecasting to Combat Policy Polarization
by Natasya Liew, Sreeya R. K. Haninatha, Sarthak Pattnaik, Kathleen Park and Eugene Pinsky
Comput. Sci. Math. Forum 2025, 11(1), 11; https://doi.org/10.3390/cmsf2025011011 - 1 Aug 2025
Abstract
Selective voter mobilization dominates U.S. elections, with campaigns prioritizing swing voters to win critical states. While effective in the short term, this strategy deepens policy polarization, marginalizes minorities, and undermines representative democracy. This paper investigates voter turnout disparities and policy manipulation using advanced time series forecasting models (ARIMA, LSTM, and seasonal decomposition). Analyzing demographic and geographic data, we uncover significant turnout inequities, particularly for marginalized groups, and propose actionable reforms to enhance equitable voter participation. By integrating data-driven insights with theoretical perspectives, this study offers practical recommendations for campaigns and policymakers to counter polarization and foster inclusive democratic representation.
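Seasonal decomposition, one of the models applied, can be illustrated in its simplest form by averaging each position in the cycle (the paper's decomposition is more elaborate):

```python
def seasonal_means(series, period):
    # Average value at each seasonal position (e.g., period=4 for a 4-step cycle)
    sums = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(series):
        sums[i % period] += v
        counts[i % period] += 1
    return [s / c for s, c in zip(sums, counts)]

# Two full cycles of a hypothetical turnout pattern
turnout = [50, 60, 55, 70, 52, 62, 57, 72]
print(seasonal_means(turnout, period=4))  # [51.0, 61.0, 56.0, 71.0]
```

Subtracting these seasonal means from the series leaves trend and irregular components, the usual first step before fitting an ARIMA or LSTM to the remainder.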

9 pages, 543 KB  
Proceeding Paper
Modeling South African Stock Prices with Mixture Distributions
by Martin Chanza and Modisane Seitshiro
Comput. Sci. Math. Forum 2025, 11(1), 12; https://doi.org/10.3390/cmsf2025011012 - 31 Jul 2025
Abstract
This study investigates the behavior of South African stock prices during divestment periods using mixture distributions. Divestment often triggers significant market reactions, necessitating a deeper understanding of stock return distributions in such events. Given the complexities of emerging markets like South Africa, this research models stock price behavior to assess associated risks. A mixture distribution approach is employed to capture the return dynamics of stocks listed on the Johannesburg Stock Exchange (JSE) between 2015 and 2024. Gaussian Mixture Models (GMMs), Lognormal Mixture, and Student’s t mixture models are applied to financial, technology, and energy stocks affected by divestment. Model performance is assessed with the AIC and BIC information criteria. Mixture distributions outperform single-distribution models, effectively capturing heavy tails, volatility clustering, and asymmetry in stock returns. The GMM and Student’s t mixture models provide the best fit, revealing increased volatility and extreme negative returns following divestment events. Mixture distributions offer a robust framework for modeling South African stock prices during divestment periods. These models enhance the understanding of market dynamics, supporting better financial modeling and risk management in emerging markets.
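The AIC and BIC comparisons follow the standard definitions; a minimal sketch with hypothetical log-likelihood values for a mixture versus a single-component fit:

```python
import math

def aic(log_likelihood, k):
    # Akaike Information Criterion: lower is better
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # Bayesian Information Criterion: penalizes parameters more as n grows
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical fits: a 2-component Gaussian mixture (k=5 parameters)
# vs. a single Gaussian (k=2), on n=2000 returns
print(aic(-1210.4, 5), aic(-1250.9, 2))
print(bic(-1210.4, 5, n=2000), bic(-1250.9, 2, n=2000))
```

Because both criteria penalize extra parameters, a mixture only "wins" when its likelihood gain outweighs its added complexity, which is the comparison the study runs.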

11 pages, 391 KB  
Proceeding Paper
The Forecasting of Aluminum Prices: A True Challenge for Econometric Models
by Krzysztof Drachal and Joanna Jędrzejewska
Comput. Sci. Math. Forum 2025, 11(1), 13; https://doi.org/10.3390/cmsf2025011013 - 31 Jul 2025
Abstract
This paper explores the forecasting of aluminum prices using various predictive models dealing with variable uncertainty. A diverse set of economic and market indicators is considered as potential price predictors. The performance of models including LASSO, RIDGE regression, time-varying parameter regressions, LARS, ARIMA, Dynamic Model Averaging, Bayesian Model Averaging, etc., is compared according to forecast accuracy. Despite initial expectations that Bayesian dynamic mixture models would provide the best forecast accuracy, the results indicate that forecasting by futures prices and with Dynamic Model Averaging outperformed all other methods when monthly average prices are considered. Conversely, when monthly closing spot prices are considered, Bayesian dynamic mixture models prove very accurate compared to other methods, although beating the no-change method remains a hard challenge. Additionally, both revised and originally published macroeconomic time-series data are analyzed, ensuring consistency with the information available during real-time forecasting by mimicking the perspective of market players in the past.
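The no-change benchmark that proved hard to beat simply forecasts each price as the previous observation; a sketch of its evaluation on illustrative prices:

```python
def no_change_forecast_errors(prices):
    # The no-change (random walk) benchmark forecasts each price as the
    # previous observation; returns the absolute forecast errors.
    return [abs(prices[t] - prices[t - 1]) for t in range(1, len(prices))]

prices = [2400, 2380, 2450, 2430, 2500]   # illustrative monthly prices
errors = no_change_forecast_errors(prices)
print(sum(errors) / len(errors))          # mean absolute error of the benchmark
```

Any candidate model must achieve a lower error than this zero-parameter baseline to justify its complexity, which is why commodity-price papers routinely report it.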

10 pages, 1811 KB  
Proceeding Paper
Beyond the Hodrick Prescott Filter: Wavelets and the Dynamics of U.S.–Mexico Trade
by José Gerardo Covarrubias and Xuedong Liu
Comput. Sci. Math. Forum 2025, 11(1), 14; https://doi.org/10.3390/cmsf2025011014 - 1 Aug 2025
Abstract
This study analyzes the evolution of the Mexico–U.S. trade balance as a seasonally adjusted time series, comparing the Hodrick–Prescott (HP) filter and wavelet analysis. The HP filter allowed the trend and cycle to be extracted from the series, while wavelets decomposed the information into different time scales, revealing short-, medium-, and long-term fluctuations. The results show that HP provides a simplified view of the trend, while wavelets more accurately capture key events and cyclical dynamics. It is concluded that wavelets offer a more robust tool for studying the volatility and persistence of economic shocks in bilateral trade.
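One level of a Haar transform, the simplest wavelet, illustrates the scale separation the paper exploits (the authors may use a different wavelet family):

```python
import math

def haar_level(series):
    # One level of the Haar transform: approximation (smooth) and
    # detail (fluctuation) coefficients from consecutive pairs.
    approx = [(series[i] + series[i + 1]) / math.sqrt(2)
              for i in range(0, len(series) - 1, 2)]
    detail = [(series[i] - series[i + 1]) / math.sqrt(2)
              for i in range(0, len(series) - 1, 2)]
    return approx, detail

approx, detail = haar_level([4.0, 4.0, 2.0, 6.0])
print(approx)  # the smooth component at the coarser scale
print(detail)  # short-run fluctuations at the finest scale
```

Repeating the step on the approximation yields coarser and coarser scales, which is how wavelets separate short-, medium-, and long-term trade fluctuations.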

9 pages, 1187 KB  
Proceeding Paper
Leveraging Exogenous Regressors in Demand Forecasting
by S M Ahasanul Karim, Bahram Zarrin and Niels Buus Lassen
Comput. Sci. Math. Forum 2025, 11(1), 15; https://doi.org/10.3390/cmsf2025011015 - 1 Aug 2025
Abstract
Demand forecasting is different from traditional forecasting because it is a process of forecasting multiple time series collectively. It is challenging to implement models that can generalise and perform well, in terms of accuracy and scalability, while forecasting many time series altogether. Moreover, there can be external influences like holidays, disasters, promotions, etc., creating drifts and structural breaks, making accurate demand forecasting a challenge. Furthermore, the external features used for multivariate forecasting often worsen prediction accuracy because they introduce more unknowns into the forecasting process. This paper explores effective ways of leveraging exogenous regressors to surpass the accuracy of the univariate approach, creating synthetic scenarios to understand the performance of models and regressors. It finds that the forecastability of the correlated external features plays a large role in determining whether they improve or worsen accuracy for models like ARIMA; yet even extra regressors forecasted with 100% accuracy sometimes fail to surpass the univariate predictive accuracy. The findings are replicated in cases such as forecasting weekly docked bike demand per station every hour, where the multivariate approach outperformed the univariate approach by forecasting the regressors with Bi-LSTM and using their predicted values to forecast the target demand with ARIMA.

11 pages, 3342 KB  
Proceeding Paper
Fundamentals of Time Series Analysis in Electricity Price Forecasting
by Ciaran O’Connor, Andrea Visentin and Steven Prestwich
Comput. Sci. Math. Forum 2025, 11(1), 16; https://doi.org/10.3390/cmsf2025011016 - 11 Aug 2025
Abstract
Time series forecasting is a cornerstone of decision-making in energy and finance, yet many studies fail to rigorously analyse the underlying dataset characteristics, leading to suboptimal model selection and unreliable outcomes. This paper addresses these shortcomings by presenting a comprehensive framework that integrates fundamental time series diagnostics—stationarity tests, autocorrelation analysis, heteroscedasticity, multicollinearity, and correlation analysis—into forecasting workflows. Unlike existing studies that prioritise pre-packaged machine learning and deep learning methods, often at the expense of interpretable statistical benchmarks, our approach advocates for the combined use of statistical models alongside advanced machine learning methods. Using the Day-Ahead Market dataset from the Irish electricity market as a case study, we demonstrate how rigorous statistical diagnostics can guide model selection, improve interpretability, and increase forecasting accuracy. This work offers a novel, integrative methodology that bridges the gap between statistical rigour and modern computational techniques, improving reliability in time series forecasting.
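Among the diagnostics listed, autocorrelation is the simplest to sketch; a plain lag-k sample autocorrelation in Python (production workflows would use library routines with confidence bands):

```python
def autocorr(series, lag):
    # Sample autocorrelation of the series at the given lag
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t - lag] - mean) for t in range(lag, n))
    return cov / var

alternating = [1, -1] * 50
print(autocorr(alternating, 1))   # strongly negative for an alternating series
print(autocorr(list(range(20)), 1))  # strongly positive for a trending series
```

A slowly decaying autocorrelation function is the classic symptom of non-stationarity, signaling that differencing or detrending should precede model fitting.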

13 pages, 3943 KB  
Proceeding Paper
Emergent Behavior and Computational Capabilities in Nonlinear Systems: Advancing Applications in Time Series Forecasting and Predictive Modeling
by Kárel García-Medina, Daniel Estevez-Moya, Ernesto Estevez-Rams and Reinhard B. Neder
Comput. Sci. Math. Forum 2025, 11(1), 17; https://doi.org/10.3390/cmsf2025011017 - 11 Aug 2025
Abstract
Natural dynamical systems can often display various long-term behaviours, ranging from entirely predictable decaying states to unpredictable, chaotic regimes or, more interestingly, highly correlated and intricate states featuring emergent phenomena. That, of course, imposes a level of generality on the models we use to study them. Among those models, coupled oscillators and cellular automata (CA) present a unique opportunity to advance the understanding of complex temporal behaviours because of their conceptual simplicity but very rich dynamics. In this contribution, we review the work completed by our research team over the last few years in the development and application of an alternative information-based characterization scheme to study the emergent behaviour and information handling of nonlinear systems, specifically Adler-type oscillators under different types of coupling: local phase-dependent (LAP) coupling and Kuramoto-like local (LAK) coupling. We thoroughly studied the long-term dynamics of these systems, identifying several distinct dynamic regimes, ranging from periodic to chaotic and complex. The systems were analysed qualitatively and quantitatively, drawing on entropic measures and information theory. Measures such as entropy density (Shannon entropy rate), effective complexity measure, and Lempel–Ziv complexity/information distance were employed. Our analysis revealed similar patterns and behaviours between these systems and CA, which are computationally capable systems, for some specific rules and regimes. These findings further reinforce the argument around computational capabilities in dynamical systems, as understood by information transmission, storage, and generation measures. Furthermore, the edge of chaos hypothesis (EOC) was verified in coupled oscillators systems for specific regions of parameter space, where a sudden increase in effective complexity measure was observed, indicating enhanced information processing capabilities.
Given the potential for exploiting this non-anthropocentric computational power, we propose this alternative information-based characterization scheme as a general framework to identify a dynamical system’s proximity to computationally enhanced states. Furthermore, this study advances the understanding of emergent behaviour in nonlinear systems. It explores the potential for leveraging the features of dynamical systems operating at the edge of chaos by coupling them with computationally capable settings within machine learning frameworks, specifically by using them as reservoirs in Echo State Networks (ESNs) for time series forecasting and predictive modeling. This approach aims to enhance the predictive capacity, particularly that of chaotic systems, by utilising EOC systems’ complex, sensitive dynamics as the ESN reservoir.
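Entropy density (the Shannon entropy rate) is typically estimated from block statistics of a symbolized trajectory; a minimal block-entropy sketch, not the authors' implementation:

```python
import math
from collections import Counter

def block_entropy(symbols, block_len):
    # Shannon entropy (bits) of the empirical distribution of length-L blocks
    blocks = [tuple(symbols[i:i + block_len])
              for i in range(len(symbols) - block_len + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Entropy-rate estimate h = H(L) - H(L-1); near 0 bits/symbol for periodic dynamics
periodic = [0, 1] * 100
h = block_entropy(periodic, 3) - block_entropy(periodic, 2)
print(h)
```

A fully random symbol stream would give h close to 1 bit per symbol, while complex "edge of chaos" dynamics sit between these extremes, which is the regime the paper characterizes.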

11 pages, 422 KB  
Proceeding Paper
Cascading Multi-Agent Policy Optimization for Demand Forecasting
by Saeed Varasteh Yazdi
Comput. Sci. Math. Forum 2025, 11(1), 18; https://doi.org/10.3390/cmsf2025011018 - 31 Jul 2025
Abstract
Reliable demand forecasting is crucial for effective supply chain management, where inaccurate forecasts can lead to frequent out-of-stock or overstock situations. While numerous statistical and machine learning methods have been explored for demand forecasting, reinforcement learning approaches, despite their significant potential, remain little known in this domain. In this paper, we propose a multi-agent deep reinforcement learning solution designed to accurately predict demand across multiple stores. We present empirical evidence that demonstrates the effectiveness of our model using a real-world dataset. The results confirm the practicality of our proposed approach and highlight its potential to improve demand forecasting in retail and potentially other forecasting scenarios.

8 pages, 302 KB  
Proceeding Paper
Estimating Forecast Accuracy Metrics by Learning from Time Series Characteristics
by Alina Timmermann and Ananya Pal
Comput. Sci. Math. Forum 2025, 11(1), 19; https://doi.org/10.3390/cmsf2025011019 - 19 Aug 2025
Viewed by 90
Abstract
Accurate forecasts play a crucial role in various industries, where enhancing forecast accuracy has been a major focus of research. However, for volatile data and industrial applications, ensuring the reliability and interpretability of forecast results is equally important. This study shifts the focus from predicting future values to estimating forecast accuracy with confidence when no future validation data is available. To achieve this, we use time series characteristics calculated by statistical tests and estimate forecast accuracy metrics. Two methods are applied: first, estimation based on the Euclidean distances between time series characteristic values, and second, estimation by clustering time series characteristics. In-sample forecast accuracy serves as a benchmark method. A diverse industrial data set is used to evaluate the methods. The results demonstrate a significant correlation between certain time series characteristics and the estimation quality of forecast accuracy metrics. For all forecast accuracy metrics, the two proposed methods outperform the in-sample forecast estimation. These findings contribute to improving the reliability and interpretability of forecast evaluations, particularly in industrial applications with unstable data. Full article
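The distance-based variant of this idea can be sketched in a few lines: place a new series in a space of precomputed characteristics and inherit the known error metric of its nearest neighbours. The characteristic names, numbers, and k are all invented for illustration; the paper's actual feature set and estimator may differ.

```python
import math

# Hypothetical reference library: characteristic vectors (e.g. trend
# strength, seasonality strength, coefficient of variation) for series
# whose out-of-sample sMAPE is already known. All numbers are made up.
reference = {
    "series_a": ([0.9, 0.1, 0.2], 8.5),
    "series_b": ([0.2, 0.8, 0.5], 21.0),
    "series_c": ([0.5, 0.5, 0.9], 34.2),
}

def estimate_smape(features, k=1):
    """Estimate a forecast-error metric for a new series as the mean
    metric of its k nearest neighbours in characteristic space."""
    dists = sorted(
        (math.dist(features, feats), metric)
        for feats, metric in reference.values()
    )
    return sum(m for _, m in dists[:k]) / k

# A series whose characteristics resemble series_a inherits its error level.
print(estimate_smape([0.85, 0.15, 0.25]))  # -> 8.5
```

The clustering variant replaces the per-neighbour lookup with the mean metric of the cluster the new series falls into.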

10 pages, 4353 KB  
Proceeding Paper
Should You Sleep or Trade Bitcoin?
by Paridhi Talwar, Aman Jain and Eugene Pinsky
Comput. Sci. Math. Forum 2025, 11(1), 20; https://doi.org/10.3390/cmsf2025011020 - 22 Aug 2025
Viewed by 270
Abstract
Dramatic price swings and the possibility of extreme returns have made Bitcoin a hot topic of interest for investors and researchers alike. With the help of advanced neural network models, including CNN, RCNN, and LSTM networks, this paper delves into the intricacies of Bitcoin price behavior. We study different time intervals—close-to-close, close-to-open, open-to-close, and day-to-day—to find patterns that can inform an investment strategy. We compare average volatility over one-year, six-month, and three-month windows and weigh the predictive power of volatility against a traditional buy-and-hold strategy. Our findings point out the strengths and weaknesses of each neural network model and provide useful insights into optimizing cryptocurrency portfolios. This study contributes to the literature on price prediction and volatility analysis of cryptocurrencies, providing useful information to researchers and investors seeking to act strategically within the volatile cryptocurrency market. Full article
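The interval conventions named in the abstract reduce to simple ratios of open and close prices; a quick sketch with invented prices makes the definitions concrete (the OHLC values below are placeholders, not market data).

```python
# Illustrative (open, close) prices for consecutive trading days.
days = [(100.0, 104.0), (103.0, 101.0), (102.0, 107.0)]

def interval_returns(days):
    """Simple returns per interval: open-to-close (intraday),
    close-to-open (overnight), close-to-close (full day)."""
    out = {"open_to_close": [], "close_to_open": [], "close_to_close": []}
    for i, (o, c) in enumerate(days):
        out["open_to_close"].append(c / o - 1)
        if i > 0:
            prev_close = days[i - 1][1]
            out["close_to_open"].append(o / prev_close - 1)
            out["close_to_close"].append(c / prev_close - 1)
    return out

r = interval_returns(days)
# Overnight gap from day-1 close (104) to day-2 open (103):
print(round(r["close_to_open"][0], 4))  # -> -0.0096
```

The "sleep or trade" question in the title amounts to comparing the overnight (close-to-open) series against the intraday (open-to-close) one.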

13 pages, 925 KB  
Proceeding Paper
Forecasting Techniques for Univariate Time Series Data: Analysis and Practical Applications by Category
by Leonard Dervishi, Antonios Raptakis and Gerald Bieber
Comput. Sci. Math. Forum 2025, 11(1), 21; https://doi.org/10.3390/cmsf2025011021 - 11 Aug 2025
Viewed by 50
Abstract
Effective forecasting is vital in various domains as it supports informed decision-making and risk mitigation. This paper aims to improve the selection of appropriate forecasting methods for univariate time series. We propose a systematic categorization based on key characteristics, such as stationarity and seasonality, and analyze well-known forecasting techniques suitable for each category. Additionally, we examine how forecasting horizons, the time periods for which forecasts are generated, affect method performance, thus addressing a significant gap in the existing literature. Our findings reveal that certain techniques excel in specific categories and demonstrate performance progression over time, indicating how they improve or decline relative to other techniques. By enhancing the understanding of method effectiveness across diverse time series characteristics, this research aims to guide professionals in making informed choices for their forecasting needs. Full article
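A categorization like the one proposed here naturally yields a decision rule over the characteristic axes. The toy selector below uses a generic textbook pairing of methods to categories; it is not the paper's recommendation table, only a sketch of the selection mechanism.

```python
def pick_method(stationary: bool, seasonal: bool) -> str:
    """Toy selector over two categorization axes (stationarity,
    seasonality). The mapping is a conventional textbook pairing,
    used here purely to illustrate category-driven method choice."""
    if seasonal:
        return "SARIMA" if stationary else "Holt-Winters"
    return "ARMA" if stationary else "ARIMA"

print(pick_method(stationary=False, seasonal=True))  # -> Holt-Winters
```

The paper's contribution is, in effect, to ground such a mapping empirically and to make it horizon-aware.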

10 pages, 3530 KB  
Proceeding Paper
Exploring Multi-Modal LLMs for Time Series Anomaly Detection
by Hao Niu, Guillaume Habault, Huy Quang Ung, Roberto Legaspi, Zhi Li, Yanan Wang, Donghuo Zeng, Julio Vizcarra and Masato Taya
Comput. Sci. Math. Forum 2025, 11(1), 22; https://doi.org/10.3390/cmsf2025011022 - 11 Aug 2025
Viewed by 46
Abstract
Anomaly detection in time series data is crucial across various domains. Traditional methods often struggle with continuously evolving time series requiring adjustment, whereas large language models (LLMs) and multi-modal LLMs (MLLMs) have emerged as promising zero-shot anomaly detectors by leveraging embedded knowledge. This study expands recent evaluations of MLLMs for zero-shot time series anomaly detection by exploring newer models, additional input representations, varying input sizes, and conducting further analyses. Our findings reveal that while MLLMs are effective for zero-shot detection, they still face limitations, such as effectively integrating both text and vision representations or handling longer input lengths. These challenges unveil diverse opportunities for future improvements. Full article
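One of the input representations such studies vary is a plain-text serialization of the window handed to the (M)LLM. The sketch below shows one common format; the comma-separated, fixed-precision layout is an assumption for illustration, not this paper's exact prompt design.

```python
def serialize_window(values, decimals=2):
    """Render a numeric window as plain text -- one common way to hand
    a time series window to an LLM for zero-shot anomaly screening."""
    return ", ".join(f"{v:.{decimals}f}" for v in values)

window = [0.98, 1.01, 1.02, 9.75, 1.00]  # one obvious spike
print(serialize_window(window))
```

The vision-based alternative renders the same window as a line plot image; the paper's finding is that combining the two representations effectively remains difficult.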

21 pages, 2064 KB  
Proceeding Paper
Enhancing Public Health Insights and Interpretation Through AI-Driven Time-Series Analysis: Hierarchical Clustering, Hamming Distance, and Binary Tree Visualization of Infectious Disease Trends
by Ayauzhan Arystambekova and Eugene Pinsky
Comput. Sci. Math. Forum 2025, 11(1), 23; https://doi.org/10.3390/cmsf2025011023 - 11 Aug 2025
Viewed by 45
Abstract
This paper applies hierarchical clustering and Hamming Distance to analyze the temporal trends of infectious diseases across different regions of Uzbekistan. By leveraging hierarchical clustering, we effectively group regions based on disease similarity without requiring predefined cluster numbers. Hamming Distance further quantifies disease trajectory similarities, helping assess epidemiological patterns over time. Binary tree visualizations enhance the interpretability of clustering results, offering a novel method for identifying regional trends. The dataset includes yearly incidence rates of seven infectious diseases from 2012 to 2019, along with population, healthcare infrastructure, and geographic attributes for each region. This approach provides an interpretable framework for public health analysis and decision-making. Full article
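The core similarity computation can be sketched directly: binarize each region's yearly trajectory and count disagreements. Region names, the binarization rule (above/below the per-year national median), and all values below are invented for illustration.

```python
# Each region's yearly incidence series binarized against the national
# median per year (1 = above median); eight years per region.
trends = {
    "Region A": [1, 1, 1, 0, 1, 1, 0, 1],
    "Region B": [1, 1, 0, 0, 1, 1, 0, 1],
    "Region C": [0, 0, 1, 1, 0, 0, 1, 0],
}

def hamming(a, b):
    """Number of years in which two regions' binarized trends disagree."""
    return sum(x != y for x, y in zip(a, b))

# Ranking all pairs by Hamming distance is the starting point for
# agglomerative (hierarchical) clustering of regions.
names = list(trends)
pairs = sorted(
    (hamming(trends[r], trends[s]), r, s)
    for i, r in enumerate(names)
    for s in names[i + 1:]
)
print(pairs[0])  # most similar pair: (1, 'Region A', 'Region B')
```

Hierarchical clustering then merges the closest pair first and proceeds upward, which is what the paper's binary-tree visualizations display.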

12 pages, 1517 KB  
Proceeding Paper
Drift and Diffusion in Geospatial Econometrics: Implications for Panel Data and Time Series
by James Ming Chen
Comput. Sci. Math. Forum 2025, 11(1), 24; https://doi.org/10.3390/cmsf2025011024 - 11 Aug 2025
Viewed by 5
Abstract
Economic data is highly dependent on its arrangement within space and time. Perhaps the most obvious and important definition of space is geospatial configuration on the Earth’s surface. Consideration of geospatial effects produces a dramatic improvement in the prediction of median housing prices across 20,640 districts in California. Unconditional regression with engineered variables, two-stage least squares regression (2SLS), and iterative local regression approach r² ≈ 0.8536, the goodness of fit attained in the original California study. Geospatial methods can be generalized to panel data analysis and time-series forecasting. Distance-sensitive analysis reveals the value of treating time-variant data as potentially discrete and discontinuous. This insight highlights the value of methodologies that suspend the assumption that data varies in a continuous or even linear fashion across space and time. Full article

10 pages, 301 KB  
Proceeding Paper
Comparative Analysis of Forecasting Models for Disability Resource Planning in Brazil’s National Textbook Program
by Luciano Cabral, Luam Santos, Jário Santos Júnior, Thyago Oliveira, Dalgoberto Pinho Júnior, Nicholas Cruz, Joana Lobo, Breno Duarte, Lenardo Silva, Rafael Silva and Bruno Pimentel
Comput. Sci. Math. Forum 2025, 11(1), 25; https://doi.org/10.3390/cmsf2025011025 - 13 Aug 2025
Viewed by 5
Abstract
The accurate forecasting of student disability trends is essential for optimizing educational accessibility and resource distribution in the context of Brazil’s oldest public policy, the National Textbook Program (PNLD). This study applies machine learning (ML) and time series forecasting (TSF) models to predict the number of visually impaired students in Brazil using educational census data from 2021 to 2023, with the aim of estimating the number of Braille textbooks to be acquired in the PNLD’s context. By performing a comparative analysis of various ML models (e.g., Naive Bayes, ElasticNet, gradient boosting) and TSF techniques (e.g., ARIMA and SARIMA models, as well as exponential smoothing) to predict future enrollment trends, we identify the most effective approaches for school-level and long-term disability enrollment predictions. Results show that ElasticNet and gradient boosting outperform TSF models in forecasting enrollment estimations. Despite challenges related to data inconsistencies and reporting variations, incorporating external demographic and health data could further improve predictive accuracy. This research contributes to AI-driven educational accessibility by demonstrating how predictive analytics can enhance policy decisions and ensure an equitable distribution of resources for students with disabilities. Full article

11 pages, 1904 KB  
Proceeding Paper
The Explainability of Machine Learning Algorithms for Victory Prediction in the Video Game Dota 2 
by Julio Losada-Rodríguez, Pedro A. Castillo, Antonio Mora and Pablo García-Sánchez
Comput. Sci. Math. Forum 2025, 11(1), 26; https://doi.org/10.3390/cmsf2025011026 - 18 Aug 2025
Viewed by 4
Abstract
Video games, especially competitive ones such as Dota 2, have gained great relevance both as entertainment and in e-sports, where predicting the outcome of games can offer significant strategic advantages. In this context, machine learning (ML) is presented as a useful tool for analysing and predicting performance in these games based on data collected before the start of the games, such as character selection information. Thus, in this work, we have developed and tested ML models, including Random Forest and Gradient Boosting, to predict the outcome of Dota 2 matches. This study is innovative in that it incorporates explainability techniques using Shapley Additive Explanations (SHAP) graphs, allowing us to understand which specific factors influence model predictions. Data extracted from the OpenDota API were preprocessed and used to train the models, evaluating them using metrics such as accuracy, precision, recall, F1-score, and cross-validated accuracy. The results indicate that predictive models, particularly Random Forest, can accurately predict game outcomes based only on pregame information, also suggesting that the explainability of machine learning techniques can be effective for analysing strategic factors in competitive video games. Full article

10 pages, 404 KB  
Proceeding Paper
Simplicity vs. Complexity in Time Series Forecasting: A Comparative Study of iTransformer Variants
by Polycarp Shizawaliyi Yakoi, Xiangfu Meng, Danladi Suleman, Adeleye Idowu, Victor Adeyi Odeh and Chunlin Yu
Comput. Sci. Math. Forum 2025, 11(1), 27; https://doi.org/10.3390/cmsf2025011027 - 22 Aug 2025
Viewed by 3
Abstract
This study re-examines the balance between architectural intricacy and generalization in Transformer models for long-term time series predictions. We perform a systematic comparison involving a lightweight baseline (iTransformer) and two enhanced versions: MiTransformer, which incorporates an external memory component for extending context, and DFiTransformer, which features dual-frequency decomposition along with Learnable Cross-Frequency Attention. All models undergo training using the same protocols across eight standard benchmarks and four forecasting periods. Findings indicate that both MiTransformer and DFiTransformer do not reliably surpass the baseline. In many instances, the increased complexity leads to greater variance and decreased accuracy, especially with unstable or inconsistent datasets. These results imply that architectural minimalism, when effectively refined, can match or surpass the effectiveness of more complex designs—challenging the prevailing trend toward increasingly intricate forecasting architectures. Full article

2730 KB  
Proceeding Paper
Analysis of Economic and Growth Synchronization Between China and the USA Using a Markov-Switching–VAR Model: A Trend and Cycle Approach
by Mariem Bouattour, Malek Abaab, Hajer Chibani, Hamdi Becha and Kamel Helali
Comput. Sci. Math. Forum 2025, 11(1), 28; https://doi.org/10.3390/cmsf2025011028 - 30 Jul 2025
Viewed by 7
Abstract
This study examines the synchronization of economic and growth cycles between China and the United States of America amid ongoing economic and geopolitical tensions. Using a Markov-Switching–Vector Autoregression (MS-VAR) model, the analysis applies the Hodrick–Prescott and Baxter–King filters to monthly data from January 2000 to December 2024, capturing trends and cyclical fluctuations. The findings reveal asymmetries in economic synchronization, with differences in recession and expansion durations influenced by trade disputes, financial integration, and external shocks. As the rivalry between the two nations intensifies, marked by trade wars, technological competition, and geopolitical conflicts, understanding their economic co-movement becomes crucial. This study contributes to the literature by providing empirical insights into their evolving interdependence and offers policy recommendations for mitigating asymmetric shocks and promoting global economic stability. Full article

12 pages, 1667 KB  
Proceeding Paper
Multivariate Forecasting Evaluation: Nixtla-TimeGPT
by S M Ahasanul Karim, Bahram Zarrin and Niels Buus Lassen
Comput. Sci. Math. Forum 2025, 11(1), 29; https://doi.org/10.3390/cmsf2025011029 (registering DOI) - 26 Aug 2025
Abstract
Generative models are being used in all domains. While primarily built for processing text and images, their reach has been further extended towards data-driven forecasting. Whereas there are many statistical, machine learning, and deep learning models for predictive forecasting, generative models are special because they do not need to be trained beforehand, saving time and computational power. Multivariate forecasting with existing models is also difficult when the future horizons of the regressors are unknown, because the regressors add more uncertainty to the forecasting process. This study therefore experiments with TimeGPT (zero-shot) by Nixtla to determine whether the generative model can outperform models such as ARIMA, Prophet, NeuralProphet, Linear Regression, XGBoost, Random Forest, LSTM, and RNN. To this end, we created synthetic datasets and synthetic signals to assess individual model and regressor performance across 12 models, and then used these findings to assess the performance of TimeGPT against the best-fitting models in different real-world scenarios. The results show that TimeGPT outperforms the other models in multivariate forecasting at weekly granularity by automatically selecting important regressors, whereas its performance at daily and monthly granularities is still weak. Full article
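A synthetic benchmark of the kind described would embed a known regressor effect in the target so that regressor handling can be scored. The sketch below is one plausible construction (weekly demand, yearly seasonality, a promotion flag); all names, magnitudes, and the 104-week horizon are invented, not the study's actual generator.

```python
import math
import random

random.seed(1)

weeks = 104
promo = [1 if w % 4 == 0 else 0 for w in range(weeks)]  # known regressor
demand = [
    50.0
    + 10.0 * math.sin(2 * math.pi * w / 52)  # yearly seasonality
    + 15.0 * promo[w]                        # controlled regressor effect
    + random.gauss(0.0, 2.0)                 # noise
    for w in range(weeks)
]

# Because the regressor effect (15) dwarfs the noise, a model that truly
# uses the regressor should separate promo weeks from baseline weeks.
promo_mean = sum(d for d, p in zip(demand, promo) if p) / sum(promo)
base_mean = sum(d for d, p in zip(demand, promo) if not p) / (weeks - sum(promo))
print(promo_mean - base_mean > 10)
```

Scoring each forecaster on how much of this injected effect it recovers is what lets the study compare regressor handling across the 12 models.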
