Search Results (2,208)

Search Parameters:
Keywords = financial prediction

21 pages, 1188 KB  
Article
RW-UCFI: A Risk-Weighted Uncertainty-Conditioned Explainability Framework for Stacked Ensemble Models in B2B Financial Risk Profiling
by Carolus Borromeus Widiyatmoko, Rahmat Gernowo and Budi Warsito
Information 2026, 17(4), 363; https://doi.org/10.3390/info17040363 - 10 Apr 2026
Abstract
Interpretability in corporate financial risk profiling must support not only predictive performance but also governance-oriented decision-making. This study proposes a three-class financial risk assessment workflow for B2B settings and introduces Risk-Weighted Uncertainty-Conditioned Feature Importance (RW-UCFI) as a post-explanation prioritization framework. RW-UCFI is not a new attribution method; rather, it reorganizes existing explanation outputs according to class sensitivity, predictive uncertainty, and asymmetric risk relevance. The empirical analysis uses a single cross-sectional dataset of 954 Indonesia Stock Exchange-listed firms with organizationally provided Low Risk, Medium Risk, and High Risk labels. A stacked ensemble model is used as the explanatory substrate, followed by calibration analysis, uncertainty analysis, and governance-oriented explainability aggregation. On the held-out validation set, the model achieved an accuracy of 0.7487 and a macro ROC-AUC of 0.8630. Repeated stratified validation indicated moderately stable aggregate performance, although class-level reliability remained uneven, with High Risk recall emerging as the weakest and most variable component. The original model showed the most favorable probability reliability among the evaluated variants, whereas temperature scaling and one-vs-rest isotonic regression did not improve calibration. Uncertainty analysis further showed that the most uncertain cases concentrated substantially more misclassifications and High Risk misses; the top 30% most uncertain cases captured 52.1% of all errors and 43.8% of High Risk misses. RW-UCFI produced a materially different feature-priority structure from standard global SHAP ranking, suggesting that explanation outputs may become more decision-relevant for governance-oriented review when contextualized by uncertainty and asymmetric risk conditions in the present setting. Full article
(This article belongs to the Special Issue Data-Driven Decision-Making in Intelligent Systems)
35 pages, 856 KB  
Article
Stock Forecasting Based on Informational Complexity Representation: A Framework of Wavelet Entropy, Multiscale Entropy, and Dual-Branch Network
by Guisheng Tian, Chengjun Xu and Yiwen Yang
Entropy 2026, 28(4), 424; https://doi.org/10.3390/e28040424 - 10 Apr 2026
Abstract
Stock price sequences are characterized by pronounced nonlinearity, non-stationarity, and multi-scale volatility. They are further influenced by complex, multi-source factors, such as macroeconomic conditions and market behavior, making high-precision forecasting highly challenging. Existing approaches are limited by noise and multi-dimensional market features, as well as difficulties in balancing prediction accuracy with model complexity. To address these challenges, we propose Wavelet Entropy and Cross-Attention Network (WECA-Net), which combines wavelet decomposition with a multimodal cross-attention mechanism. From an information-theoretic perspective, stock price dynamics reflect the time-varying uncertainty and informational complexity of the market. We employ wavelet entropy to quantify the dispersion and uncertainty of energy distribution across frequency bands, and multiscale entropy to measure the scale-dependent complexity and regularity of the time series. These entropy-derived descriptors provide an interpretable prior of “information content” for cross-modal attention fusion, thereby improving robustness and generalization under non-stationary market conditions. Experiments on Chinese stock indices, A-Share, and CSI 300 component stock datasets demonstrate that WECA-Net consistently outperforms mainstream models in Mean Absolute Error (MAE) and R2 across all datasets. Notably, on the CSI 300 dataset, WECA-Net achieves an R2 of 0.9895, underscoring its strong predictive accuracy and practical applicability. This framework is also well aligned with sensor data fusion and intelligent perception paradigms, offering a robust solution for financial signal processing and real-time market state awareness. Full article
(This article belongs to the Section Complexity)
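Wavelet entropy, as used in the abstract above, has a compact generic definition: the Shannon entropy of the relative energy held in each decomposition band. The sketch below is a minimal pure-Python illustration using a Haar wavelet and three levels; both choices are ours for illustration, not the paper's actual transform or depth.

```python
import math
import random

def haar_step(x):
    """One level of the Haar transform: (approximation, detail) coefficients."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

def wavelet_entropy(x, levels=3):
    """Shannon entropy of the relative wavelet energy per band: low when
    one band dominates, high when energy is dispersed across scales."""
    bands, approx = [], list(x)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        bands.append(detail)
    bands.append(approx)  # coarsest approximation band
    energies = [sum(c * c for c in band) for band in bands]
    total = sum(energies) or 1.0
    probs = [e / total for e in energies]
    return -sum(p * math.log(p) for p in probs if p > 0)

# A slow sine concentrates energy in few bands; white noise spreads it out.
rng = random.Random(0)
smooth = [math.sin(2 * math.pi * t / 32) for t in range(256)]
noisy = [rng.uniform(-1, 1) for _ in range(256)]
print(wavelet_entropy(smooth) < wavelet_entropy(noisy))  # expect True
```

The entropy of the noisy series sits near log(levels + 1), while a near-periodic series stays well below it, which is the "informational complexity" contrast the paper exploits as a prior.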
24 pages, 1675 KB  
Article
A Comparative Analysis of Green and Brown Stocks: The Impact of Uncertainty Indices on Tail-Risk Forecasting
by Antonio Naimoli and Giuseppe Storti
Forecasting 2026, 8(2), 31; https://doi.org/10.3390/forecast8020031 - 10 Apr 2026
Abstract
This paper examines whether climate, geopolitical and economic policy uncertainty indices improve Value-at-Risk (VaR) and Expected Shortfall (ES) forecasts for green and brown stocks. We extend the Realized-ES-CAViaR framework by incorporating physical and transition climate risk, geopolitical risk and economic policy uncertainty indices alongside a high-low range volatility estimator. Using daily data for the iShares Global Clean Energy ETF (ICLN) and the iShares Global Energy ETF (IXC) over the period January 2012–December 2024, we evaluate alternative model specifications at the 1% and 2.5% risk levels through backtesting procedures, strictly consistent scoring rules and the Model Confidence Set methodology. Results reveal a pronounced asymmetry in the predictive content of risk indices across asset classes and quantile levels. Transition climate risk dominates tail-risk forecasting at the 1% level for both asset classes, while geopolitical risk and economic policy uncertainty emerge as the leading factors at the 2.5% level for green and brown stocks, respectively. These findings highlight the heterogeneous channels through which uncertainty shocks propagate into financial tail-risk, with direct implications for risk management and regulatory oversight during the low-carbon transition. Full article
(This article belongs to the Section Forecasting in Economics and Management)
27 pages, 6134 KB  
Article
SHAP-Based Insights into Environmental and Economic Performance of a Shower Heat Exchanger Under Unbalanced Flow Conditions: A Feasibility Study
by Sabina Kordana-Obuch and Mariusz Starzec
Energies 2026, 19(8), 1845; https://doi.org/10.3390/en19081845 - 9 Apr 2026
Abstract
Heat recovery from greywater is one solution for improving the energy efficiency of buildings and reducing greenhouse gas emissions. Particular attention is paid to systems utilizing heat from shower water, which, due to its high temperature and regularity, represents a promising energy source. However, the interplay of parameters determining the financial and environmental effectiveness of such a solution has not yet been fully explored. Therefore, the aim of this paper was to identify key variables influencing the feasibility of using a shower heat exchanger operating under unbalanced flow conditions and to assess the consistency between financial and environmental effects. The analyzed net present values ranged from −€1381 to €52,168. Greenhouse gas emission reduction values ranged between 61 kgCO2e and 37,207 kgCO2e. The analysis was conducted using predictive modeling and the SHAP (SHapley Additive exPlanations) method, which allows for the interpretation of the impact of individual variables on the forecasted net present value and potential greenhouse gas emission reduction. A global analysis was carried out to determine the relative importance of variables, as well as a local analysis for selected cases. The results showed that operational variables related to shower use, particularly shower length and mixed water flow rate, significantly influenced the prediction results of both models. In the case of emission reduction, greenhouse gas emission intensity and its change over time also had a significant impact, whilst the financial effects were determined by the energy price from the perspective of the subsequent years of the system’s operation. Full article
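SHAP values of the kind used above approximate the underlying Shapley attribution, which can be computed exactly for tiny models by enumerating feature coalitions. The sketch below is a generic illustration only: the linear "NPV model", its weights, and the baseline are all invented, not the paper's fitted model. For a linear model, the Shapley value of feature i reduces to w_i * (x_i - baseline_i), which the enumeration reproduces.

```python
import itertools
import math

def shapley_values(f, x, baseline):
    """Exact Shapley attribution of f(x) relative to f(baseline):
    each feature's average marginal contribution over all coalitions."""
    n = len(x)
    def value(subset):
        # Features outside the coalition are held at their baseline.
        return f([x[j] if j in subset else baseline[j] for j in range(n)])
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for coalition in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                s = set(coalition)
                phi[i] += w * (value(s | {i}) - value(s))
    return phi

def npv_model(z):
    """Invented linear stand-in for a predictive model."""
    return 2.0 * z[0] - 1.0 * z[1] + 0.5 * z[2]

phi = shapley_values(npv_model, x=[1.0, 3.0, 2.0], baseline=[0.0, 0.0, 0.0])
print([round(p, 6) for p in phi])  # [2.0, -3.0, 1.0]
```

Enumeration costs 2^n model calls per feature, which is why SHAP's approximations are needed for the multi-variable models in studies like this one.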
23 pages, 2118 KB  
Article
IDBspRS: An Interior Design-Built Service Package Recommendation System Using Artificial Intelligence
by Pranabanti Karmaakar, Muhammad Aslam Jarwar, Junaid Abdul Wahid and Najam Ul Hasan
Sustainability 2026, 18(7), 3605; https://doi.org/10.3390/su18073605 - 7 Apr 2026
Abstract
Digital transformation in the interior design industry has opened new opportunities for innovation; however, many cost-conscious homeowners still face difficulties in selecting and customizing design packages that achieve a balance between overall cost and sustainable quality. Existing interior design platforms lack seamless support and often require homeowners to invest considerable time and effort to tailor services to their needs while staying within budget. To address these challenges, this paper explores the use of machine learning to build a predictive modelling framework that supports personalized and value-driven interior design recommendations. The proposed approach uses a hybrid recommendation system that combines content-based and collaborative filtering. It also incorporates lightweight techniques such as TF–IDF (Term Frequency–Inverse Document Frequency) and logistic regression to more effectively capture user preferences, budget limits, and several interior-design service categories. Primary data was collected from small to medium-sized interior design companies. To demonstrate the proposed approach, a user-friendly web application tool is developed to integrate machine learning-enabled recommendation services. The resulting solution provides access to professional interior design services, enhancing customization and customer satisfaction while reducing the time and effort required from homeowners. To validate and compare the performance of the proposed approach, several machine learning models including Random Forest, XGBoost and KNN (K-Nearest Neighbors) were tested using standard metrics such as accuracy, precision, recall, and ROC-AUC (Receiver Operating Characteristic-Area Under the Curve). The proposed logistic regression hybrid model achieved the strongest overall results, with an accuracy of 83.62%. 
These findings demonstrate the significant contribution of this work to enhancing personalization and accessibility in the interior design sector via machine learning-enabled recommendation systems. The proposed approach bridges the gap between expert-level services and financial limits, making it a practical choice for cost-conscious homeowners. Full article
(This article belongs to the Special Issue AI and ML Applications for a Sustainable Future)
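The content-based half of a hybrid recommender like the one described above can be sketched in a few lines: vectorize service-package descriptions with smoothed TF-IDF and rank them by cosine similarity to a user query. This is a generic illustration under invented package texts, not IDBspRS itself, and it omits the collaborative-filtering and logistic-regression components.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Length-normalized term frequency times smoothed IDF, one dict per doc."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for tokens in tokenized for t in set(tokens))
    idf = {t: math.log((1 + n) / (1 + c)) + 1 for t, c in df.items()}
    return [
        {t: (count / len(tokens)) * idf[t] for t, count in Counter(tokens).items()}
        for tokens in tokenized
    ]

def cosine(u, v):
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical package descriptions; real systems would index many more fields.
packages = [
    "modern minimalist living room budget lighting",
    "luxury classic bedroom premium furniture",
    "budget friendly kitchen modern lighting",
]
query = "modern lighting on a budget"
vectors = tfidf_vectors(packages + [query])
scores = [cosine(vectors[-1], v) for v in vectors[:-1]]
best = max(range(len(packages)), key=scores.__getitem__)
print(packages[best])  # the budget/modern/lighting package ranks highest
```

A production pipeline would blend these content scores with collaborative signals and a supervised re-ranker, as the abstract describes.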
32 pages, 507 KB  
Article
Rookie Independent Directors and Corporate Policies: Evidence from China
by Waqas Bin Khidmat, Sook Fern Yeo and Cheng Ling Tan
J. Risk Financial Manag. 2026, 19(4), 265; https://doi.org/10.3390/jrfm19040265 - 7 Apr 2026
Abstract
In this study, we investigate how corporate policies are influenced by the presence of rookie independent directors (RIDs). We hypothesize that RIDs, due to their inexperience, impact corporate policies in ways that may amplify agency problems. Specifically, firms with RIDs demonstrate higher investment in R&D and capital expenditure, increased leverage (both short- and long-term), enhanced liquidity (cash holdings and working capital), and elevated risk-taking, while their presence leads to a conservative payout policy. Using a sample of Chinese-listed firms from 2008 to 2022, our findings confirm these predictions. Additional analyses reveal that RIDs’ effects are more pronounced in high-CEO-power environments, where their limited governance capabilities may align with managerial interests, exacerbating financial risks. This study contributes to the corporate governance literature by integrating upper echelon and agency theories, shedding light on the dual-edged role of RIDs in shaping corporate outcomes. Full article
(This article belongs to the Section Business and Entrepreneurship)
37 pages, 1919 KB  
Article
LLMs for Integrated Business Intelligence: A Big Data-Driven Framework Integrating Marketing Optimization, Financial Performance, and Audit Quality
by Leonidas Theodorakopoulos, Aristeidis Karras, Alexandra Theodoropoulou and Christos Klavdianos
Big Data Cogn. Comput. 2026, 10(4), 110; https://doi.org/10.3390/bdcc10040110 - 5 Apr 2026
Abstract
Enterprise decision making in marketing, finance, and audit remains fragmented, leading to inefficient budget allocation and incomplete risk assessment. This study proposes an integrated, Big Data-driven decision-support framework that unifies Large Language Models (LLMs), attention-based marketing mix modeling, and multi-agent, game-theoretic optimization to coordinate cross-functional decisions. The architecture combines five modules: LLM-enhanced customer segmentation and customer lifetime value prediction, attention-weighted marketing mix modeling, multi-agent LLM systems for hierarchical budget optimization, attention-informed Markov multi-touch attribution, and LLM-augmented audit quality assessment. Empirical validation on a large-scale e-commerce dataset with 2.8 million customers and USD 156 million in marketing expenditure shows that marketing return on investment increases from 4.2 to 6.78 (61.4% relative improvement), financial forecasting error (MAPE) decreases from 12.8% to 4.7% (63.3% reduction), fraud detection accuracy improves by 29.8%, the Audit Quality Index reaches 0.951, and customer lifetime value prediction accuracy improves from 76.4% to 91.3%. By operationalizing the convergence of LLMs, attention mechanisms, and game-theoretic reasoning within a unified and empirically validated framework, the study delivers both theoretical advances and practically deployable tools for integrated business intelligence in digital economies. Full article
(This article belongs to the Section Large Language Models and Embodied Intelligence)
29 pages, 2990 KB  
Article
Federated and Interpretable AI Framework for Secure and Transparent Loan Default Prediction in Financial Institutions
by Awad M. Awadelkarim
Math. Comput. Appl. 2026, 31(2), 56; https://doi.org/10.3390/mca31020056 - 5 Apr 2026
Abstract
Predicting loan defaults is a significant challenge for financial institutions; however, current machine learning techniques often encounter issues in areas such as data privacy, cross-institutional cooperation, and model transparency. The practical implementation of advanced predictive models is restricted by centralized training paradigms, which limit their application because of regulatory and confidentiality concerns, and by black-box decision making, which diminishes confidence in automated credit risk tools. This study mitigates these problems by adopting a federated-inspired decentralized ensemble learning model combined with explainable artificial intelligence (XAI) for loan default prediction. Various machine learning classifiers, including K-Nearest Neighbors, support vector machine, random forest, and XGBoost, are trained on partitioned institutional data without the need to share any data. A prediction-level aggregation strategy simulates the collaborative decision-making process while preserving data locality. SHAP and LIME are used to promote model interpretability by giving both global and local explanations of prediction outcomes. The proposed framework was tested on a large public loan dataset containing more than 116,000 records with various financial and borrower-related features. The experimental findings show that XGBoost delivers high and reliable predictive accuracy in both centralized and decentralized scenarios, achieving 99.7% accuracy under federated-inspired evaluation. The explanation analysis identifies interest rate spread and upfront charges as the most significant predictors of loan default risk. 
The main contributions of this research are as follows: (i) a privacy-preserving decentralized ensemble learning framework applicable in multi-institutional financial contexts, (ii) a detailed comparison of centralized and decentralized predictive performance, and (iii) an XAI pipeline that increases transparency and regulatory confidence in automated credit risk evaluation. These results demonstrate that decentralized learning combined with explainable AI can deliver high-performing, transparent, and privacy-preserving loan default prediction in real-world banking systems. Full article
17 pages, 278 KB  
Data Descriptor
A Survey Dataset on Student Retention in Higher Education: A Colombian Public University Case
by Erika María López-López, Osnamir Elias Bru-Cordero and Cristian David Correa Álvarez
Data 2026, 11(4), 75; https://doi.org/10.3390/data11040075 - 3 Apr 2026
Abstract
Student attrition remains a persistent challenge in higher education and is shaped by interacting socioeconomic, academic, institutional, and wellbeing-related mechanisms. Although learning analytics and educational data mining increasingly support early-warning and intervention workflows, dataset reuse is often limited by incomplete documentation and inconsistent variable definitions. This Data Descriptor presents a structured cross-sectional survey dataset on factors influencing student persistence at a Colombian public university campus (La Paz). Data were collected between August and December 2025 through an online questionnaire and subsequently cleaned to remove duplicate entries and personally identifiable information. The released dataset contains 333 student records and 33 variables covering demographics (e.g., age, gender, first-generation status), socioeconomic conditions (e.g., residential stratum, housing, financial aid), academic experience and satisfaction (multiple 1–5 Likert items), perceived dropout intention across personal/socioeconomic/academic domains, thematically coded open-ended items describing challenges and motives, and a self-allocation of 0–100 weights across three dropout-factor domains. We provide a machine-readable codebook, a transparent preprocessing description, and technical validation checks (value ranges, category consistency, and composite-score integrity). The dataset is intended to support reproducible retention research, equity-oriented analyses, and benchmarking of predictive models, while encouraging responsible reuse through privacy-preserving release practices and FAIR-aligned metadata, repository deposition, and versioning. Full article
23 pages, 399 KB  
Article
Integrating Model Explainability and Uncertainty Quantification for Trustworthy Fraud Detection
by Tebogo Forster Mapaila and Makhamisa Senekane
Technologies 2026, 14(4), 212; https://doi.org/10.3390/technologies14040212 - 3 Apr 2026
Abstract
Financial fraud and money laundering continue to challenge financial stability and regulatory oversight, motivating the widespread adoption of machine learning models for transaction monitoring. Although ensemble models such as Random Forest and XGBoost achieve strong predictive performance, their deployment in high-stakes financial environments is constrained by limited interpretability, overconfident predictions, and the absence of principled mechanisms for expressing decision uncertainty. Emerging regulatory expectations increasingly emphasise transparency, accountability, and operational reliability, underscoring the need for evaluation frameworks that extend beyond predictive accuracy. This study proposes the Integrated Transparency and Confidence Framework (ITCF), a deployment-oriented approach that unifies model explainability, statistically valid uncertainty quantification, and operational decision support for fraud detection. ITCF combines instance-level explanations generated via Local Interpretable Model-Agnostic Explanations (LIME) with distribution-free uncertainty estimation using split conformal prediction. The framework incorporates selective explainability, abstention-based routing, and uncertainty-driven triage to support human-in-the-loop review. Using the PaySim dataset of 6,362,620 mobile-money transactions, Random Forest and XGBoost models are evaluated under extreme class imbalance using F1-score, AUC–ROC, and Matthews Correlation Coefficient (MCC). At a target coverage level of 90% (α=0.1), both models achieve empirical coverage close to the target level, with XGBoost producing smaller prediction sets and superior recall, MCC, and latency. ITCF provides transaction-level explanations for uncertain cases and specifies an auditable workflow that is intended to support transparency, traceability, and risk-aware human review, thereby enabling defensible human decision-making in regulated environments. 
Overall, this study illustrates how explainability and uncertainty quantification can be combined in a deployment-oriented evaluation workflow while noting that real-world validation remains a future endeavour. Full article
(This article belongs to the Special Issue Privacy-Preserving and Trustworthy AI for Industrial 4.0 and Beyond)
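Split conformal prediction, as used in the ITCF framework above, follows a short generic recipe: compute nonconformity scores on held-out calibration data, take a finite-sample-corrected quantile, and include in each prediction set every label scoring below it. The sketch below is a minimal illustration with a synthetic two-class "fraud" classifier; the simulated probabilities and class mix are invented, not the paper's PaySim models.

```python
import math
import random

def conformal_threshold(cal_scores, alpha=0.1):
    """Split conformal: the ceil((n + 1) * (1 - alpha)) / n empirical
    quantile of the calibration nonconformity scores."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # 1-indexed rank
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(probs, qhat):
    """Keep every label whose nonconformity score 1 - p is within qhat."""
    return {label for label, p in probs.items() if 1 - p <= qhat}

# Synthetic scores from a deliberately imperfect binary classifier.
rng = random.Random(7)

def simulate(n):
    rows = []
    for _ in range(n):
        fraud = rng.random() < 0.3
        p = min(max(rng.gauss(0.8 if fraud else 0.2, 0.25), 0.01), 0.99)
        rows.append(({True: p, False: 1 - p}, fraud))
    return rows

calibration = simulate(1000)
qhat = conformal_threshold([1 - probs[y] for probs, y in calibration])

test_rows = simulate(5000)
coverage = sum(y in prediction_set(probs, qhat) for probs, y in test_rows) / 5000
print(f"empirical coverage at alpha=0.1: {coverage:.3f}")  # close to 0.90
```

The marginal coverage guarantee is distribution-free, which is why the paper can report empirical coverage near the 90% target without assumptions on the underlying classifier.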
25 pages, 728 KB  
Review
Augmented Finance for Climate Action: A Systematic Review of AI, IoT, and Blockchain Applications in Sustainable Finance
by Nadia Mansour
Int. J. Financial Stud. 2026, 14(4), 91; https://doi.org/10.3390/ijfs14040091 - 3 Apr 2026
Abstract
This systematic review assesses the roles of artificial intelligence (AI), the Internet of Things (IoT), and blockchain in augmented finance, critically synthesizing the existing literature to identify how these technologies may help address the complex financial challenges that accompany climate change in the context of sustainable finance. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we reviewed and analyzed 42 peer-reviewed studies published between 2018 and 2025. Our results fall into three general areas: (1) improved measurement, reporting, and verification (MRV) of environmental impacts through IoT and blockchain, which ensure transparency and traceability; (2) better physical and transition risk control using predictive AI modeling; and (3) improved environmental, social, and governance (ESG) analysis, detection of greenwashing, and risk reduction via alternative data. We highlight the power of these technologies to address problems such as information asymmetry and transparency gaps in impact chains. However, significant challenges persist, including algorithmic bias, difficulties associated with data governance, and regulatory delays. By synthesizing the evidence into a cohesive overview of the augmented finance landscape, this study identifies key challenges and priorities for future work, and it proposes a research agenda with emphasis on impact assessment, algorithmic transparency, and effects on financial stability. Full article
27 pages, 1956 KB  
Article
A Data-Driven Procedure for Cost and Risk Control in Construction Investments: Quantifying Budget Gaps via Expert Scoring and Probabilistic Simulation—Evidence from a Heritage Hotel Project
by Silvia Dotres-Zúñiga, Libys Martha Zúñiga-Igarza, Alexander Sánchez-Rodríguez, Gelmar García-Vidal, Rodobaldo Martínez-Vivar and Reyner Pérez-Campdesuñer
Buildings 2026, 16(7), 1410; https://doi.org/10.3390/buildings16071410 - 2 Apr 2026
Abstract
Risk management is critical to maintain consistency between estimated and actual costs in construction investment projects, especially those that incorporate tourism and heritage components. This study aims to quantify the impact of risk factors on construction investment costs and to estimate an updated maximum project budget at a defined confidence level using an integrated expert-based and probabilistic approach. The approach combines a Frequency–Impact matrix, weighted scaling, and PERT/Monte Carlo simulation, thereby transforming expert judgments into comparable numerical parameters suitable for predictive modeling. The methodology is applied to the rehabilitation of the Esmeralda Hotel project in Cuba, a heritage asset characterized by high cultural value and technical complexity. The results quantify the effects of prioritized risk factors, compute their impact coefficients, and re-estimate the project’s upper budget limit at a 95% confidence level. The findings show that risk drivers associated with higher-complexity construction processes concentrate the main vulnerabilities and explain most of the increase in total cost. In addition, the analysis indicates that contingency margins established by regulation are insufficient to absorb the project’s observed variability. The proposed model supports proactive budget control by anticipating cost deviations, improving resource allocation, and strengthening decision-making under high uncertainty. Its flexible structure enables adaptation to different project types and serves as a practical decision-support tool for investors, designers, and project managers seeking greater financial accuracy and reduced risk of cost overruns. Full article
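The PERT/Monte Carlo step described above can be illustrated generically: each cost item gets an optimistic/most-likely/pessimistic estimate, the PERT distribution (a scaled Beta) is sampled per item, and the simulated totals yield a budget ceiling at the chosen confidence level. The cost items below are invented for illustration and are not the Esmeralda Hotel project's figures.

```python
import random

def pert_sample(rng, low, mode, high, lam=4.0):
    """Draw from a PERT distribution (scaled Beta) given
    optimistic / most-likely / pessimistic estimates."""
    alpha = 1 + lam * (mode - low) / (high - low)
    beta = 1 + lam * (high - mode) / (high - low)
    return low + rng.betavariate(alpha, beta) * (high - low)

# Hypothetical cost items: (optimistic, most likely, pessimistic) estimates.
items = [(120, 150, 230), (80, 95, 160), (40, 55, 90), (200, 240, 380)]

rng = random.Random(42)
totals = sorted(
    sum(pert_sample(rng, a, m, b) for a, m, b in items)
    for _ in range(20000)
)
p95 = totals[int(0.95 * len(totals))]   # budget ceiling at 95% confidence
base = sum(m for _, m, _ in items)      # deterministic most-likely estimate
print(f"most-likely total {base}, P95 budget {p95:.0f}")
```

Because PERT distributions here are right-skewed, the simulated P95 exceeds the sum of most-likely estimates, which is exactly the gap between deterministic budgeting and the risk-adjusted ceiling the paper quantifies.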
30 pages, 958 KB  
Article
Toward Sustainability Through Carbon Neutrality: Predicting Innovation Quality in New Energy Firms from the Business Environment
by Shan Liu and Xiaozhen Wang
Sustainability 2026, 18(7), 3459; https://doi.org/10.3390/su18073459 - 2 Apr 2026
Abstract
Achieving sustainable development and carbon neutrality requires continuous technological upgrading in the new energy sector. Improvement of innovation quality in new energy firms therefore plays a significant role in sustainability transitions. However, whether and how the business environment supports the innovation quality in the new energy sector remains unclear. Using machine learning, our study assesses the predictive ability of the business environment for innovation quality in new energy firms, distinguishes the importance of different elements, and then portrays predictive patterns of critical elements. The results show that the business environment provides substantial predictive ability for innovation quality, increasing out-of-sample R2 from 0.6200 to 0.7001, which represents an improvement of 0.0801. Among the focal explanatory variables, human resources, financial environment, and public services emerge as relatively important elements. Furthermore, we find that human resources and innovation quality exhibit an overall upward trend, whereas public services and financial environment have a complex relationship with innovation quality. Heterogeneity analysis reveals that the predictive ability of the business environment for innovation quality varies significantly across firms with different ownership and locations. Our study provides evidence for policy design and business environment optimization to strengthen the institutional foundations of sustainable development. Full article

16 pages, 1311 KB  
Article
When Better Prediction Reduces Overlap: The Predictability Paradox in Propensity Score Matching with Machine Learning
by Foong Soon Cheong
Econometrics 2026, 14(2), 19; https://doi.org/10.3390/econometrics14020019 - 1 Apr 2026
Abstract
Evidence from observational studies plays a central role in shaping public policy in health, education, and financial regulation, where randomized experiments are rarely feasible. Propensity score matching (PSM) is a widely used method to approximate fair comparisons between treatment and control groups. Incorporating machine learning into the estimation of propensity scores can strengthen prediction and enhance the credibility of findings. However, stronger predictive models create a “predictability paradox”. As predictive accuracy improves, estimated propensity scores for treated and control units become more distinct when treatment assignment is strongly predictable from observed covariates, revealing limited overlap between groups. In the limit, near-perfect prediction produces near-complete separation between groups, rendering traditional matching infeasible and confining inference to a narrow subset of units near the boundary of the propensity score distribution, a setting analogous to a regression discontinuity design (RDD). Researchers thus face perverse incentives to use weaker models for statistically significant but spurious results. These dynamics jeopardize the reliability of evidence for policy. To safeguard decision-making, we propose a simple reform: require that studies using PSM disclose model error rates, including false positive and false negative rates, along with information on overlap and effective sample size. Full article
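The "predictability paradox" the abstract describes can be shown in a few lines: as treatment assignment becomes more predictable from a covariate, the estimated propensity score distributions of treated and control units separate and common support shrinks. A sketch on synthetic data; the overlap measure (shared histogram mass of the two score distributions) is one choice among several:

```python
# As the covariate's influence on treatment assignment grows, propensity
# scores for treated vs. control units separate and overlap collapses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 1))  # a single observed covariate

def overlap_mass(strength):
    """Simulate treatment with the given predictability, fit a propensity
    model, and return the shared mass of the two score histograms
    (1.0 = identical distributions, 0.0 = complete separation)."""
    p_true = 1 / (1 + np.exp(-strength * x[:, 0]))
    t = rng.binomial(1, p_true)
    scores = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
    bins = np.linspace(0, 1, 51)
    h1, _ = np.histogram(scores[t == 1], bins=bins)
    h0, _ = np.histogram(scores[t == 0], bins=bins)
    return np.minimum(h1 / h1.sum(), h0 / h0.sum()).sum()

results = {s: overlap_mass(s) for s in (0.5, 2.0, 8.0)}
for s, m in results.items():
    print(f"predictive strength {s}: overlap = {m:.2f}")
```

The stronger the propensity model's predictive accuracy, the smaller the overlap, which is precisely why the abstract argues for mandatory disclosure of overlap and effective sample size alongside model error rates.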

25 pages, 3938 KB  
Article
Hybrid Deep Learning Techniques Integrated with Machine Learning for Foreign Exchange Rate Forecasting
by Yu Cui and Jingjing Jiang
Electronics 2026, 15(7), 1463; https://doi.org/10.3390/electronics15071463 - 1 Apr 2026
Abstract
Foreign exchange is a significant financial market that attracts investors and countries seeking profitable investments. Despite the numerous techniques available for exchange rate forecasting and trend analysis, there is still a need for an automated, intelligent model that understands patterns and predicts future trends. Such prediction models can assist investors, financial institutions, and government policymakers. To address this need, this study develops a novel hybrid deep learning model that combines a Bidirectional Long Short-Term Memory (Bi-LSTM) network, an additive attention mechanism, and a random forest regressor to predict the following year’s official exchange rates (LCU per USD) from long-horizon historical data. The Bi-LSTM captures temporal dependencies across the historical exchange rate series, the attention layer focuses on the most influential time steps, and the random forest regressor models nonlinear feature interactions and aids generalization. A chronological partition (1960–2018 for training, 2019–2023 for validation, and 2024 for testing) provides a realistic forecasting setup and prevents temporal leakage. On a global panel dataset covering more than 250 countries and regions over 60+ years, the proposed model outperforms classical machine learning models, stand-alone deep learning models, and naive persistence models. The hybrid model achieves the largest reduction in prediction error, with an R2 of 0.98, demonstrating robust long-horizon currency forecasting. Full article
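The chronological train/validation/test partition described above is the part of the pipeline most easily sketched without the deep learning stack. The following keeps only the random-forest component of the hybrid (the Bi-LSTM and attention layers are omitted for brevity) and uses a synthetic mean-reverting series in place of real exchange rates; the lag-feature construction is an assumption, not the paper's exact feature set:

```python
# Chronological split (1960-2018 train, 2019-2023 validation, 2024 test)
# with lag features and a random forest, on a synthetic exchange-rate series.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
years = np.arange(1960, 2025)
rate = np.empty(years.size)
rate[0] = 10.0
for i in range(1, years.size):  # AR(1) series around a level of 10
    rate[i] = 10 + 0.8 * (rate[i - 1] - 10) + rng.normal(scale=0.5)

def lag_features(series, n_lags=3):
    # Each row holds the n_lags previous values; the target is the next value.
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    return X, series[n_lags:]

X, y = lag_features(rate)
yrs = years[3:]                        # year of each target value
train = yrs <= 2018
valid = (yrs >= 2019) & (yrs <= 2023)
test = yrs == 2024

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[train], y[train])          # fit on 1960-2018 only: no leakage
print("validation R2:", round(r2_score(y[valid], model.predict(X[valid])), 3))
print("2024 prediction:", round(float(model.predict(X[test])[0]), 2))
```

Fitting only on pre-2019 rows is what prevents temporal leakage: every feature the model sees at prediction time predates the target year.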
