  • Indexed in Scopus
  • Time to First Decision: 21 days

Analytics

Analytics is an international, peer-reviewed, open access journal on methodologies, technologies, and applications of analytics, published quarterly online by MDPI.

All Articles (135)

Integrating Deep Learning Nodes into an Augmented Decision Tree for Automated Medical Coding

  • Spoorthi Bhat,
  • Veda Sahaja Bandi and
  • Joshua Carberry
  • + 1 author

Accurate assignment of International Classification of Diseases (ICD) codes is essential for healthcare analytics, billing, and clinical research. However, manual coding remains time-consuming and error-prone due to the scale and complexity of the ICD taxonomy. While hierarchical deep learning approaches have improved automated coding, their deployment across large taxonomies raises scalability and efficiency concerns. To address these limitations, we introduce the Augmented Decision Tree (ADT) framework, which integrates deep learning with symbolic rule-based logic for automated medical coding. ADT employs an automated lexical screening mechanism to dynamically select the most appropriate modeling strategy for each decision node, thereby minimizing manual configuration. Nodes with high keyword distinctiveness are handled by symbolic rules, while semantically ambiguous nodes are assigned to deep contextual models fine-tuned from PubMedBERT. This selective design eliminates the need to train a deep learning model at every node, significantly reducing computational cost. A case study demonstrates that this hybrid and adaptive ADT approach supports scalable and efficient ICD coding. Experimental results show that ADT outperforms a pure decision tree baseline and achieves accuracy comparable to that of a full deep learning-based decision tree, while requiring substantially less training time and computational resources.
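The routing rule at the heart of ADT lends itself to a short illustration. The sketch below is our own minimal reading of the idea, not the authors' code: the distinctiveness score, the 0.8 threshold, and all function names are assumptions.

```python
# Hypothetical sketch of the ADT node-routing idea: each decision node is
# screened lexically, and only semantically ambiguous nodes get a fine-tuned
# deep model. Scoring, threshold, and names are illustrative assumptions.
from collections import Counter


def keyword_distinctiveness(docs_by_branch: dict[str, list[str]]) -> float:
    """Score how well plain keywords separate a node's branches.

    Returns the fraction of vocabulary terms that occur in exactly one
    branch (1.0 = perfectly separable by keywords, 0.0 = fully shared).
    """
    branch_vocab = {b: set(w for d in docs for w in d.lower().split())
                    for b, docs in docs_by_branch.items()}
    counts = Counter(w for vocab in branch_vocab.values() for w in vocab)
    if not counts:
        return 0.0
    return sum(1 for c in counts.values() if c == 1) / len(counts)


def choose_node_strategy(docs_by_branch, threshold=0.8):
    """Route a node to symbolic rules or a deep contextual model."""
    if keyword_distinctiveness(docs_by_branch) >= threshold:
        return "symbolic_rules"       # cheap keyword matching suffices
    return "deep_contextual_model"    # ambiguous: fine-tune PubMedBERT here
```

Under this reading, only the nodes routed to the deep branch incur fine-tuning cost, which is where the reported savings in training time would come from.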

12 February 2026

Generating FGDPs from a doctor’s notes (adapted from [25]).

Site Selection for Solar Photovoltaic Power Plant Using MCDM Method with New De-i-Fuzzification Technique

  • Kamal Hossain Gazi,
  • Asesh Kumar Mukherjee and
  • Arijit Ghosh
  • + 3 authors

Choosing sites for solar photovoltaic (PV) power plants in developing countries like India is a crucial task that requires weighing multiple conflicting factors and sub-factors simultaneously. Multi-criteria decision-making (MCDM) provides an optimisation framework for handling such situations in an intuitionistic fuzzy environment, allowing the complexity and uncertainty inherent in the site selection model to be handled systematically. The Criteria Importance Through Intercriteria Correlation (CRITIC) method is applied to determine the relative importance of the criteria, identifying airflow speed as the most influential factor, followed by humidity ratio, level of dust haze, availability of labour and resources, and ecological effects; this indicates that airflow speed plays an important role in the power plant's efficiency and performance. The VlseKriterijumska Optimizacija I Kompromisno Rešenje (VIKOR) method is then used to prioritise the alternatives as potential locations for a solar PV power plant in India. A new de-i-fuzzification method based on the relative difference between two real numbers is also proposed. Sensitivity analyses and comparative studies are conducted to assess the robustness and effectiveness of the framework. Overall, the results demonstrate that the proposed framework is useful and effective for optimising site selection for solar power plants in India.
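For readers unfamiliar with the two methods, the sketch below shows a standard crisp CRITIC-plus-VIKOR pipeline. It is illustrative only: it omits the intuitionistic fuzzy machinery and the proposed de-i-fuzzification step, and the input matrix is made-up data.

```python
import numpy as np


def critic_weights(X: np.ndarray) -> np.ndarray:
    """CRITIC: weight criteria by contrast (std) and conflict (1 - correlation).

    X is an (alternatives x criteria) matrix, assumed benefit-oriented
    and normalized to [0, 1].
    """
    std = X.std(axis=0, ddof=1)
    corr = np.corrcoef(X, rowvar=False)
    info = std * (1.0 - corr).sum(axis=0)   # C_j = sigma_j * sum_k (1 - r_jk)
    return info / info.sum()


def vikor(X: np.ndarray, w: np.ndarray, v: float = 0.5) -> np.ndarray:
    """VIKOR compromise index Q (lower is better) for benefit criteria."""
    f_best, f_worst = X.max(axis=0), X.min(axis=0)
    d = (f_best - X) / (f_best - f_worst)       # normalized regret per criterion
    S, R = (w * d).sum(axis=1), (w * d).max(axis=1)
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q


# Illustrative data only: 4 candidate sites x 5 criteria.
X = np.random.default_rng(0).random((4, 5))
w = critic_weights(X)
print("ranking (best first):", np.argsort(vikor(X, w)))
```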

9 February 2026

Visual depiction of a pentagonal intuitionistic fuzzy number (PIFN).

Aim: Stock price prediction remains a highly challenging task due to the complex and nonlinear nature of financial time series data. While deep learning (DL) has shown promise in capturing these nonlinear patterns, its effectiveness is often hindered by the low signal-to-noise ratio inherent in market data. This study aims to enhance stock price predictive performance and trading outcomes by integrating Singular Spectrum Analysis (SSA) with deep learning models for forecasting and strategy development on the Australian Securities Exchange (ASX) 50 index.

Method: The proposed framework begins by applying SSA to decompose raw stock price time series into interpretable components, isolating meaningful trends and removing noise. The denoised sequences are then used to train a suite of deep learning architectures, including Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and hybrid CNN-LSTM models. These models are evaluated on forecasting accuracy and on the profitability of the trading strategies derived from their predictions.

Results: Experimental results demonstrated that the SSA-DL framework significantly improved prediction accuracy and trading performance compared to baseline DL models trained on raw data. The best-performing model, SSA-CNN-LSTM, achieved a Sharpe Ratio of 1.88 and a return on investment (ROI) of 67%, indicating robust risk-adjusted returns and effective exploitation of the underlying market conditions.

Conclusions: Integrating Singular Spectrum Analysis with deep learning offers a powerful approach to stock price prediction in noisy financial environments. By denoising input data prior to model training, the SSA-DL framework enhanced signal clarity, improved forecast reliability, and enabled the construction of profitable trading strategies. These findings suggest strong potential for SSA-based preprocessing in financial time series modeling.
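The SSA preprocessing step can be summarised in a few lines. The sketch below is a textbook SSA implementation (embed, truncate the SVD, hankelize back), not the paper's code; the window length and component count are illustrative defaults.

```python
import numpy as np


def ssa_denoise(series: np.ndarray, window: int = 30,
                n_components: int = 3) -> np.ndarray:
    """Basic SSA: embed into a trajectory matrix, keep the leading SVD
    components, then reconstruct via anti-diagonal averaging."""
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the series.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Hankelization: average each anti-diagonal back into a 1-D series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += X_low[:, j]
        counts[j:j + window] += 1
    return out / counts


# The denoised series would then feed the CNN / LSTM / CNN-LSTM models.
prices = np.cumsum(np.random.default_rng(1).normal(size=500)) + 100
trend = ssa_denoise(prices)
```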

27 January 2026

LSTM structure.

Large language models (LLMs) and other foundation models are rapidly being woven into enterprise analytics workflows, where they assist with data exploration, forecasting, decision support, and automation. These systems can feel like powerful new teammates: creative, scalable, and tireless. Yet they also introduce distinctive risks related to opacity, brittleness, bias, and misalignment with organizational goals. Existing work on AI ethics, alignment, and governance provides valuable principles and technical safeguards, but enterprises still lack practical frameworks that connect these ideas to the specific metrics, controls, and workflows by which analytics teams design, deploy, and monitor LLM-powered systems. This paper proposes a conceptual governance framework for enterprise AI and analytics that is explicitly centered on LLMs embedded in analytics pipelines. The framework adopts a three-layered perspective—model and data alignment, system and workflow alignment, and ecosystem and governance alignment—that links technical properties of models to enterprise analytics practices, performance indicators, and oversight mechanisms. In practical terms, the framework shows how model and workflow choices translate into concrete metrics and inform real deployment, monitoring, and scaling decisions for LLM-powered analytics. We also illustrate how this framework can guide the design of controls for metrics, monitoring, human-in-the-loop structures, and incident response in LLM-driven analytics. The paper concludes with implications for analytics leaders and governance teams seeking to operationalize responsible, scalable use of LLMs in enterprise settings.
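The paper is conceptual, but one way to picture how its three layers could become operational is as a machine-checkable governance config. The snippet below is purely illustrative: the layer names follow the abstract, while every metric, threshold, and owner is a placeholder we invented.

```python
# Illustrative only: the paper's three alignment layers rendered as a
# reviewable config. All metrics, bounds, and owners are invented examples.
GOVERNANCE_CONFIG = {
    "model_and_data_alignment": {
        "metrics": {"hallucination_rate": {"max": 0.02},
                    "eval_set_coverage": {"min": 0.90}},
        "owner": "ml-platform-team",
    },
    "system_and_workflow_alignment": {
        "metrics": {"human_review_rate_high_risk": {"min": 1.0},
                    "p95_latency_seconds": {"max": 5.0}},
        "owner": "analytics-engineering",
    },
    "ecosystem_and_governance_alignment": {
        "metrics": {"incident_postmortems_completed": {"min": 1.0}},
        "owner": "governance-board",
    },
}


def violations(layer: str, observed: dict[str, float]) -> list[str]:
    """Flag metrics that are missing or outside their configured bounds."""
    flagged = []
    for name, bound in GOVERNANCE_CONFIG[layer]["metrics"].items():
        value = observed.get(name)
        if value is None:
            flagged.append(f"{name}: not reported")
        elif "max" in bound and value > bound["max"]:
            flagged.append(f"{name}: {value} > {bound['max']}")
        elif "min" in bound and value < bound["min"]:
            flagged.append(f"{name}: {value} < {bound['min']}")
    return flagged
```

A config of this shape would let a monitoring job or human-in-the-loop review check each layer against its thresholds and route breaches to the owning team, in the spirit of the incident-response controls the abstract describes.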

11 January 2026

Three-Layer Governance Framework for LLM-Powered Enterprise Analytics.

Analytics - ISSN 2813-2203