Search Results (1,841)

Search Parameters:
Keywords = Q-learning

33 pages, 3232 KB  
Article
Profit-Oriented Multi-Objective Dynamic Flexible Job Shop Scheduling with Multi-Agent Framework Under Uncertain Production Orders
by Qingyao Ma, Yao Lu and Huawei Chen
Machines 2025, 13(10), 932; https://doi.org/10.3390/machines13100932 (registering DOI) - 9 Oct 2025
Abstract
In the highly competitive manufacturing environment, customers are increasingly demanding punctual, flexible, and customized deliveries, compelling enterprises to balance profit, energy efficiency, and production performance while seeking new scheduling methods to enhance dynamic responsiveness. Although deep reinforcement learning (DRL) has made progress in dynamic flexible job shop scheduling, existing research has rarely addressed profit-oriented optimization. To tackle this challenge, this paper proposes a novel multi-objective dynamic flexible job shop scheduling (MODFJSP) model that aims to maximize net profit and minimize makespan on the basis of traditional FJSP. The model incorporates uncertainties such as new job insertions, fluctuating due dates, and high-profit urgent jobs, and establishes a multi-agent collaborative framework consisting of “job selection–machine assignment.” For the two types of agents, this paper proposes adaptive state representations, reward functions, and variable action spaces to achieve the dual optimization objectives. The experimental results show that the double deep Q-network (DDQN), within the multi-agent cooperative framework, outperforms PPO, DQN, and classical scheduling rules in terms of solution quality and robustness. It achieves superior performance on multiple metrics such as IGD, HV, and SC, and generates bi-objective Pareto frontiers that are closer to the ideal point. The results demonstrate the effectiveness and practical value of the proposed collaborative framework for solving MODFJSP. Full article
(This article belongs to the Section Industrial Systems)
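The agents in this entry are trained with a double deep Q-network (DDQN). As a point of reference, the sketch below shows the standard double-DQN target (the online network picks the greedy next action, the target network evaluates it); the network shapes, discount factor, and replay-batch format are assumptions, not the paper's code.

```python
# Minimal double-DQN target sketch (assumed batch format: 1-D tensors over a replay batch).
import torch

def ddqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """y = r + gamma * Q_target(s', argmax_a Q_online(s', a)), zeroed on terminal steps."""
    with torch.no_grad():
        # The online network selects the greedy next action...
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # ...and the target network evaluates it, which curbs overestimation bias.
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q
```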
12 pages, 1463 KB  
Article
Retrieval-Augmented Vision–Language Agents for Child-Centered Encyclopedia Learning
by Jing Du, Wenhao Liu, Jingyi Ye, Dibin Zhou and Fuchang Liu
Appl. Sci. 2025, 15(19), 10821; https://doi.org/10.3390/app151910821 - 9 Oct 2025
Abstract
This study introduces an Encyclopedic Agent for children’s learning that integrates multimodal retrieval with retrieval-augmented generation (RAG). To support this framework, we construct a dataset of 9524 Wikipedia pages covering 935 encyclopedia topics, each converted into images with associated topical queries and explanations. Based on this dataset, we fine-tune SigLIP, a vision–language retrieval model, using LoRA adaptation on 8484 training pairs, with 1040 reserved for testing. Experimental results show that the fine-tuned SigLIP significantly outperforms baseline models such as ColPali in both accuracy and latency, enabling efficient and precise document-image retrieval. Combined with GPT-5 for response generation, the Encyclopedic Agent delivers illustrated, interactive Q&A that is more accessible and engaging for children compared to traditional text-only methods. These findings highlight the feasibility of applying multimodal retrieval and RAG to educational agents, offering new possibilities for personalized, child-centered learning in domains such as science, history, and the arts. Full article
(This article belongs to the Special Issue Applications of Digital Technology and AI in Educational Settings)
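For readers unfamiliar with the retrieval step in such a pipeline, the following sketch ranks precomputed page-image embeddings against a query embedding by cosine similarity; the embedding model, array shapes, and k are illustrative assumptions rather than the authors' implementation. The retrieved pages would then be handed to the generator together with the child's question.

```python
import numpy as np

def top_k_pages(query_emb: np.ndarray, page_embs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k pages whose embeddings are most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    p = page_embs / np.linalg.norm(page_embs, axis=1, keepdims=True)
    scores = p @ q                      # cosine similarity for unit-normalized vectors
    return np.argsort(-scores)[:k]      # top-k page indices, best first
```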
27 pages, 2189 KB  
Article
Miss-Triggered Content Cache Replacement Under Partial Observability: Transformer-Decoder Q-Learning
by Hakho Kim, Teh-Jen Sun and Eui-Nam Huh
Mathematics 2025, 13(19), 3217; https://doi.org/10.3390/math13193217 - 7 Oct 2025
Viewed by 40
Abstract
Content delivery networks (CDNs) face steadily rising, uneven demand, straining heuristic cache replacement. Reinforcement learning (RL) is promising, but most work assumes a fully observable Markov Decision Process (MDP), unrealistic under delayed, partial, and noisy signals. We model cache replacement as a Partially Observable MDP (POMDP) and present the Miss-Triggered Cache Transformer (MTCT), a Transformer-decoder Q-learning agent that encodes recent histories with self-attention. MTCT invokes its policy only on cache misses to align compute with informative events and uses a delayed-hit reward to propagate information from hits. A compact, rank-based action set (12 actions by default) captures popularity–recency trade-offs with complexity independent of cache capacity. We evaluate MTCT on a real trace (MovieLens) and two synthetic workloads (Mandelbrot–Zipf, Pareto) against Adaptive Replacement Cache (ARC), Windowed TinyLFU (W-TinyLFU), classical heuristics, and Double Deep Q-Network (DDQN). MTCT achieves the best or statistically comparable cache-hit rates on most cache sizes; e.g., on MovieLens at M=600, it reaches 0.4703 (DDQN 0.4436, ARC 0.4513). Miss-triggered inference also lowers mean wall-clock time per episode; Transformer inference is well suited to modern hardware acceleration. Ablations support CL=50 and show that finer action grids improve stability and final accuracy. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
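The miss-triggered design is the distinctive part of MTCT: the learned policy runs only when a request misses. A rough sketch of that control flow with a rank-based eviction action follows; the rank-to-victim mapping and the policy and fetch callables are hypothetical stand-ins, not the paper's code.

```python
from collections import OrderedDict

def rank_to_victim(cache: OrderedDict, rank: int, n_ranks: int = 12) -> str:
    """Map a rank action to a cached key: rank 0 = least recently used, higher = more recent."""
    keys = list(cache)                        # OrderedDict preserves recency order here
    idx = min(int(rank / n_ranks * len(keys)), len(keys) - 1)
    return keys[idx]

def serve(cache: OrderedDict, key, capacity, policy, history, fetch):
    """Serve one request; invoke the learned policy only on a cache miss."""
    if key in cache:                          # hit: cheap path, no policy inference
        cache.move_to_end(key)
        return cache[key]
    value = fetch(key)                        # caller-supplied origin fetch
    if len(cache) >= capacity:
        victim = rank_to_victim(cache, policy(history))   # e.g. one of 12 rank actions
        cache.pop(victim)
    cache[key] = value
    return value
```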
25 pages, 2551 KB  
Article
Deep-Reinforcement-Learning-Based Sliding Mode Control for Optimized Energy Management in DC Microgrids
by Monia Charfeddine, Mongi Ben Moussa and Khalil Jouili
Mathematics 2025, 13(19), 3212; https://doi.org/10.3390/math13193212 - 7 Oct 2025
Viewed by 79
Abstract
A hybrid control architecture is proposed for enhancing the stability and energy management of DC microgrids (DCMGs) integrating photovoltaic generation, batteries, and supercapacitors. The approach combines nonlinear Sliding Mode Control (SMC) for fast and robust DC bus voltage regulation with a Deep Q-Learning (DQL) agent that learns optimal high-level policies for charging, discharging, and load management. This dual-layer design leverages the real-time precision of SMC and the adaptive decision-making capability of DQL to achieve dynamic power sharing and balanced state-of-charge levels across storage units, thereby reducing asymmetric wear. Simulation results under variable operating scenarios showed that the proposed method significantly improved voltage stability, lowered the occurrence of deep battery discharges, and decreased load shedding compared to conventional fuzzy-logic-based energy management, highlighting its effectiveness and resilience in the presence of renewable generation variability and fluctuating load demands. Full article
(This article belongs to the Section E2: Control Theory and Mechanics)
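As background on the lower control layer, a boundary-layer sliding-mode regulator for the bus voltage can be sketched as below; the sliding-surface gain, switching gain, boundary width, and nominal duty cycle are illustrative assumptions, not the authors' tuning.

```python
import numpy as np

def smc_duty(v_ref: float, v_bus: float, dv_bus: float,
             lam: float = 50.0, k: float = 0.8, phi: float = 0.05) -> float:
    """Sliding surface s = de/dt + lam*e with e = v_ref - v_bus; sat() limits chattering."""
    e = v_ref - v_bus
    s = -dv_bus + lam * e                         # de/dt = -dv_bus for a constant reference
    sat = np.clip(s / phi, -1.0, 1.0)             # boundary-layer approximation of sign(s)
    return float(np.clip(0.5 + k * sat, 0.0, 1.0))  # duty-cycle command around a 0.5 nominal
```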
0 pages, 4435 KB  
Article
Federated Reinforcement Learning with Hybrid Optimization for Secure and Reliable Data Transmission in Wireless Sensor Networks (WSNs)
by Seyed Salar Sefati, Seyedeh Tina Sefati, Saqib Nazir, Roya Zareh Farkhady and Serban Georgica Obreja
Mathematics 2025, 13(19), 3196; https://doi.org/10.3390/math13193196 - 6 Oct 2025
Viewed by 107
Abstract
Wireless Sensor Networks (WSNs) consist of numerous battery-powered sensor nodes that operate with limited energy, computation, and communication capabilities. Designing routing strategies that are both energy-efficient and attack-resilient is essential for extending network lifetime and ensuring secure data delivery. This paper proposes Adaptive Federated Reinforcement Learning-Hunger Games Search (AFRL-HGS), a Hybrid Routing framework that integrates multiple advanced techniques. At the node level, tabular Q-learning enables each sensor node to act as a reinforcement learning agent, making next-hop decisions based on discretized state features such as residual energy, distance to sink, congestion, path quality, and security. At the network level, Federated Reinforcement Learning (FRL) allows the sink node to aggregate local Q-tables using adaptive, energy- and performance-weighted contributions, with Polyak-based blending to preserve stability. The binary Hunger Games Search (HGS) metaheuristic initializes Cluster Head (CH) selection and routing, providing a well-structured topology that accelerates convergence. Security is enforced as a constraint through a lightweight trust and anomaly detection module, which fuses reliability estimates with residual-based anomaly detection using Exponentially Weighted Moving Average (EWMA) on Round-Trip Time (RTT) and loss metrics. The framework further incorporates energy-accounted control plane operations with dual-format HELLO and hierarchical ADVERTISE/Service-ADVERTISE (SrvADVERTISE) messages to maintain the routing tables. Evaluation is performed in a hybrid testbed using the Graphical Network Simulator-3 (GNS3) for large-scale simulation and Kali Linux for live adversarial traffic injection, ensuring both reproducibility and realism. The proposed AFRL-HGS framework offers a scalable, secure, and energy-efficient routing solution for next-generation WSN deployments. Full article
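The sink-side aggregation is the federated part of AFRL-HGS: local Q-tables are combined with energy- and performance-based weights and blended into the global table Polyak-style. A minimal sketch under those assumptions (the weight definition and blending factor tau are illustrative):

```python
import numpy as np

def aggregate_q_tables(global_q: np.ndarray, local_qs, weights, tau: float = 0.1) -> np.ndarray:
    """global_q and each local Q-table share the shape (n_states, n_actions)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                     # normalize node contributions
    mixed = sum(wi * qi for wi, qi in zip(w, local_qs))
    return (1.0 - tau) * global_q + tau * mixed         # Polyak-style blending for stability
```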
0 pages, 4683 KB  
Article
Quantifying Tail Risk Spillovers in Chinese Petroleum Supply Chain Enterprises: A Neural-Network-Inspired Multi-Layer Machine Learning Framework
by Xin Zheng, Lei Wang, Tingqiang Chen and Tao Xu
Systems 2025, 13(10), 874; https://doi.org/10.3390/systems13100874 - 6 Oct 2025
Viewed by 102
Abstract
This study constructs a neural-network-inspired multi-layer machine learning model (RQLNet) to measure and analyze the effects of tail risk spillover and its associated sensitivities to macroeconomic factors among petroleum supply chain enterprises. On this basis, the study constructs a tail risk spillover network and analyzes its network-level structural features. The results show the following: (1) The proposed model improves the accuracy of tail risk measurement while addressing the issue of excessive penalization in spillover weights, offering enhanced interpretability and structural stability and making it particularly suitable for high-dimensional tail risk estimation. (2) Tail risk spillovers propagate from up- and midstream to downstream and ultimately to end enterprises. Structurally, the up- and midstream are the main sources, whereas the downstream and end enterprises are the primary recipients. (3) The tail risk sensitivities of Chinese petroleum supply chain enterprises exhibit significant differences across macroeconomic factors and across types of enterprises. Overall, the sensitivities to CIMV and LS are higher. (4) The network evolves in stages: during trade frictions, spillovers accelerate and core nodes strengthen; during public-health events, intra-community cohesion increases and cross-community spillovers decline; in the recovery phase, cross-community links resume and concentrate on core nodes; and during geopolitical conflicts, spillovers are core-dominated and cross-community transmission accelerates. Full article
(This article belongs to the Section Complex Systems and Cybernetics)
19 pages, 685 KB  
Article
Intent-Based Resource Allocation in Edge and Cloud Computing Using Reinforcement Learning
by Dimitrios Konidaris, Polyzois Soumplis, Andreas Varvarigos and Panagiotis Kokkinos
Algorithms 2025, 18(10), 627; https://doi.org/10.3390/a18100627 - 4 Oct 2025
Viewed by 208
Abstract
Managing resource use in cloud and edge environments is crucial for optimizing performance and efficiency. Traditionally, this process is performed with detailed knowledge of the available infrastructure while being application-specific. However, it is common that users cannot accurately specify their applications’ low-level requirements, and they tend to overestimate them—a problem further intensified by their lack of detailed knowledge on the infrastructure’s characteristics. In this context, resource orchestration mechanisms perform allocations based on the provided worst-case assumptions, with a direct impact on the performance of the whole infrastructure. In this work, we propose a resource orchestration mechanism based on intents, in which users provide their high-level workload requirements by specifying their intended preferences for how the workload should be managed, such as prioritizing high capacity, low cost, or other criteria. Building on this, the proposed mechanism dynamically assigns resources to applications through a Reinforcement Learning method leveraging the feedback from the users and infrastructure providers’ monitoring system. We formulate the respective problem as a discrete-time, finite horizon Markov decision process. Initially, we solve the problem using a tabular Q-learning method. However, due to the large state space inherent in real-world scenarios, we also employ Deep Reinforcement Learning, utilizing a neural network for the Q-value approximation. The presented mechanism is capable of continuously adapting the manner in which resources are allocated based on feedback from users and infrastructure providers. A series of simulation experiments were conducted to demonstrate the applicability of the proposed methodologies in intent-based resource allocation, examining various aspects and characteristics and performing comparative analysis. Full article
(This article belongs to the Special Issue Emerging Trends in Distributed AI for Smart Environments)
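For the tabular stage described in this entry, the core update is the standard one-step Q-learning rule with an epsilon-greedy policy; the state discretization and hyperparameters here are illustrative assumptions.

```python
import random
import numpy as np

def epsilon_greedy(Q: np.ndarray, s: int, eps: float = 0.1) -> int:
    """Pick a random action with probability eps, otherwise the greedy one."""
    return random.randrange(Q.shape[1]) if random.random() < eps else int(np.argmax(Q[s]))

def q_update(Q: np.ndarray, s: int, a: int, r: float, s_next: int,
             alpha: float = 0.1, gamma: float = 0.95) -> None:
    """One-step Q-learning target: r + gamma * max_a' Q(s', a')."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
```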
28 pages, 1332 KB  
Article
A Scalable Two-Level Deep Reinforcement Learning Framework for Joint WIP Control and Job Sequencing in Flow Shops
by Maria Grazia Marchesano, Guido Guizzi, Valentina Popolo and Anastasiia Rozhok
Appl. Sci. 2025, 15(19), 10705; https://doi.org/10.3390/app151910705 - 3 Oct 2025
Viewed by 188
Abstract
Effective production control requires aligning strategic planning with real-time execution under dynamic and stochastic conditions. This study proposes a scalable dual-agent Deep Reinforcement Learning (DRL) framework for the joint optimisation of Work-In-Process (WIP) control and job sequencing in flow-shop environments. A strategic DQN agent regulates global WIP to meet throughput targets, while a tactical DQN agent adaptively selects dispatching rules at the machine level on an event-driven basis. Parameter sharing in the tactical agent ensures inherent scalability, overcoming the combinatorial complexity of multi-machine scheduling. The agents coordinate indirectly via a shared simulation environment, learning to balance global stability with local responsiveness. The framework is validated through a discrete-event simulation integrating agent-based modelling, demonstrating consistent performance across multiple production scales (5–15 machines) and process time variabilities. Results show that the approach matches or surpasses analytical benchmarks and outperforms static rule-based strategies, highlighting its robustness, adaptability, and potential as a foundation for future Hierarchical Reinforcement Learning applications in manufacturing. Full article
(This article belongs to the Special Issue Intelligent Manufacturing and Production)
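To make the tactical layer concrete, the sketch below shows event-driven rule selection: when a machine frees up, a shared value network scores a small menu of dispatching rules and the chosen rule orders the queue. The rule menu and job fields are assumptions, not the authors' action space.

```python
import numpy as np

# Hypothetical dispatching-rule menu: each rule is a sort key over waiting jobs.
RULES = {
    0: lambda job: job["proc_time"],       # SPT: shortest processing time first
    1: lambda job: job["due_date"],        # EDD: earliest due date first
    2: lambda job: job["arrival_time"],    # FIFO: first arrived, first served
}

def pick_next_job(shared_q_net, machine_state, queue):
    """Greedy rule choice by the shared network, then apply the rule to the queue."""
    rule_id = int(np.argmax(shared_q_net(machine_state)))
    return min(queue, key=RULES[rule_id]), rule_id
```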
24 pages, 1024 KB  
Review
Artificial Intelligence in Glioma Diagnosis: A Narrative Review of Radiomics and Deep Learning for Tumor Classification and Molecular Profiling Across Positron Emission Tomography and Magnetic Resonance Imaging
by Rafail C. Christodoulou, Rafael Pitsillos, Platon S. Papageorgiou, Vasileia Petrou, Georgios Vamvouras, Ludwing Rivera, Sokratis G. Papageorgiou, Elena E. Solomou and Michalis F. Georgiou
Eng 2025, 6(10), 262; https://doi.org/10.3390/eng6100262 - 3 Oct 2025
Viewed by 411
Abstract
Background: This narrative review summarizes recent progress in artificial intelligence (AI), especially radiomics and deep learning, for non-invasive diagnosis and molecular profiling of gliomas. Methodology: A thorough literature search was conducted on PubMed, Scopus, and Embase for studies published from January 2020 to July 2025, focusing on clinical and technical research. In key areas, these studies examine AI models’ predictive capabilities with multi-parametric Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). Results: The domains identified in the literature include the advancement of radiomic models for tumor grading and biomarker prediction, such as Isocitrate Dehydrogenase (IDH) mutation, O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation, and 1p/19q codeletion. The growing use of convolutional neural networks (CNNs) and generative adversarial networks (GANs) in tumor segmentation, classification, and prognosis was also a significant topic discussed in the literature. Deep learning (DL) methods are evaluated against traditional radiomics regarding feature extraction, scalability, and robustness to imaging protocol differences across institutions. Conclusions: This review analyzes emerging efforts to combine clinical, imaging, and histology data within hybrid or transformer-based AI systems to enhance diagnostic accuracy. Significant findings include the application of DL to predict cyclin-dependent kinase inhibitor 2A/B (CDKN2A/B) deletion and chemokine CCL2 expression. These highlight the expanding capabilities of imaging-based genomic inference and the importance of clinical data in multimodal fusion. Challenges such as data harmonization, model interpretability, and external validation still need to be addressed. Full article
22 pages, 1214 KB  
Article
Didactic Analysis of Natural Science Textbooks in Ecuador: A Critical Review from a Constructivist Perspective
by Frank Guerra-Reyes, Eric Guerra-Dávila and Edison Díaz-Martínez
Educ. Sci. 2025, 15(10), 1312; https://doi.org/10.3390/educsci15101312 - 2 Oct 2025
Viewed by 304
Abstract
School textbooks are central to the teaching, studying, and learning processes because they mediate the interaction between the prescribed curriculum and the educational experience in the classroom. Evaluating their didactic structure critically allows us to determine the degree to which they align with current curriculum guidelines and promote meaningful learning. This study aimed to analyze the extent to which Ecuadorian natural science textbooks reflect constructivist learning principles and promote the development of key competencies established in the National Priority Curriculum. This curriculum guides the achievement of essential results and strengthens fundamental competencies for students’ comprehensive development. Content analysis was adopted as the methodological approach given its relevance in examining the didactic and curricular dimensions of educational materials. The analysis covered twelve eighth-grade General Basic Education textbooks and their supplementary materials. The analysis was based on two instruments: specialized summary analysis sheets (RAE) and a purpose-built checklist. The ATLAS.ti 25 and IRaMuTeQ programs supported the systematization and visualization of the data. The results showed limited integration of constructivist strategies, such as teaching for comprehension, inquiry-based learning, and problem solving, in most of the analyzed texts. These findings underscore the need to expand and strengthen the incorporation of contextualized, critical, and meaningful learning experiences to improve the didactic design of school textbooks. Such improvements would promote coherent articulation between objectives, content, methods, resources, and assessment in line with constructivist principles of the Ecuadorian curriculum. Furthermore, given these approaches’ affinity with curricular frameworks in other regional countries, the results could offer relevant guidance and starting points for reflection on developing and using textbooks in Latin American contexts with comparable educational characteristics. Full article
30 pages, 2037 KB  
Article
From Market Volatility to Predictive Insight: An Adaptive Transformer–RL Framework for Sentiment-Driven Financial Time-Series Forecasting
by Zhicong Song, Harris Sik-Ho Tsang, Richard Tai-Chiu Hsung, Yulin Zhu and Wai-Lun Lo
Forecasting 2025, 7(4), 55; https://doi.org/10.3390/forecast7040055 - 2 Oct 2025
Viewed by 232
Abstract
Financial time-series prediction remains a significant challenge, driven by market volatility, nonlinear dynamic characteristics, and the complex interplay between quantitative indicators and investor sentiment. Traditional time-series models (e.g., ARIMA and GARCH) struggle to capture the nuanced sentiment in textual data, while static deep learning integration methods fail to adapt to market regime transitions (bull markets, bear markets, and consolidation). This study proposes a hybrid framework that integrates investor forum sentiment analysis with adaptive deep reinforcement learning (DRL) for dynamic model integration. By constructing a domain-specific financial sentiment dictionary (containing 16,673 entries) based on the sentiment analysis approach and word-embedding technique, we achieved up to 97.35% accuracy in forum title classification tasks. Historical price data and investor forum sentiment information were then fed into a Support Vector Regressor (SVR) and three Transformer variants (single-layer, multi-layer, and bidirectional variants) for predictions, with a Deep Q-Network (DQN) agent dynamically fusing the prediction results. Comprehensive experiments were conducted on diverse financial datasets, including China Unicom, the CSI 100 index, corn, and Amazon (AMZN). The experimental results demonstrate that our proposed approach, combining textual sentiment with adaptive DRL integration, significantly enhances prediction robustness in volatile markets, achieving the lowest RMSEs across diverse assets. It overcomes the limitations of static methods and multi-market generalization, outperforming both benchmark and state-of-the-art models. Full article
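The dynamic-integration idea can be pictured as a small selection step: at each time step a DQN-style agent scores the candidate forecasters and the highest-scoring one supplies the prediction. The state features and the q_network interface below are assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_forecasts(q_network, state, forecasts: dict):
    """forecasts maps model names (e.g. SVR, Transformer variants) to point predictions."""
    names = list(forecasts)
    q_values = np.asarray(q_network(state))        # one Q-value per candidate model
    chosen = names[int(np.argmax(q_values))]
    return forecasts[chosen], chosen
```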
19 pages, 2476 KB  
Article
Deep Reinforcement Learning-Based DCT Image Steganography
by Rongjian Yang, Lixin Liu, Bin Han and Feng Hu
Mathematics 2025, 13(19), 3150; https://doi.org/10.3390/math13193150 - 2 Oct 2025
Viewed by 216
Abstract
In this article, we present a novel reinforcement learning-based framework in the discrete cosine transform to achieve better image steganography. First, the input image is divided into several blocks to extract semantic and structural features, evaluating their suitability for data embedding. Second, the Proximal Policy Optimization algorithm (PPO) is introduced in the block selection process to learn adaptive embedding policies, which effectively balances image fidelity and steganographic security. Moreover, the Deep Q-network (DQN) is used for adaptively adjusting the weights of the peak signal-to-noise ratio, structural similarity index, and detection accuracy in the reward formulation. Experimental results on the BOSSBase dataset confirm the superiority of our framework, achieving both lower detection rates and higher visual quality across a range of embedding payloads, particularly under low-bpp conditions. Full article
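The adaptive reward weighting can be illustrated with a simple scalarized reward: fidelity terms (PSNR, SSIM) are rewarded and steganalysis detection accuracy is penalized, with the DQN choosing the weight vector. The normalization constant and weight format are assumptions, not the paper's formulation.

```python
def stego_reward(psnr: float, ssim: float, det_acc: float, weights) -> float:
    """weights = (w_psnr, w_ssim, w_det); higher detection accuracy lowers the reward."""
    w_psnr, w_ssim, w_det = weights
    return w_psnr * (psnr / 50.0) + w_ssim * ssim - w_det * det_acc
```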
14 pages, 1037 KB  
Article
MMSE-Based Dementia Prediction: Deep vs. Traditional Models
by Yuyeon Jung, Yeji Park, Jaehyun Jo and Jinhyoung Jeong
Life 2025, 15(10), 1544; https://doi.org/10.3390/life15101544 - 1 Oct 2025
Viewed by 213
Abstract
Early and accurate diagnosis of dementia is essential to improving patient outcomes and reducing societal burden. The Mini-Mental State Examination (MMSE) is widely used to assess cognitive function, yet traditional statistical and machine learning approaches often face limitations in capturing nonlinear interactions and subtle decline patterns. This study developed a novel deep learning-based dementia prediction model using MMSE data collected from domestic clinical settings and compared its performance with traditional machine learning models. A notable strength of this work lies in its use of item-level MMSE features combined with explainable AI (SHAP analysis), enabling both high predictive accuracy and clinical interpretability—an advancement over prior approaches that primarily relied on total scores or linear modeling. Data from 164 participants, classified into cognitively normal, mild cognitive impairment (MCI), and dementia groups, were analyzed. Individual MMSE items and total scores were used as input features, and the dataset was divided into training and validation sets (8:2 split). A fully connected neural network with regularization techniques was constructed and evaluated alongside Random Forest and support vector machine (SVM) classifiers. Model performance was assessed using accuracy, F1-score, confusion matrices, and receiver operating characteristic (ROC) curves. The deep learning model achieved the highest performance (accuracy 0.90, F1-score 0.90), surpassing Random Forest (0.86) and SVM (0.82). SHAP analysis identified Q11 (immediate memory), Q12 (calculation), and Q17 (drawing shapes) as the most influential variables, aligning with clinical diagnostic practices. These findings suggest that deep learning not only enhances predictive accuracy but also offers interpretable insights aligned with clinical reasoning, underscoring its potential utility as a reliable tool for early dementia diagnosis. However, the study is limited by the use of data from a single clinical site with a relatively small sample size, which may restrict generalizability. Future research should validate the model using larger, multi-institutional, and multimodal datasets to strengthen clinical applicability and robustness. Full article
(This article belongs to the Section Biochemistry, Biophysics and Computational Biology)
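As an illustration of the kind of comparison reported in this entry, the sketch below trains a small MLP, a random forest, and an SVM on item-level features and scores them by accuracy and macro F1; the hyperparameters and split are assumptions, not the authors' configuration.

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, f1_score

def compare_models(X, y):
    """Fit three classifiers on an 8:2 split and return (accuracy, macro-F1) per model."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    models = {
        "mlp": MLPClassifier(hidden_layer_sizes=(64, 32), alpha=1e-3, max_iter=2000),
        "rf": RandomForestClassifier(n_estimators=300, random_state=0),
        "svm": SVC(kernel="rbf", C=1.0),
    }
    results = {}
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        results[name] = (accuracy_score(y_te, pred), f1_score(y_te, pred, average="macro"))
    return results
```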
19 pages, 7222 KB  
Article
Multi-Channel Spectro-Temporal Representations for Speech-Based Parkinson’s Disease Detection
by Hadi Sedigh Malekroodi, Nuwan Madusanka, Byeong-il Lee and Myunggi Yi
J. Imaging 2025, 11(10), 341; https://doi.org/10.3390/jimaging11100341 - 1 Oct 2025
Viewed by 176
Abstract
Early, non-invasive detection of Parkinson’s Disease (PD) using speech analysis offers promise for scalable screening. In this work, we propose a multi-channel spectro-temporal deep-learning approach for PD detection from sentence-level speech, a clinically relevant yet underexplored modality. We extract and fuse three complementary time–frequency representations—mel spectrogram, constant-Q transform (CQT), and gammatone spectrogram—into a three-channel input analogous to an RGB image. This fused representation is evaluated across CNNs (ResNet, DenseNet, and EfficientNet) and Vision Transformer using the PC-GITA dataset, under 10-fold subject-independent cross-validation for robust assessment. Results showed that fusion consistently improves performance over single representations across architectures. EfficientNet-B2 achieves the highest accuracy (84.39% ± 5.19%) and F1-score (84.35% ± 5.52%), outperforming recent methods using handcrafted features or pretrained models (e.g., Wav2Vec2.0, HuBERT) on the same task and dataset. Performance varies with sentence type, with emotionally salient and prosodically emphasized utterances yielding higher AUC, suggesting that richer prosody enhances discriminability. Our findings indicate that multi-channel fusion enhances sensitivity to subtle speech impairments in PD by integrating complementary spectral information. Our approach implies that multi-channel fusion could enhance the detection of discriminative acoustic biomarkers, potentially offering a more robust and effective framework for speech-based PD screening, though further validation is needed before clinical application. Full article
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
43 pages, 1895 KB  
Article
Bi-Level Dependent-Chance Goal Programming for Paper Manufacturing Tactical Planning: A Reinforcement-Learning-Enhanced Approach
by Yassine Boutmir, Rachid Bannari, Abdelfettah Bannari, Naoufal Rouky, Othmane Benmoussa and Fayçal Fedouaki
Symmetry 2025, 17(10), 1624; https://doi.org/10.3390/sym17101624 - 1 Oct 2025
Viewed by 151
Abstract
Tactical production–distribution planning in paper manufacturing involves hierarchical decision-making under hybrid uncertainty, where aleatory randomness (demand fluctuations, machine variations) and epistemic uncertainty (expert judgments, market trends) simultaneously affect operations. Existing approaches fail to address the bi-level nature under hybrid uncertainty, treating production and distribution decisions independently or using single-paradigm uncertainty models. This research develops a bi-level dependent-chance goal programming framework based on uncertain random theory, where the upper level optimizes distribution decisions while the lower level handles production decisions. The framework exploits structural symmetries through machine interchangeability, symmetric transportation routes, and temporal symmetry, incorporating symmetry-breaking constraints to eliminate redundant solutions. A hybrid intelligent algorithm (HIA) integrates uncertain random simulation with a Reinforcement-Learning-enhanced Arithmetic Optimization Algorithm (RL-AOA) for bi-level coordination, where Q-learning enables adaptive parameter tuning. The RL component utilizes symmetric state representations to maintain solution quality across symmetric transformations. Computational experiments demonstrate HIA’s superiority over standard metaheuristics, achieving 3.2–7.8% solution quality improvement and 18.5% computational time reduction. Symmetry exploitation reduces search space by approximately 35%. The framework provides probability-based performance metrics with optimal confidence levels (0.82–0.87), offering 2.8–4.5% annual cost savings potential. Full article
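The Q-learning component in this entry adapts the metaheuristic's parameters during the search. A minimal sketch of one such tuning step is given below, with the reward taken as the relative improvement of the incumbent objective; the state and action encodings and the hyperparameters are assumptions, not the RL-AOA design.

```python
import numpy as np

def tuning_step(Q: np.ndarray, state: int, action: int, next_state: int,
                best_before: float, best_after: float,
                alpha: float = 0.2, gamma: float = 0.9) -> float:
    """Reward = relative improvement of the incumbent (minimization), then a Q-learning update."""
    reward = (best_before - best_after) / max(abs(best_before), 1e-9)
    Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
    return reward
```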