Search Results (5,020)

Search Parameters:
Keywords = statistical learning models

14 pages, 2927 KB  
Systematic Review
Real-Time Artificial Intelligence Versus Standard Colonoscopy in the Early Detection of Colorectal Cancer: A Systematic Review and Meta-Analysis
by Abdullah Sultany, Rahul Chikatimalla, Adishwar Rao, Mohamed A. Omar, Abdulkader Shaar, Hassam Ali, Fariha Hasan, Sheza Malik, Saqr Alsakarneh and Dushyant Singh Dahiya
Healthcare 2025, 13(19), 2517; https://doi.org/10.3390/healthcare13192517 - 3 Oct 2025
Abstract
Background: Colonoscopy remains the gold standard for colorectal cancer screening. Deep learning systems with real-time computer-aided polyp detection (CADe) demonstrate high accuracy in controlled research settings, and preliminary randomized controlled trials (RCTs) report favorable outcomes in clinical settings. This study aims to evaluate the efficacy of AI-assisted colonoscopy compared to standard colonoscopy, focusing on Polyp Detection Rate (PDR) and Adenoma Detection Rate (ADR), and to explore their implications for clinical practice. Methods: A systematic search was conducted using multiple indexing databases for RCTs comparing AI-assisted to standard colonoscopy. Random-effects models were utilized to calculate pooled odds ratios (ORs) with 95% confidence intervals. The risk of bias was assessed using the Cochrane Risk of Bias Tool, and heterogeneity was quantified using the I2 statistic. Results: From 22,762 studies, 12 RCTs (n = 11,267) met the inclusion criteria. AI-assisted colonoscopy significantly improved PDR (OR 1.31, 95% CI 1.08–1.59, p = 0.005), despite heterogeneity among studies (I2 = 79%). While ADR showed improvement with AI-assisted colonoscopy (OR 1.24, 95% CI 0.98–1.58, p = 0.08), the result was not statistically significant and had high heterogeneity (I2 = 81%). Conclusions: AI-assisted colonoscopy significantly enhances PDR, highlighting its potential role in colorectal cancer screening programs. However, while an improvement in the ADR was observed, the results were not statistically significant and showed considerable variability. These findings highlight the promise of AI in improving diagnostic accuracy but also point to the need for further research to better understand its impact on meaningful clinical outcomes. Full article
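The random-effects pooling this review describes can be sketched with the DerSimonian–Laird estimator; the per-study log odds ratios and variances below are illustrative stand-ins, not the meta-analysis's data:

```python
import math

# Hypothetical per-study log odds ratios and variances -- illustrative
# stand-ins, not the data from the review.
log_or = [0.35, 0.10, 0.55, 0.20, -0.05]
var = [0.04, 0.02, 0.09, 0.03, 0.05]

# Inverse-variance (fixed-effect) weights and pooled estimate
w = [1 / v for v in var]
theta_fe = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)

# Cochran's Q, then the DerSimonian-Laird between-study variance tau^2
q = sum(wi * (y - theta_fe) ** 2 for wi, y in zip(w, log_or))
df = len(log_or) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# I2: percentage of total variability attributable to heterogeneity
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Random-effects weights, pooled log OR, and 95% CI on the OR scale
w_re = [1 / (v + tau2) for v in var]
theta_re = sum(wi * y for wi, y in zip(w_re, log_or)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
ci = (math.exp(theta_re - 1.96 * se), math.exp(theta_re + 1.96 * se))
print(f"pooled OR {math.exp(theta_re):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I2 {i2:.0f}%")
```

When tau2 is pushed above zero by Cochran's Q, the random-effects weights flatten toward equality, which is why pooled CIs widen under the high heterogeneity the review reports.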
38 pages, 5753 KB  
Article
EfficientNet-B3-Based Automated Deep Learning Framework for Multiclass Endoscopic Bladder Tissue Classification
by A. A. Abd El-Aziz, Mahmood A. Mahmood and Sameh Abd El-Ghany
Diagnostics 2025, 15(19), 2515; https://doi.org/10.3390/diagnostics15192515 - 3 Oct 2025
Abstract
Background: Bladder cancer (BLCA) is a malignant growth that originates from the urothelial lining of the urinary bladder. Diagnosing BLCA is complex due to the variety of tumor features and its heterogeneous nature, which leads to significant morbidity and mortality. Understanding tumor histopathology is crucial for developing tailored therapies and improving patient outcomes. Objectives: Early diagnosis and treatment are essential to lower the mortality rate associated with bladder cancer. Manual classification of muscular tissues by pathologists is labor-intensive and relies heavily on experience, which can result in interobserver variability due to the similarities in cancerous cell morphology. Traditional methods for analyzing endoscopic images are often time-consuming and resource-intensive, making it difficult to efficiently identify tissue types. Therefore, there is a strong demand for a fully automated and reliable system for classifying smooth muscle images. Methods: This paper proposes a deep learning (DL) technique utilizing the EfficientNet-B3 model and a five-fold cross-validation method to assist in the early detection of BLCA. This model enables timely intervention and improved patient outcomes while streamlining the diagnostic process, ultimately reducing both time and costs for patients. We conducted experiments using the Endoscopic Bladder Tissue Classification (EBTC) dataset for multiclass classification tasks. The dataset was preprocessed using resizing and normalization methods to ensure consistent input. In-depth experiments were carried out utilizing the EBTC dataset, along with ablation studies to evaluate the best hyperparameters. A thorough statistical analysis and comparisons with five leading DL models—ConvNeXtBase, DenseNet-169, MobileNet, ResNet-101, and VGG-16—showed that the proposed model outperformed the others. 
Conclusions: The EfficientNet-B3 model achieved impressive results: accuracy of 99.03%, specificity of 99.30%, precision of 97.95%, recall of 96.85%, and an F1-score of 97.36%. These findings indicate that the EfficientNet-B3 model demonstrates significant potential in accurately and efficiently diagnosing BLCA. Its high performance and ability to reduce diagnostic time and cost make it a valuable tool for clinicians in the field of oncology and urology. Full article
(This article belongs to the Special Issue AI and Big Data in Medical Diagnostics)
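The headline metrics reported for EfficientNet-B3 (accuracy, specificity, precision, recall, F1-score) all follow directly from confusion-matrix counts; a minimal sketch with made-up counts, not the paper's results:

```python
# Made-up confusion-matrix counts for a binary tissue split --
# illustrative only, not the paper's results.
tp, fp, fn, tn = 92, 3, 2, 103

accuracy = (tp + tn) / (tp + fp + fn + tn)
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
recall = tp / (tp + fn)  # sensitivity
f1 = 2 * precision * recall / (precision + recall)

for name, value in [("accuracy", accuracy), ("specificity", specificity),
                    ("precision", precision), ("recall", recall), ("F1", f1)]:
    print(f"{name}: {value:.4f}")
```

For the multiclass task in the paper these would be computed per class (one-vs-rest) and averaged.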
24 pages, 8041 KB  
Article
Stable Water Isotopes and Machine Learning Approaches to Investigate Seawater Intrusion in the Magra River Estuary (Italy)
by Marco Sabattini, Francesco Ronchetti, Gianpiero Brozzo and Diego Arosio
Hydrology 2025, 12(10), 262; https://doi.org/10.3390/hydrology12100262 - 3 Oct 2025
Abstract
Seawater intrusion into coastal river systems poses increasing challenges for freshwater availability and estuarine ecosystem integrity, especially under evolving climatic and anthropogenic pressures. This study presents a multidisciplinary investigation of marine intrusion dynamics within the Magra River estuary (Northwest Italy), integrating field monitoring, isotopic tracing (δ18O; δD), and multivariate statistical modeling. Over an 18-month period, 11 fixed stations were monitored across six seasonal campaigns, yielding a comprehensive dataset of water electrical conductivity (EC) and stable isotope measurements from fresh water to salty water. EC and oxygen isotopic ratios displayed strong spatial and temporal coherence (R2 = 0.99), confirming their combined effectiveness in identifying intrusion patterns. The mass-balance model based on δ18O revealed that marine water fractions exceeded 50% in the lower estuary for up to eight months annually, reaching as far as 8.5 km inland during dry periods. Complementary δD measurements provided additional insight into water origin and fractionation processes, revealing a slight excess relative to the local meteoric water line (LMWL), indicative of evaporative enrichment during anomalously warm periods. Multivariate regression models (PLS, Ridge, LASSO, and Elastic Net) identified river discharge as the primary limiting factor of intrusion, while wind intensity emerged as a key promoting variable, particularly when aligned with the valley axis. Tidal effects were marginal under standard conditions, except during anomalous events such as tidal surges. The results demonstrate that marine intrusion is governed by complex and interacting environmental drivers. Combined isotopic and machine learning approaches can offer high-resolution insights for environmental monitoring, early-warning systems, and adaptive resource management under climate-change scenarios. Full article
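The δ18O mass-balance model is a two-endmember mixing calculation; a minimal sketch, assuming illustrative freshwater and seawater end-member values rather than the Magra River measurements:

```python
# End-member delta-18O values (per mil) -- illustrative assumptions,
# not the Magra River measurements.
DELTA_FRESH = -7.5  # river end-member
DELTA_SEA = 1.0     # seawater end-member

def marine_fraction(delta_sample: float) -> float:
    """Fraction of seawater in a mixed sample (0 = fresh, 1 = marine)."""
    f = (delta_sample - DELTA_FRESH) / (DELTA_SEA - DELTA_FRESH)
    return min(1.0, max(0.0, f))  # clamp analytical noise

print(marine_fraction(-3.25))  # a hypothetical mid-estuary sample -> 0.5
```

A sample isotopically halfway between the end-members yields a 50% marine fraction, the threshold the study reports being exceeded in the lower estuary for much of the year.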
45 pages, 7902 KB  
Review
Artificial Intelligence-Guided Supervised Learning Models for Photocatalysis in Wastewater Treatment
by Asma Rehman, Muhammad Adnan Iqbal, Mohammad Tauseef Haider and Adnan Majeed
AI 2025, 6(10), 258; https://doi.org/10.3390/ai6100258 - 3 Oct 2025
Abstract
Artificial intelligence (AI), when integrated with photocatalysis, has demonstrated high predictive accuracy in optimizing photocatalytic processes for wastewater treatment using a variety of catalysts such as TiO2, ZnO, CdS, Zr, WO2, and CeO2. The progress of research in this area is greatly enhanced by advancements in data science and AI, which enable rapid analysis of large datasets in materials chemistry. This article presents a comprehensive review and critical assessment of AI-based supervised learning models, including support vector machines (SVMs), artificial neural networks (ANNs), and tree-based algorithms. Their predictive capabilities have been evaluated using statistical metrics such as the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE), with numerous investigations documenting R2 values greater than 0.95 and RMSE values as low as 0.02 in forecasting pollutant degradation. To enhance model interpretability, Shapley Additive Explanations (SHAP) have been employed to prioritize the relative significance of input variables, illustrating, for example, that pH and light intensity frequently exert the most substantial influence on photocatalytic performance. These AI frameworks not only attain dependable predictions of degradation efficiency for dyes, pharmaceuticals, and heavy metals, but also contribute to economically viable optimization strategies and the identification of novel photocatalysts. Overall, this review provides evidence-based guidance for researchers and practitioners seeking to advance wastewater treatment technologies by integrating supervised machine learning with photocatalysis. Full article
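The fit statistics the review compares models on (R2, RMSE, MAE) can be computed directly; a small sketch on hypothetical observed vs. predicted degradation efficiencies, not values from any cited study:

```python
import math

# Hypothetical observed vs. predicted pollutant degradation efficiencies
# (fractions) -- illustrative, not from any cited study.
y_true = [0.62, 0.75, 0.81, 0.90, 0.55, 0.70]
y_pred = [0.60, 0.78, 0.80, 0.88, 0.57, 0.69]

n = len(y_true)
mean_y = sum(y_true) / n
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
ss_tot = sum((t - mean_y) ** 2 for t in y_true)

r2 = 1 - ss_res / ss_tot                 # coefficient of determination
rmse = math.sqrt(ss_res / n)             # root mean square error
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
print(f"R2={r2:.3f}  RMSE={rmse:.4f}  MAE={mae:.4f}")
```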
24 pages, 1421 KB  
Article
Machine Learning-Aided Supply Chain Analysis of Waste Management Systems: System Optimization for Sustainable Production
by Zhe Wee Ng, Biswajit Debnath and Amit K Chattopadhyay
Sustainability 2025, 17(19), 8848; https://doi.org/10.3390/su17198848 - 2 Oct 2025
Abstract
Electronic-waste (e-waste) management is a key challenge in engineering smart cities due to its rapid accumulation, complex composition, sparse data availability, and significant environmental and economic impacts. This study employs a bespoke machine learning infrastructure on an Indian e-waste supply chain network (SCN) focusing on the three pillars of sustainability—environmental, economic, and social. The economic resilience of the SCN is investigated against external perturbations, like market fluctuations or policy changes, by analyzing six stochastically perturbed modules, generated from the optimal point of the original dataset using Monte Carlo Simulation (MCS). In the process, MCS is demonstrated as a powerful technique to deal with sparse statistics in SCN modeling. The perturbed model is then analyzed to uncover “hidden” non-linear relationships between key variables and their sensitivity in dictating economic arbitrage. Two complementary ensemble-based approaches have been used—Feedforward Neural Network (FNN) model and Random Forest (RF) model. While FNN excels in regressing the model performance against the industry-specified target, RF is better in dealing with feature engineering and dimensional reduction, thus identifying the most influential variables. Our results demonstrate that the FNN model is a superior predictor of arbitrage conditions compared to the RF model. The tangible deliverable is a data-driven toolkit for smart engineering solutions to ensure sustainable e-waste management. Full article
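Generating stochastically perturbed modules from an optimal point, as described, amounts to Monte Carlo sampling around that point; a sketch with hypothetical supply-chain variables and a ±10% uniform perturbation (both are assumptions, not the study's setup):

```python
import random

random.seed(42)

# Optimal operating point of the supply chain -- variable names and
# values are illustrative assumptions, not the study's dataset.
optimum = {"collection_cost": 120.0, "recovery_rate": 0.65, "resale_price": 310.0}

def perturb(point, rel_noise=0.10):
    """Return one stochastically perturbed copy of the optimal point."""
    return {k: v * (1 + random.uniform(-rel_noise, rel_noise))
            for k, v in point.items()}

modules = [perturb(optimum) for _ in range(6)]  # six perturbed modules
print(len(modules), sorted(modules[0]))
```

Each perturbed module plays the role of a synthetic scenario (market fluctuation, policy change), which is how Monte Carlo simulation compensates for sparse observed data.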
14 pages, 879 KB  
Article
Predicting Factors Associated with Extended Hospital Stay After Postoperative ICU Admission in Hip Fracture Patients Using Statistical and Machine Learning Methods: A Retrospective Single-Center Study
by Volkan Alparslan, Sibel Balcı, Ayetullah Gök, Can Aksu, Burak İnner, Sevim Cesur, Hadi Ufuk Yörükoğlu, Berkay Balcı, Pınar Kartal Köse, Veysel Emre Çelik, Serdar Demiröz and Alparslan Kuş
Healthcare 2025, 13(19), 2507; https://doi.org/10.3390/healthcare13192507 - 2 Oct 2025
Abstract
Background: Hip fractures are common in the elderly and often require ICU admission post-surgery due to high ASA scores and comorbidities. Length of hospital stay after ICU is a crucial indicator affecting patient recovery, complication rates, and healthcare costs. This study aimed to develop and validate a machine learning-based model to predict the factors associated with extended hospital stay (>7 days from surgery to discharge) in hip fracture patients requiring postoperative ICU care. The findings could help clinicians optimize ICU bed utilization and improve patient management strategies. Methods: In this retrospective single-centre cohort study conducted in a tertiary ICU in Turkey (2017–2024), 366 ICU-admitted hip fracture patients were analysed. Conventional statistical analyses were performed using SPSS 29, including Mann–Whitney U and chi-squared tests. To identify independent predictors associated with extended hospital stay, Least Absolute Shrinkage and Selection Operator (LASSO) regression was applied for variable selection, followed by multivariate binary logistic regression analysis. In addition, machine learning models (binary logistic regression, random forest (RF), extreme gradient boosting (XGBoost) and decision tree (DT)) were trained to predict the likelihood of extended hospital stay, defined as the total number of days from the date of surgery until hospital discharge, including both ICU and subsequent ward stay. Model performance was evaluated using AUROC, F1 score, accuracy, precision, recall, and Brier score. SHAP (SHapley Additive exPlanations) values were used to interpret feature contributions in the XGBoost model. Results: The XGBoost model showed the best performance, except for precision. The XGBoost model gave an AUROC of 0.80, precision of 0.67, recall of 0.92, F1 score of 0.78, accuracy of 0.71 and Brier score of 0.18. 
According to SHAP analysis, time from fracture to surgery, hypoalbuminaemia and ASA score were the variables that most affected the length of hospital stay. Conclusions: The developed machine learning model successfully classified hip fracture patients into short and extended hospital stay groups following postoperative intensive care. This classification model has the potential to aid in patient flow management, resource allocation, and clinical decision support. External validation will further strengthen its applicability across different settings. Full article
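The Brier score used alongside AUROC here is simply the mean squared difference between predicted probabilities and observed binary outcomes; a minimal sketch with illustrative values:

```python
# Illustrative predicted probabilities of extended stay and observed
# outcomes (1 = extended, 0 = short) -- not the study's data.
probs = [0.9, 0.2, 0.7, 0.4, 0.85, 0.1]
labels = [1, 0, 1, 1, 1, 0]

# Brier score: mean squared error of the probability forecasts;
# 0 is perfect, 0.25 matches an uninformative constant 0.5 forecast.
brier = sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)
print(round(brier, 4))
```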
18 pages, 1460 KB  
Article
AI-Based Severity Classification of Dementia Using Gait Analysis
by Gangmin Moon, Jaesung Cho, Hojin Choi, Yunjin Kim, Gun-Do Kim and Seong-Ho Jang
Sensors 2025, 25(19), 6083; https://doi.org/10.3390/s25196083 - 2 Oct 2025
Abstract
This study aims to explore the utility of artificial intelligence (AI) in classifying dementia severity based on gait analysis data and to examine how machine learning (ML) can address the limitations of conventional statistical approaches. The study included 34 individuals with mild cognitive impairment (MCI), 25 with mild dementia, 26 with moderate dementia, and 54 healthy controls. A support vector machine (SVM) classifier was employed to categorize dementia severity using gait parameters. As complexity and high dimensionality of gait data increase, traditional statistical methods may struggle to capture subtle patterns and interactions among variables. In contrast, ML techniques, including dimensionality reduction methods such as principal component analysis (PCA) and gradient-based feature selection, can effectively identify key gait features relevant to dementia severity classification. This study shows that ML can complement traditional statistical analyses by efficiently handling high-dimensional data and uncovering meaningful patterns that may be overlooked by conventional methods. Our findings highlight the promise of AI-based tools in advancing our understanding of gait characteristics in dementia and supporting the development of more accurate diagnostic models for complex or large datasets. Full article
(This article belongs to the Section Intelligent Sensors)
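The PCA step described above can be sketched with an SVD on a synthetic subjects-by-gait-parameters matrix; the data are random stand-ins, not the study's gait recordings (only the subject count, 34 + 25 + 26 + 54 = 139, is taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic subjects-by-gait-parameters matrix -- random stand-ins,
# not real gait data (139 subjects matches the study's total).
X = rng.normal(size=(139, 12))
Xc = X - X.mean(axis=0)              # center each gait parameter
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 3                                # keep the first three components
scores = Xc @ Vt[:k].T               # low-dimensional representation
explained = float((S[:k] ** 2).sum() / (S ** 2).sum())
print(scores.shape, round(explained, 3))
```

The reduced `scores` matrix is what a downstream SVM classifier would consume in place of the raw high-dimensional gait features.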
23 pages, 12417 KB  
Article
Optimizing EDM of Gunmetal with Al2O3-Enhanced Dielectric: Experimental Insights and Machine Learning Models
by Saumya Kanwal, Usha Sharma, Saurabh Chauhan, Anuj Kumar Sharma, Jitendra Kumar Katiyar, Rabesh Kumar Singh and Shalini Mohanty
Materials 2025, 18(19), 4578; https://doi.org/10.3390/ma18194578 - 2 Oct 2025
Abstract
This study investigates the optimization of electric discharge machining (EDM) parameters for gunmetal using copper electrodes in two different dielectric environments, which are conventional EDM oil and EDM oil infused with Al2O3 nanoparticles. A Taguchi L27 orthogonal array design was used to evaluate the effects of current, voltage, and pulse-on time on Material Removal Rate (MRR), Electrode Wear Rate (EWR), and surface roughness (Ra, Rq, and Rz). Analysis of Variance (ANOVA) was used to statistically evaluate the influence of each parameter on machining performance. In addition, machine learning models including Linear Regression, Ridge Regression, Support Vector Regression, Random Forest, Gradient Boosting, and Neural Networks were implemented to predict performance outcomes. The originality of this research is not only rooted in the introduction of new models; rather, it is also found in the comparative analysis of various machine learning methodologies applied to the performance of electrical discharge machining (EDM) utilizing Al2O3-enhanced dielectrics. This investigation focuses specifically on gunmetal, a material that has not been extensively studied within this framework. The nanoparticle-enhanced dielectric demonstrated improved machining performance, achieving approximately 15% higher MRR, 20% lower EWR, and 10% improved surface finish compared to conventional EDM oil. Neural Networks consistently outperformed other models in predictive accuracy. Results indicate that the use of nanoparticle-infused dielectrics in EDM, coupled with data-driven optimization techniques, enhances productivity, tool life, and surface quality. Full article
(This article belongs to the Special Issue Non-conventional Machining: Materials and Processes)
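Taguchi designs like the L27 array are typically analyzed through signal-to-noise ratios; a sketch of the larger-the-better S/N ratio as it might be applied to MRR replicates for one run (the replicate values are hypothetical):

```python
import math

# Larger-the-better signal-to-noise ratio used in Taguchi analysis:
# S/N = -10 * log10(mean(1 / y_i^2)). Higher is better.
def sn_larger_is_better(values):
    return -10 * math.log10(sum(1 / v ** 2 for v in values) / len(values))

mrr_replicates = [12.4, 13.1, 12.8]  # hypothetical MRR readings, mm^3/min
sn = sn_larger_is_better(mrr_replicates)
print(round(sn, 2))
```

For EWR and surface roughness, where lower values are desirable, the smaller-the-better form -10*log10(mean(y_i^2)) would be used instead.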
26 pages, 5861 KB  
Article
Robust Industrial Surface Defect Detection Using Statistical Feature Extraction and Capsule Network Architectures
by Azeddine Mjahad and Alfredo Rosado-Muñoz
Sensors 2025, 25(19), 6063; https://doi.org/10.3390/s25196063 - 2 Oct 2025
Abstract
Automated quality control is critical in modern manufacturing, especially for metallic cast components, where fast and accurate surface defect detection is required. This study evaluates classical Machine Learning (ML) algorithms using extracted statistical parameters and deep learning (DL) architectures including ResNet50, Capsule Networks, and a 3D Convolutional Neural Network (CNN3D) using 3D image inputs. Using the Dataset Original, ML models with the selected parameters achieved high performance: RF reached 99.4 ± 0.2% precision and 99.4 ± 0.2% sensitivity, GB 96.0 ± 0.2% precision and 96.0 ± 0.2% sensitivity. ResNet50 trained with extracted parameters reached 98.0 ± 1.5% accuracy and 98.2 ± 1.7% F1-score. Capsule-based architectures achieved the best results, with ConvCapsuleLayer reaching 98.7 ± 0.2% accuracy and 100.0 ± 0.0% precision for the normal class, and 98.9 ± 0.2% F1-score for the affected class. CNN3D applied on 3D image inputs reached 88.61 ± 1.01% accuracy and 90.14 ± 0.95% F1-score. Using the Dataset Expanded with ML and PCA-selected features, Random Forest achieved 99.4 ± 0.2% precision and 99.4 ± 0.2% sensitivity, K-Nearest Neighbors 99.2 ± 0.0% precision and 99.2 ± 0.0% sensitivity, and SVM 99.2 ± 0.0% precision and 99.2 ± 0.0% sensitivity, demonstrating consistent high performance. All models were evaluated using repeated train-test splits to calculate averages of standard metrics (accuracy, precision, recall, F1-score), and processing times were measured, showing very low per-image execution times (as low as 3.69 × 10⁻⁴ s/image), supporting potential real-time industrial application. These results indicate that combining statistical descriptors with ML and DL architectures provides a robust and scalable solution for automated, non-destructive surface defect detection, with high accuracy and reliability across both the original and expanded datasets. Full article
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems—2nd Edition)
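The abstract does not list which statistical parameters were extracted; a plausible sketch computes first- to fourth-moment descriptors of pixel intensities on a random stand-in image (the feature choice and the image are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random 8-bit "image" as a stand-in for a cast-component surface photo.
img = rng.integers(0, 256, size=(64, 64)).astype(float)
x = img.ravel()

mu, sigma = x.mean(), x.std()
skew = ((x - mu) ** 3).mean() / sigma ** 3          # third standardized moment
kurt = ((x - mu) ** 4).mean() / sigma ** 4 - 3.0    # excess kurtosis

features = np.array([mu, sigma, skew, kurt])        # one feature vector per image
print(features.shape)
```

Vectors like this, one per image, are what the RF/GB/KNN/SVM classifiers in the study would be trained on, optionally after PCA selection.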
18 pages, 748 KB  
Review
Statistical Methods for Multi-Omics Analysis in Neurodevelopmental Disorders: From High Dimensionality to Mechanistic Insight
by Manuel Airoldi, Veronica Remori and Mauro Fasano
Biomolecules 2025, 15(10), 1401; https://doi.org/10.3390/biom15101401 - 2 Oct 2025
Abstract
Neurodevelopmental disorders (NDDs), including autism spectrum disorder, intellectual disability, and attention-deficit/hyperactivity disorder, are genetically and phenotypically heterogeneous conditions affecting millions worldwide. High-throughput omics technologies—transcriptomics, proteomics, metabolomics, and epigenomics—offer a unique opportunity to link genetic variation to molecular and cellular mechanisms underlying these disorders. However, the high dimensionality, sparsity, batch effects, and complex covariance structures of omics data present significant statistical challenges, requiring robust normalization, batch correction, imputation, dimensionality reduction, and multivariate modeling approaches. This review provides a comprehensive overview of statistical frameworks for analyzing high-dimensional omics datasets in NDDs, including univariate and multivariate models, penalized regression, sparse canonical correlation analysis, partial least squares, and integrative multi-omics methods such as DIABLO, similarity network fusion, and MOFA. We illustrate how these approaches have revealed convergent molecular signatures—synaptic, mitochondrial, and immune dysregulation—across transcriptomic, proteomic, and metabolomic layers in human cohorts and experimental models. Finally, we discuss emerging strategies, including single-cell and spatially resolved omics, machine learning-driven integration, and longitudinal multi-modal analyses, highlighting their potential to translate complex molecular patterns into mechanistic insights, biomarkers, and therapeutic targets. Integrative multi-omics analyses, grounded in rigorous statistical methodology, are poised to advance mechanistic understanding and precision medicine in NDDs. Full article
(This article belongs to the Section Bioinformatics and Systems Biology)
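Penalized regression in the p >> n regime typical of omics data has a simple closed form in the ridge case; a sketch on simulated data (the dimensions, sparsity, and noise level are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated omics-like design: 40 samples, 200 features (p >> n), with a
# sparse true signal -- dimensions and noise level are assumptions.
n, p = 40, 200
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0                  # five truly associated features
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Ridge estimate: (X'X + lambda*I)^-1 X'y -- the penalty makes the
# otherwise rank-deficient normal equations invertible.
lam = 1.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

fit_corr = float(np.corrcoef(X @ beta_hat, y)[0, 1])
print(beta_hat.shape, round(fit_corr, 3))
```

LASSO and sparse CCA, also discussed in the review, replace the squared penalty with an L1 term and so need iterative solvers rather than a closed form.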
30 pages, 4602 KB  
Article
Intelligent Fault Diagnosis of Ball Bearing Induction Motors for Predictive Maintenance Industrial Applications
by Vasileios I. Vlachou, Theoklitos S. Karakatsanis, Stavros D. Vologiannidis, Dimitrios E. Efstathiou, Elisavet L. Karapalidou, Efstathios N. Antoniou, Agisilaos E. Efraimidis, Vasiliki E. Balaska and Eftychios I. Vlachou
Machines 2025, 13(10), 902; https://doi.org/10.3390/machines13100902 - 2 Oct 2025
Abstract
Induction motors (IMs) are crucial in many industrial applications, offering a cost-effective and reliable source of power transmission and generation. However, their continuous operation imposes considerable stress on electrical and mechanical parts, leading to progressive wear that can cause unexpected system shutdowns. Bearings, which enable shaft motion and reduce friction under varying loads, are the most failure-prone components, with bearing ball defects representing the most severe mechanical failures. Early and accurate fault diagnosis is therefore essential to prevent damage and ensure operational continuity. Recent advances in the Internet of Things (IoT) and machine learning (ML) have enabled timely and effective predictive maintenance strategies. Among various diagnostic parameters, vibration analysis has proven particularly effective for detecting bearing faults. This study proposes a hybrid diagnostic framework for induction motor bearings, combining vibration signal analysis with Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs) in an IoT-enabled Industry 4.0 architecture. Statistical and frequency-domain features were extracted, reduced using Principal Component Analysis (PCA), and classified with SVMs and ANNs, achieving over 95% accuracy. The novelty of this work lies in the hybrid integration of interpretable and non-linear ML models within an IoT-based edge–cloud framework. Its main contribution is a scalable and accurate real-time predictive maintenance solution, ensuring high diagnostic reliability and seamless integration in Industry 4.0 environments. Full article
(This article belongs to the Special Issue Vibration Detection of Induction and PM Motors)
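The frequency-domain feature extraction stage can be sketched with an FFT of a synthetic vibration signal; the sampling rate and defect frequency below are assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic vibration signal: one sinusoid standing in for a bearing
# defect frequency, plus measurement noise -- both are assumptions.
fs = 10_000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 160 * t) + 0.3 * rng.normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
peak = float(freqs[spectrum.argmax()])        # dominant-frequency feature
print(peak)
```

Peak frequencies and band energies from spectra like this, together with time-domain statistics, form the feature vectors that PCA reduces before SVM/ANN classification.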
14 pages, 1037 KB  
Article
MMSE-Based Dementia Prediction: Deep vs. Traditional Models
by Yuyeon Jung, Yeji Park, Jaehyun Jo and Jinhyoung Jeong
Life 2025, 15(10), 1544; https://doi.org/10.3390/life15101544 - 1 Oct 2025
Abstract
Early and accurate diagnosis of dementia is essential to improving patient outcomes and reducing societal burden. The Mini-Mental State Examination (MMSE) is widely used to assess cognitive function, yet traditional statistical and machine learning approaches often face limitations in capturing nonlinear interactions and subtle decline patterns. This study developed a novel deep learning-based dementia prediction model using MMSE data collected from domestic clinical settings and compared its performance with traditional machine learning models. A notable strength of this work lies in its use of item-level MMSE features combined with explainable AI (SHAP analysis), enabling both high predictive accuracy and clinical interpretability—an advancement over prior approaches that primarily relied on total scores or linear modeling. Data from 164 participants, classified into cognitively normal, mild cognitive impairment (MCI), and dementia groups, were analyzed. Individual MMSE items and total scores were used as input features, and the dataset was divided into training and validation sets (8:2 split). A fully connected neural network with regularization techniques was constructed and evaluated alongside Random Forest and support vector machine (SVM) classifiers. Model performance was assessed using accuracy, F1-score, confusion matrices, and receiver operating characteristic (ROC) curves. The deep learning model achieved the highest performance (accuracy 0.90, F1-score 0.90), surpassing Random Forest (0.86) and SVM (0.82). SHAP analysis identified Q11 (immediate memory), Q12 (calculation), and Q17 (drawing shapes) as the most influential variables, aligning with clinical diagnostic practices. These findings suggest that deep learning not only enhances predictive accuracy but also offers interpretable insights aligned with clinical reasoning, underscoring its potential utility as a reliable tool for early dementia diagnosis. 
However, the study is limited by the use of data from a single clinical site with a relatively small sample size, which may restrict generalizability. Future research should validate the model using larger, multi-institutional, and multimodal datasets to strengthen clinical applicability and robustness.
(This article belongs to the Section Biochemistry, Biophysics and Computational Biology)
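The abstract above describes a fully connected network trained on item-level MMSE features to separate normal, MCI, and dementia groups. A minimal sketch of that idea, assuming synthetic data, invented item indices standing in for Q11/Q12/Q17, and a plain two-layer numpy network rather than the authors' actual model:

```python
import numpy as np

# Illustrative only: synthetic "MMSE-like" item scores classified into
# three groups (normal / MCI / dementia) by a small fully connected
# network with L2 regularization. The data, item indices, and network
# size are invented; the paper's model and dataset are not public here.

rng = np.random.default_rng(0)

def make_data(n=300):
    # 19 item scores in [0, 1]; severity shifts the mean of three
    # "memory/calculation/drawing" items (stand-ins for Q11, Q12, Q17).
    y = rng.integers(0, 3, n)
    X = rng.uniform(0, 1, (n, 19))
    X[:, [10, 11, 16]] -= 0.25 * y[:, None]  # worse scores with severity
    return X, y

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(X, y, hidden=16, lr=0.5, epochs=300, l2=1e-4):
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, 3)); b2 = np.zeros(3)
    Y = np.eye(3)[y]
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)           # hidden layer
        P = softmax(H @ W2 + b2)           # class probabilities
        G2 = (P - Y) / n                   # cross-entropy gradient
        GH = G2 @ W2.T * (1 - H**2)        # backprop through tanh
        W2 -= lr * (H.T @ G2 + l2 * W2); b2 -= lr * G2.sum(0)
        W1 -= lr * (X.T @ GH + l2 * W1); b1 -= lr * GH.sum(0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(1)

X, y = make_data()
Xtr, ytr, Xte, yte = X[:240], y[:240], X[240:], y[240:]
params = train(Xtr, ytr)
acc = (predict(params, Xte) == yte).mean()
print(f"test accuracy: {acc:.2f}")
```

In the paper, SHAP values over the item-level inputs would then attribute each prediction back to individual MMSE items; that step is omitted here since it requires the `shap` library and the trained clinical model.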
31 pages, 1105 KB  
Article
MoCap-Impute: A Comprehensive Benchmark and Comparative Analysis of Imputation Methods for IMU-Based Motion Capture Data
by Mahmoud Bekhit, Ahmad Salah, Ahmed Salim Alrawahi, Tarek Attia, Ahmed Ali, Esraa Eldesouky and Ahmed Fathalla
Information 2025, 16(10), 851; https://doi.org/10.3390/info16100851 - 1 Oct 2025
Abstract
Motion capture (MoCap) data derived from wearable Inertial Measurement Units (IMUs) is essential to applications in sports science and healthcare robotics. However, much of this data's potential is limited by missing values arising from sensor limitations, network issues, and environmental interference. Such gaps can introduce bias, prevent the fusion of critical data streams, and ultimately compromise the integrity of human activity analysis. Despite the plethora of data imputation techniques available, there have been few systematic performance evaluations of these techniques explicitly for the time series of IMU-derived MoCap data. We address this gap by evaluating imputation performance across three distinct contexts: univariate time series, multivariate across players, and multivariate across kinematic angles. To this end, we propose a systematic comparative analysis of imputation techniques, including statistical, machine learning, and deep learning methods. We also introduce the first publicly available MoCap dataset designed specifically for benchmarking missing value imputation, with three missingness mechanisms: missing completely at random, block missingness, and a value-dependent pattern simulated at signal transition points. Using data from 53 karate practitioners performing standardized movements, we artificially generated missing values to create controlled experimental conditions. Experiments across the 53 subjects and 39 kinematic variables showed that multivariate imputation frameworks surpass univariate approaches when handling more complex missingness mechanisms. 
Specifically, multivariate approaches achieved up to a 50% error reduction (MAE improving from 10.8 ± 6.9 to 5.8 ± 5.5) compared to univariate methods for transition-point missingness. Specialized time series deep learning models (i.e., SAITS, BRITS, GRU-D) demonstrated superior performance, with MAE values consistently below 8.0 for univariate contexts and below 3.2 for multivariate contexts across all missing data percentages, significantly surpassing traditional machine learning and statistical methods. Other notable methods, such as Generative Adversarial Imputation Networks and Iterative Imputers, exhibited competitive performance but remained less stable than the specialized temporal models. This work offers an important baseline for future studies, along with recommendations for researchers looking to increase the accuracy, robustness, integrity, and trustworthiness of MoCap data analysis.
(This article belongs to the Section Information Processes)
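The distinction the abstract draws between univariate and multivariate imputation can be illustrated on toy data. A hedged sketch, assuming two invented, correlated "joint angle" signals with a contiguous block of missing samples (not the paper's benchmark, methods, or dataset):

```python
import numpy as np

# Illustrative comparison: a univariate imputer (linear interpolation
# over time) versus a multivariate one (regressing the gappy channel on
# an intact, correlated channel). Signals, noise level, and the block
# missingness pattern are all invented for this sketch.

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 500)
angle_a = np.sin(t) + 0.05 * rng.normal(size=t.size)               # intact channel
angle_b = 0.8 * np.sin(t) + 0.2 + 0.05 * rng.normal(size=t.size)   # channel to gap

# Block missingness: a contiguous run of 100 samples is dropped.
mask = np.zeros(t.size, dtype=bool)
mask[200:300] = True
truth = angle_b[mask]

# Univariate: linear interpolation across the gap using time alone.
obs_idx = np.flatnonzero(~mask)
uni = np.interp(np.flatnonzero(mask), obs_idx, angle_b[obs_idx])

# Multivariate: least-squares fit of B on A over the observed rows,
# then prediction of the missing rows from the intact channel.
A = np.column_stack([angle_a[~mask], np.ones((~mask).sum())])
coef, *_ = np.linalg.lstsq(A, angle_b[~mask], rcond=None)
multi = angle_a[mask] * coef[0] + coef[1]

mae_uni = np.abs(uni - truth).mean()
mae_multi = np.abs(multi - truth).mean()
print(f"MAE univariate:   {mae_uni:.3f}")
print(f"MAE multivariate: {mae_multi:.3f}")
```

Over a long gap, interpolation can only draw a straight line through curved motion, while the cross-channel regression still tracks the waveform; this mirrors, in miniature, why the paper finds multivariate frameworks stronger under block and transition-point missingness.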
19 pages, 3159 KB  
Article
Optimizing Traffic Accident Severity Prediction with a Stacking Ensemble Framework
by Imad El Mallahi, Jamal Riffi, Hamid Tairi, Nikola S. Nikolov, Mostafa El Mallahi and Mohamed Adnane Mahraz
World Electr. Veh. J. 2025, 16(10), 561; https://doi.org/10.3390/wevj16100561 - 1 Oct 2025
Abstract
Road traffic crashes (RTCs) have emerged as a major global cause of fatalities, with the number of accident-related deaths rising rapidly each day. To mitigate this issue, it is essential to develop early prediction methods that help drivers and riders understand accident statistics relevant to their region. These methods should consider key factors such as speed limits, compliance with traffic signs and signals, pedestrian crossings, right-of-way rules, weather conditions, driver negligence, fatigue, and the impact of excessive speed on RTC occurrences. Raising awareness of these factors enables individuals to exercise greater caution, thereby contributing to accident prevention. A promising approach to improving road traffic accident severity classification is the stacking ensemble method, which leverages multiple machine learning models. This technique addresses challenges such as imbalanced datasets and high-dimensional features by combining predictions from various base models into a meta-model, ultimately enhancing classification accuracy. The ensemble approach exploits the diverse strengths of different models, capturing multiple aspects of the data to improve predictive performance. The effectiveness of stacking depends on the careful selection of base models with complementary strengths, ensuring robust and reliable predictions. Additionally, advanced feature engineering and selection techniques can further optimize the model’s performance. Within the field of artificial intelligence, various machine learning (ML) techniques have been explored to support decision making in tackling RTC-related issues. These methods aim to generate precise reports and insights. However, the stacking method has demonstrated significantly superior performance compared to existing approaches, making it a valuable tool for improving road safety.
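The stacking mechanism the abstract describes, out-of-fold base-model predictions feeding a meta-model, can be sketched in a few lines. This is a hypothetical, numpy-only illustration on invented binary "severity" data, with two deliberately simple base classifiers rather than the paper's actual pipeline:

```python
import numpy as np

# Minimal stacking sketch: two base classifiers (logistic regression
# and nearest centroid) produce out-of-fold probabilities, and a
# logistic meta-model combines them. Data and models are invented.

rng = np.random.default_rng(2)
n, d = 400, 6
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -1.0, 0.8, 0.0, 0.0, 0.5])
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def fit_logreg(X, y, lr=0.1, epochs=300):
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def predict_proba(wb, X):
    w, b = wb
    return 1 / (1 + np.exp(-(X @ w + b)))

def nearest_centroid_proba(Xtr, ytr, X):
    mu0, mu1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    d0 = ((X - mu0) ** 2).sum(1)
    d1 = ((X - mu1) ** 2).sum(1)
    return 1 / (1 + np.exp(d1 - d0))  # closer to class-1 centroid -> higher

# Out-of-fold base predictions (5 folds) become the meta-features,
# so the meta-model never sees a base prediction made on training rows.
folds = np.array_split(rng.permutation(n), 5)
meta = np.zeros((n, 2))
for f in folds:
    tr = np.setdiff1d(np.arange(n), f)
    meta[f, 0] = predict_proba(fit_logreg(X[tr], y[tr]), X[f])
    meta[f, 1] = nearest_centroid_proba(X[tr], y[tr], X[f])

meta_model = fit_logreg(meta, y)
stack_pred = (predict_proba(meta_model, meta) > 0.5).astype(float)
acc = (stack_pred == y).mean()
print(f"stacked accuracy: {acc:.2f}")
```

A real pipeline would use stronger, more diverse base learners (tree ensembles, SVMs, gradient boosting) and handle the multi-class, imbalanced severity labels the abstract mentions; the out-of-fold construction above is the part that defines stacking.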
12 pages, 1169 KB  
Article
Demographic, Morphological, and Histopathological Characteristics of Melanoma and Nevi: Insights from Statistical Analysis and Machine Learning Models
by Blagjica Lazarova, Gordana Petrushevska, Zdenka Stojanovska and Stephen C. Mullins
Diagnostics 2025, 15(19), 2499; https://doi.org/10.3390/diagnostics15192499 - 1 Oct 2025
Abstract
Background: Early and accurate differentiation between melanomas and benign nevi is essential for making proper clinical decisions. This study aimed to identify clinical, morphological, and histopathological variables most strongly associated with melanoma, using both statistical and machine learning approaches. Methods: This study evaluated 184 melanocytic lesions using clinical, morphological, and histopathological parameters. Univariable analyses were performed in XLStat statistical software, version 2014.5.03, while multivariable machine learning models were developed in Jamovi (version 2.4). Five supervised algorithms (random forest, partial least squares, elastic net regression, conditional inference trees, and k-nearest neighbors) were compared using repeated cross-validation, with performance evaluated by accuracy, Kappa, sensitivity, specificity, F1 score, and calibration. Results: Univariable analysis identified significant differences between melanomas and nevi in age, horizontal diameter, gender, lesion location, and selected histopathological features (cytological and extracellular matrix changes, epidermal interactions). However, several associations weakened in multivariable analysis due to collinearity and overlapping effects. Using glmnet, the most influential independent predictors were cytological changes, horizontal diameter, epidermal interactions, and extracellular matrix features, alongside age, gender, and lesion location. The model achieved high discrimination (AUC = 0.97, 95% CI: 0.93–0.99) and accuracy (training: 95.3%; test: 92.6%), confirming robustness. Conclusions: Structured demographic, morphological, and histopathological data—particularly age, lesion size, cytological and extracellular matrix changes, and epidermal interactions—can effectively support classification of melanocytic lesions. 
Machine learning approaches (the glmnet model in our study) provide a reliable framework to evaluate such predictors and offer practical diagnostic support in dermatopathology.
(This article belongs to the Special Issue Artificial Intelligence in Dermatology)
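The glmnet model in the abstract is an elastic-net-penalized regression, mixing L1 and L2 penalties so that collinear, weak predictors shrink toward zero while strong ones survive. A rough numpy stand-in fit by proximal gradient descent, assuming invented predictors and synthetic melanoma-versus-nevus labels (not the authors' data or fitted model):

```python
import numpy as np

# Illustrative elastic-net logistic regression: gradient step on the
# smooth loss plus L2 term, then soft-thresholding for the L1 term.
# Predictor layout (e.g. age, diameter, cytology, matrix scores, ...)
# and penalty weights are invented for this sketch.

rng = np.random.default_rng(3)
n, d = 300, 8
X = rng.normal(size=(n, d))
w_true = np.array([1.2, 0.9, 1.5, 0.7, 0.0, 0.0, 0.0, 0.0])  # 4 real signals
y = (X @ w_true + rng.normal(size=n) > 0).astype(float)

def fit_elastic_net(X, y, alpha=0.01, l1_ratio=0.5, lr=0.1, epochs=500):
    w = np.zeros(X.shape[1]); b = 0.0
    l1, l2 = alpha * l1_ratio, alpha * (1 - l1_ratio)
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = X.T @ (p - y) / len(y) + l2 * w     # smooth part of gradient
        w -= lr * g
        b -= lr * (p - y).mean()
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)  # L1 prox
    return w, b

w, b = fit_elastic_net(X, y)
p = 1 / (1 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
print("largest |coefficients|:", np.argsort(-np.abs(w))[:4])
```

The soft-threshold line is what distinguishes elastic net from ridge: it can zero out redundant predictors entirely, which is why the paper's multivariable model retains a compact set of independent predictors despite collinearity in the univariable results.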