Search Results (1,987)

Search Parameters:
Keywords = deep supervised learning

30 pages, 815 KB  
Review
Next-Generation Machine Learning in Healthcare Fraud Detection: Current Trends, Challenges, and Future Research Directions
by Kamran Razzaq and Mahmood Shah
Information 2025, 16(9), 730; https://doi.org/10.3390/info16090730 - 25 Aug 2025
Abstract
The growing complexity and size of healthcare systems have rendered fraud detection increasingly challenging; however, the current literature lacks a holistic view of the latest machine learning (ML) techniques and their practical implementation concerns. The present study addresses this gap by highlighting the importance of ML in preventing and mitigating healthcare fraud, evaluating recent advancements, investigating implementation barriers, and exploring future research directions. To further address the limited research on the evaluation of ML and hybrid approaches, this study considers a broad spectrum of techniques, including supervised ML, unsupervised ML, deep learning, and hybrid approaches such as SMOTE-ENN, explainable AI, federated learning, and ensemble learning. The study also explores their potential for enhancing fraud detection in imbalanced and multidimensional datasets. A significant finding is the identification of commonly employed datasets, such as Medicare, the List of Excluded Individuals and Entities (LEIE), and Kaggle datasets, which serve as baselines for evaluating ML models. The findings comprehensively identify the challenges of employing ML in healthcare systems, including data quality, system scalability, regulatory compliance, and resource constraints. The study provides actionable insights, such as model interpretability to enable regulatory compliance and federated learning for confidential data sharing, which are particularly relevant for policymakers, healthcare providers, and insurance companies intending to deploy robust, scalable, and secure fraud detection infrastructure. The study presents a comprehensive framework for enhancing real-time healthcare fraud detection through self-learning, interpretable, and safe ML infrastructures, integrating theoretical advancements with practical application needs. 
Full article
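The SMOTE-style oversampling this review surveys can be illustrated with a minimal pure-Python sketch. The function name and toy data below are hypothetical; a real pipeline would use a library such as imbalanced-learn, and SMOTE-ENN additionally cleans the oversampled set with Edited Nearest Neighbours.

```python
import random

def smote_like_oversample(minority, n_new, k=3, seed=0):
    """Generate synthetic minority samples by interpolating between a
    random minority sample and one of its k nearest neighbours
    (the core idea behind SMOTE)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic

fraud = [(0.9, 0.8), (1.0, 1.1), (0.85, 0.95)]  # toy minority class
new_points = smote_like_oversample(fraud, n_new=5)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled class stays inside the region the minority already occupies.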

25 pages, 4100 KB  
Article
An Adaptive Unsupervised Learning Approach for Credit Card Fraud Detection
by John Adejoh, Nsikak Owoh, Moses Ashawa, Salaheddin Hosseinzadeh, Alireza Shahrabi and Salma Mohamed
Big Data Cogn. Comput. 2025, 9(9), 217; https://doi.org/10.3390/bdcc9090217 - 25 Aug 2025
Abstract
Credit card fraud remains a major cause of financial loss around the world. Traditional fraud detection methods that rely on supervised learning often struggle because fraudulent transactions are rare compared to legitimate ones, leading to imbalanced datasets. Additionally, the models must be retrained frequently, as fraud patterns change over time and require new labeled data for retraining. To address these challenges, this paper proposes an ensemble unsupervised learning approach for credit card fraud detection that combines Autoencoders (AEs), Self-Organizing Maps (SOMs), and Restricted Boltzmann Machines (RBMs), integrated with an Adaptive Reconstruction Threshold (ART) mechanism. The ART dynamically adjusts anomaly detection thresholds by leveraging the clustering properties of SOMs, effectively overcoming the limitations of static threshold approaches in machine learning and deep learning models. The proposed models, AE-ASOMs (Autoencoder—Adaptive Self-Organizing Maps) and RBM-ASOMs (Restricted Boltzmann Machines—Adaptive Self-Organizing Maps), were evaluated on the Kaggle Credit Card Fraud Detection and IEEE-CIS datasets. Our AE-ASOM model achieved an accuracy of 0.980 and an F1-score of 0.967, while the RBM-ASOM model achieved an accuracy of 0.975 and an F1-score of 0.955. Compared to models such as One-Class SVM and Isolation Forest, our approach demonstrates higher detection accuracy and significantly reduces false positive rates. In addition to its performance, the model offers considerable computational efficiency with a training time of 200.52 s and memory usage of 3.02 megabytes. Full article
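The Adaptive Reconstruction Threshold idea — a per-cluster cutoff on reconstruction error instead of one global threshold — can be sketched in plain Python. The mean + k·std rule, function names, and toy numbers are illustrative assumptions, not the paper's exact formulation.

```python
from statistics import mean, stdev

def adaptive_thresholds(errors, clusters, k=2.0):
    """Fit one threshold per cluster: mean + k * std of that cluster's
    reconstruction errors on (mostly legitimate) training data."""
    by_cluster = {}
    for e, c in zip(errors, clusters):
        by_cluster.setdefault(c, []).append(e)
    return {c: mean(es) + k * (stdev(es) if len(es) > 1 else 0.0)
            for c, es in by_cluster.items()}

def flag_anomalies(errors, clusters, thresholds):
    """A transaction is flagged against its own cluster's threshold,
    not a single global cutoff."""
    return [e > thresholds[c] for e, c in zip(errors, clusters)]

train_errors   = [0.10, 0.12, 0.11, 0.09, 0.30, 0.31, 0.29, 0.32]
train_clusters = [0,    0,    0,    0,    1,    1,    1,    1   ]
th = adaptive_thresholds(train_errors, train_clusters)

new_errors   = [0.12, 0.50, 0.30, 0.45]
new_clusters = [0,    0,    1,    1   ]
flags = flag_anomalies(new_errors, new_clusters, th)
```

Note that the error 0.30 is normal for cluster 1 but would look anomalous under a threshold fit on cluster 0 alone — the motivation for adapting the cutoff per cluster.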

44 pages, 4243 KB  
Review
AI-Powered Building Ecosystems: A Narrative Mapping Review on the Integration of Digital Twins and LLMs for Proactive Comfort, IEQ, and Energy Management
by Bibars Amangeldy, Nurdaulet Tasmurzayev, Timur Imankulov, Zhanel Baigarayeva, Nurdaulet Izmailov, Tolebi Riza, Abdulaziz Abdukarimov, Miras Mukazhan and Bakdaulet Zhumagulov
Sensors 2025, 25(17), 5265; https://doi.org/10.3390/s25175265 - 24 Aug 2025
Abstract
Artificial intelligence (AI) is now the computational core of smart building automation, acting across the entire cyber–physical stack. This review surveys peer-reviewed work on the integration of AI with indoor environmental quality (IEQ) and energy performance, distinguishing itself by presenting a holistic synthesis of the complete technological evolution from IoT sensors to generative AI. We uniquely frame this progression within a human-centric architecture that integrates digital twins of both the building (DT-B) and its occupants (DT-H), providing a forward-looking perspective on occupant comfort and energy management. We find that deep reinforcement learning (DRL) agents, often developed within physics-calibrated digital twins, reduce annual HVAC demand by 10–35% while maintaining an operative temperature within ±0.5 °C and CO2 below 800 ppm. These comfort and IAQ targets are consistent with ASHRAE Standard 55 (thermal environmental conditions) and ASHRAE Standard 62.1 (ventilation for acceptable indoor air quality); keeping the operative temperature within ±0.5 °C of the setpoint and indoor CO2 near or below ~800 ppm reflects commonly adopted control tolerances and per-person outdoor air supply objectives. Regarding energy impacts, simulation studies commonly report higher double-digit reductions, whereas real building deployments typically achieve single- to low-double-digit savings; we therefore report simulation and field results separately. Supervised learners, including gradient boosting and various neural networks, achieve 87–97% accuracy for short-term load, comfort, and fault forecasting. Furthermore, unsupervised models successfully mine large-scale telemetry for anomalies and occupancy patterns, enabling adaptive ventilation that can cut sick building complaints by 40%. 
Despite these gains, deployment is hindered by fragmented datasets, interoperability issues between legacy BAS and modern IoT devices, and the compute energy and privacy–security costs of large models. The key research priorities include (1) open, high-fidelity IEQ benchmarks; (2) energy-aware, on-device learning architectures; (3) privacy-preserving federated frameworks; and (4) hybrid, physics-informed models to win operator trust. Addressing these challenges is pivotal for scaling AI from isolated pilots to trustworthy, human-centric building ecosystems. Full article
(This article belongs to the Section Environmental Sensing)

21 pages, 39236 KB  
Article
Adaptive Image Deblurring Convolutional Neural Network with Meta-Tuning
by Quoc-Thien Ho, Minh-Thien Duong, Seongsoo Lee and Min-Cheol Hong
Sensors 2025, 25(16), 5211; https://doi.org/10.3390/s25165211 - 21 Aug 2025
Abstract
Motion blur is a complex phenomenon caused by the relative movement between an observed object and an imaging sensor during the exposure time, resulting in degradation in the image quality. Deep-learning-based methods, particularly convolutional neural networks (CNNs), have shown promise in motion deblurring. However, the small kernel sizes of CNNs limit their ability to achieve optimal performance. Moreover, supervised deep-learning-based deblurring methods often exhibit overfitting in their training datasets. Models trained on widely used synthetic blur datasets frequently fail to generalize in other blur domains in real-world scenarios and often produce undesired artifacts. To address these challenges, we propose the Spatial Feature Selection Network (SFSNet), which incorporates a Regional Feature Extractor (RFE) module to expand the receptive field and effectively select critical spatial features in order to improve the deblurring performance. In addition, we present the BlurMix dataset, which includes diverse blur types, as well as a meta-tuning strategy for effective blur domain adaptation. Our method enables the network to rapidly adapt to novel blur distributions with minimal additional training, and thereby improve generalization. The experimental results show that the meta-tuning variant of the SFSNet eliminates unwanted artifacts and significantly improves the deblurring performance across various blur domains. Full article

36 pages, 6877 KB  
Article
Machine Learning for Reservoir Quality Prediction in Chlorite-Bearing Sandstone Reservoirs
by Thomas E. Nichols, Richard H. Worden, James E. Houghton, Joshua Griffiths, Christian Brostrøm and Allard W. Martinius
Geosciences 2025, 15(8), 325; https://doi.org/10.3390/geosciences15080325 - 19 Aug 2025
Abstract
We have developed a generalisable machine learning framework for reservoir quality prediction in deeply buried clastic systems. Applied to the Lower Jurassic deltaic sandstones of the Tilje Formation (Halten Terrace, North Sea), the approach integrates sedimentological facies modelling with mineralogical and petrophysical prediction in a single workflow. Using supervised Extreme Gradient Boosting (XGBoost) models, we classify reservoir facies, predict permeability directly from standard wireline log parameters (gamma ray, neutron porosity, caliper, photoelectric effect, bulk density, compressional and shear sonic, and deep resistivity), and estimate the abundance of porosity-preserving grain-coating chlorite. Model development and evaluation employed stratified K-fold cross-validation to preserve facies proportions and mineralogical variability across folds, supporting robust performance assessment and testing generalisability across a geologically heterogeneous dataset. Core description, point-count petrography, and core plug analyses were used for ground truthing. The models distinguish chlorite-associated facies with up to 80% accuracy and estimate permeability with a mean absolute error of 0.782 log(mD), improving substantially on conventional regression-based approaches. The models also enable prediction, for the first time using wireline logs, of grain-coating chlorite abundance with a mean absolute error of 1.79% (range 0–16%). The framework takes advantage of diagnostic petrophysical responses associated with chlorite and high porosity, yielding geologically consistent and interpretable results. It addresses persistent challenges in characterising thinly bedded, heterogeneous intervals beyond the resolution of traditional methods and is transferable to other clastic reservoirs, including those considered for carbon storage and geothermal applications. 
The workflow supports cost-effective, high-confidence subsurface characterisation and contributes a flexible methodology for future work at the interface of geoscience and machine learning. Full article
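The stratified K-fold scheme this abstract describes — each fold preserving the overall facies proportions — can be sketched in plain Python. In practice one would likely use scikit-learn's StratifiedKFold; the round-robin dealing below is a simplified stand-in with hypothetical toy labels.

```python
from collections import defaultdict

def stratified_kfold(labels, k=3):
    """Split sample indices into k folds so that each fold keeps
    (approximately) the overall class proportions."""
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        # deal each class's samples round-robin across the folds
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)
    return folds

facies = ['chlorite'] * 6 + ['clean'] * 9  # toy facies labels
folds = stratified_kfold(facies, k=3)
```

Every fold ends up with the same 2:3 chlorite-to-clean ratio as the full dataset, so a rare facies is never absent from a validation fold.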

18 pages, 1012 KB  
Article
UNet-INSN: Self-Supervised Algorithm for Impulsive Noise Suppression in Power Line Communication
by Enguo Zhu, Yi Ren, Ran Li, Shuiqing Ouyang, Yang Ma, Ximin Yang and Guojin Liu
Appl. Sci. 2025, 15(16), 9101; https://doi.org/10.3390/app15169101 - 19 Aug 2025
Abstract
Impulsive noise suppression plays a crucial role in enhancing the reliability of power line communication (PLC). In view of the rapid advancement of deep learning methodologies, recently, studies on deep learning-based impulsive noise suppression have garnered extensive attention. Nevertheless, on one hand, the training of deep learning-based impulsive noise suppression models relies on a large amount of labeled data, whose acquisition incurs high costs. On the other hand, the currently proposed models struggle to adapt to the dynamic variations in impulsive noise distributions. To address these two issues, in this paper, a UNet-based self-supervised learning model for impulsive noise suppression (UNet-INSN) is proposed. Firstly, by using the designed global mask mapper, UNet-INSN can utilize the entire noisy signal for model training, resolving the information loss issue caused by partial signal masking in traditional mask-driven algorithms. Secondly, a reproducibility loss function is introduced to effectively prevent the model from degenerating into an identity mapping, thereby enhancing the denoising performance of UNet-INSN. Simulation results show that the required SNRs for the proposed algorithm to achieve a bit error rate of 10−6 under ideal and non-ideal conditions are 12 dB and 26 dB, respectively, significantly outperforming the comparison methods. Moreover, it still exhibits excellent robustness and generalization capabilities when the impulsive noise distribution changes dynamically. Full article
(This article belongs to the Special Issue Advanced Communication and Networking Technology for Smart Grid)

22 pages, 4350 KB  
Review
A Review of Artificial Intelligence Techniques in Fault Diagnosis of Electric Machines
by Christos Zachariades and Vigila Xavier
Sensors 2025, 25(16), 5128; https://doi.org/10.3390/s25165128 - 18 Aug 2025
Abstract
Rotating electrical machines are critical assets in industrial systems, where unexpected failures can lead to costly downtime and safety risks. This review presents a comprehensive and up-to-date analysis of artificial intelligence (AI) techniques for fault diagnosis in electric machines. It categorizes and evaluates supervised, unsupervised, deep learning, and hybrid/ensemble approaches in terms of diagnostic accuracy, adaptability, and implementation complexity. A comparative analysis highlights the strengths and limitations of each method, while emerging trends such as explainable AI, self-supervised learning, and digital twin integration are discussed as enablers of next-generation diagnostic systems. To support practical deployment, the article proposes a modular implementation framework and offers actionable recommendations for practitioners. This work serves as both a reference and a guide for researchers and engineers aiming to develop scalable, interpretable, and robust AI-driven fault diagnosis solutions for rotating electrical machines. Full article
(This article belongs to the Special Issue Sensors for Fault Diagnosis of Electric Machines)

16 pages, 1540 KB  
Article
Feature Selection Strategies for Deep Learning-Based Classification in Ultra-High-Dimensional Genomic Data
by Krzysztof Kotlarz, Dawid Słomian, Weronika Zawadzka and Joanna Szyda
Int. J. Mol. Sci. 2025, 26(16), 7961; https://doi.org/10.3390/ijms26167961 - 18 Aug 2025
Abstract
The advancement of high-throughput sequencing has revolutionised genomic research by generating large amounts of data. However, Whole-Genome Sequencing is associated with a statistical challenge known as the p >> n problem. We classified 1825 individuals into five breeds based on 11,915,233 SNPs. First, three feature selection algorithms were applied: SNP-tagging and two approaches based on supervised rank aggregation, followed by either one-dimensional (1D-SRA) or multidimensional (MD-SRA) feature clustering. Individuals were then classified into breeds using a deep learning classifier composed of Convolutional Neural Networks. SNPs selected by SNP-tagging yielded the least satisfactory F1-score (86.87%); however, this approach offered rapid computing time. The 1D-SRA was less suitable for ultra-high-dimensional data due to computational, memory, and storage limitations. However, the SNP set selected by this algorithm provided the best classification quality (96.81%). MD-SRA provided a good balance between classification quality (95.12%) and computational efficiency (17x lower analysis time, 14x lower data storage). Unlike SNP-tagging, SRA-based approaches are universal and are not limited to genomic data. This study addressed the demand for efficient computational and statistical tools for feature selection in high-dimensional genomic data. The results demonstrate that the proposed MD-SRA is suitable for the classification of high-dimensional data. Full article
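SNP-tagging — keeping one representative per block of highly correlated SNPs — can be sketched as a greedy correlation-pruning pass. The threshold, function names, and toy genotype vectors below are illustrative assumptions, not the authors' exact algorithm.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def tag_snps(genotypes, r2_max=0.8):
    """Greedily keep a SNP only if its r^2 with every already-kept SNP
    is below r2_max, i.e., one 'tag' per correlated block.
    genotypes: list of per-SNP vectors of 0/1/2 allele counts."""
    kept = []
    for j, g in enumerate(genotypes):
        if all(pearson(g, genotypes[i]) ** 2 < r2_max for i in kept):
            kept.append(j)
    return kept

snps = [
    [0, 1, 2, 1, 0, 2],
    [0, 1, 2, 1, 0, 2],  # exact copy of SNP 0 -> r^2 = 1 -> pruned
    [2, 1, 0, 1, 2, 0],  # perfectly anti-correlated  -> r^2 = 1 -> pruned
    [0, 0, 1, 2, 2, 1],  # uncorrelated with SNP 0    -> kept as a new tag
]
kept = tag_snps(snps)
```

The pruning reduces dimensionality before the classifier sees the data, which is why tagging is fast but can discard signal that rank-aggregation approaches retain.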
(This article belongs to the Section Molecular Genetics and Genomics)

21 pages, 3126 KB  
Article
CViT Weakly Supervised Network Fusing Dual-Branch Local-Global Features for Hyperspectral Image Classification
by Wentao Fu, Xiyan Sun, Xiuhua Zhang, Yuanfa Ji and Jiayuan Zhang
Entropy 2025, 27(8), 869; https://doi.org/10.3390/e27080869 - 15 Aug 2025
Abstract
In hyperspectral image (HSI) classification, feature learning and label accuracy play a crucial role. In actual hyperspectral scenes, however, noisy labels are unavoidable and seriously impact the performance of methods. While deep learning has achieved remarkable results in HSI classification tasks, its noise-resistant performance usually comes at the cost of feature representation capabilities. High-dimensional and deep convolution can capture rich deep semantic features, but with high complexity and resource consumption. To deal with these problems, we propose a CViT Weakly Supervised Network (CWSN) for HSI classification. Specifically, a lightweight 1D-2D two-branch network is used for local generalization and enhancement of spatial–spectral features. Then, the fusion and characterization of local and global features are achieved through the CNN-Vision Transformer (CViT) cascade strategy. The experimental results on four benchmark HSI datasets show that CWSN has good anti-noise ability and ensures the robustness and versatility of the network when facing both clean and noisy training sets. Compared to other methods, the CWSN achieves better classification accuracy. Full article
(This article belongs to the Section Signal and Data Analysis)

24 pages, 2709 KB  
Article
Unsupervised Person Re-Identification via Deep Attribute Learning
by Shun Zhang, Yaohui Xu, Xuebin Zhang, Boyang Cheng and Ke Wang
Future Internet 2025, 17(8), 371; https://doi.org/10.3390/fi17080371 - 15 Aug 2025
Abstract
Driven by growing public security demands and the advancement of intelligent surveillance systems, person re-identification (ReID) has emerged as a prominent research focus in the field of computer vision. However, this task presents challenges due to its high sensitivity to variations in visual appearance caused by factors such as body pose and camera parameters. Although deep learning-based methods have achieved marked progress in ReID, the high cost of annotation remains a challenge that cannot be overlooked. To address this, we propose an unsupervised attribute learning framework that eliminates the need for costly manual annotations while maintaining high accuracy. The framework learns mid-level human attributes (such as clothing type and gender) that are robust to substantial visual appearance variations and can hence boost attribute accuracy with only a small amount of labeled data. To implement our framework, we present a part-based convolutional neural network (CNN) architecture with two components: global-level image and body-attribute learning, and local-level upper- and lower-body image and attribute learning. The proposed architecture is trained to learn attribute-semantic and identity-discriminative feature representations simultaneously. For model learning, we first train our part-based network using a supervised approach on a labeled attribute dataset. Then, we apply an unsupervised clustering method to assign pseudo-labels to unlabeled images in a target dataset using our trained network. To improve feature compatibility, we introduce an attribute consistency scheme for unsupervised domain adaptation on this unlabeled target data. During training on the target dataset, we alternately perform three steps: extracting features with the updated model, assigning pseudo-labels to unlabeled images, and fine-tuning the model. 
Through a unified framework that fuses complementary attribute-label and identity label information, our approach achieves considerable improvements of 10.6% and 3.91% mAP on Market-1501→DukeMTMC-ReID and DukeMTMC-ReID→Market-1501 unsupervised domain adaptation tasks, respectively. Full article
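The alternating loop described above (extract features, assign pseudo-labels, fine-tune) can be caricatured with nearest-centroid pseudo-labeling. Re-estimating centroids stands in for fine-tuning the network, and all names and toy data are hypothetical.

```python
def assign_pseudo_labels(features, centroids):
    """Give each unlabeled sample the label of its nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)), key=lambda c: dist(f, centroids[c]))
            for f in features]

def update_centroids(features, labels, k):
    """Stand-in for 'fine-tune the model': re-estimate one centroid per
    pseudo-class from the samples currently assigned to it."""
    cents = []
    for c in range(k):
        members = [f for f, l in zip(features, labels) if l == c]
        cents.append(tuple(sum(col) / len(members) for col in zip(*members)))
    return cents

features = [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.9)]  # toy embeddings
centroids = [(0.0, 0.0), (1.2, 1.2)]     # from the source-trained model
for _ in range(3):                       # alternate: pseudo-label -> refit
    labels = assign_pseudo_labels(features, centroids)
    centroids = update_centroids(features, labels, k=2)
```

Each pass tightens the centroids around the samples they attract, which is why the pseudo-labels stabilize after a few alternations on well-separated data.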
(This article belongs to the Special Issue Advances in Deep Learning and Next-Generation Internet Technologies)

21 pages, 4714 KB  
Article
Automatic Scribble Annotations Based Semantic Segmentation Model for Seedling-Stage Maize Images
by Zhaoyang Li, Xin Liu, Hanbing Deng, Yuncheng Zhou and Teng Miao
Agronomy 2025, 15(8), 1972; https://doi.org/10.3390/agronomy15081972 - 15 Aug 2025
Abstract
Canopy coverage is a key indicator for assessing maize growth and predicting production during the seedling stage. Researchers usually use deep learning methods to estimate canopy coverage from maize images, but fully supervised models usually need pixel-level annotations, which require substantial manual labor. To overcome this problem, we propose ASLNet (Automatic Scribble Labeling-based Semantic Segmentation Network), a weakly supervised model for image semantic segmentation. We designed a module that self-generates scribble labels for maize plants in an image. Accordingly, ASLNet was constructed using a collaborative mechanism composed of scribble label generation, pseudo-label guided training, and double-loss joint optimization; its cross-scale contrastive regularization realizes semantic segmentation without manual labels. We evaluated the model for label quality and segmentation accuracy. The results showed that ASLNet generated high-quality scribble labels with stable segmentation performance across different scribble densities. Compared to Scribble4All, ASLNet improved mIoU by 3.15% and outperformed fully and weakly supervised models by 6.6% and 15.28% in segmentation accuracy, respectively. Our work demonstrated that ASLNet could be trained by pseudo-labels and offered a cost-effective approach for canopy coverage estimation at maize's seedling stage. This research enables the early acquisition of maize growth conditions and the prediction of maize yield. Full article
(This article belongs to the Section Precision and Digital Agriculture)

29 pages, 6246 KB  
Article
DASeg: A Domain-Adaptive Segmentation Pipeline Using Vision Foundation Models—Earthquake Damage Detection Use Case
by Huili Huang, Andrew Zhang, Danrong Zhang, Max Mahdi Roozbahani and James David Frost
Remote Sens. 2025, 17(16), 2812; https://doi.org/10.3390/rs17162812 - 14 Aug 2025
Abstract
Limited labeled imagery and tight response windows hinder accurate damage quantification for post-disaster assessment. The objective of this study is to develop and evaluate a deep learning-based Domain-Adaptive Segmentation (DASeg) workflow to detect post-disaster damage using the limited information available shortly after an event. DASeg unifies three Vision Foundation Models in an automatic workflow: fine-tuned DINOv2 supplies attention-based point prompts, fine-tuned Grounding DINO yields open-set box prompts, and a frozen Segment Anything Model (SAM) generates the final masks. In the earthquake-focused case study DASeg-Quake, the pipeline boosts mean Intersection over Union (mIoU) by 9.52% over prior work and 2.10% over state-of-the-art supervised baselines. In a zero-shot setting, DASeg-Quake achieves an mIoU of 75.03% for geo-damage analysis, closely matching expert-level annotations. These results show that DASeg delivers superior infrastructure damage segmentation without needing pixel-level annotation, providing a practical solution for early-stage disaster response. Full article

45 pages, 59922 KB  
Article
Machine Learning Applied to Professional Football: Performance Improvement and Results Prediction
by Diego Moya, Christian Tipantuña, Génesis Villa, Xavier Calderón-Hinojosa, Belén Rivadeneira and Robin Álvarez
Mach. Learn. Knowl. Extr. 2025, 7(3), 85; https://doi.org/10.3390/make7030085 - 14 Aug 2025
Abstract
This paper examines the integration of machine learning (ML) techniques in professional football, focusing on two key areas: (i) player and team performance, and (ii) match outcome prediction. Using a systematic methodology, this study reviews 172 papers from a five-year observation period (2019–2024) to identify relevant applications, focusing on the analysis of game actions (free kicks, passes, and penalties), individual and collective performance, and player position. A predominance of supervised learning, deep learning, and hybrid models (which integrate several ML techniques) is observed in the ML categories. Among the most widely used algorithms are decision trees, extreme gradient boosting, and artificial neural networks, which focus on optimizing sports performance and predicting outcomes. This paper discusses challenges such as the limited availability of public datasets due to access and cost restrictions, the restricted use of advanced visualization tools, and the poor integration of data acquisition devices, such as sensors. However, it also highlights the role of ML in addressing these challenges, thereby representing future research opportunities. Furthermore, this paper includes two illustrative case studies: (i) predicting the date Cristiano Ronaldo will reach 1000 goals, and (ii) an example of predicting penalty shots; these examples demonstrate the practical potential of ML for performance monitoring and tactical decision-making in real-world football environments. Full article

21 pages, 5025 KB  
Article
Cascaded Self-Supervision to Advance Cardiac MRI Segmentation in Low-Data Regimes
by Martin Urschler, Elisabeth Rechberger, Franz Thaler and Darko Štern
Bioengineering 2025, 12(8), 872; https://doi.org/10.3390/bioengineering12080872 - 12 Aug 2025
Abstract
Deep learning has shown remarkable success in medical image analysis over the last decade; however, many contributions have focused on supervised methods that learn exclusively from labeled training samples. Acquiring expert-level annotations in large quantities is time-consuming and costly, all the more so in medical image segmentation, where annotations are required at the pixel level and often in 3D. As a result, available labeled training data, and consequently performance, are often limited. Frequently, however, additional unlabeled data are available and can be readily integrated into model training, paving the way for semi- or self-supervised learning (SSL). In this work, we investigate popular SSL strategies in more detail, namely Transformation Consistency, Student–Teacher and Pseudo-Labeling, as well as exhaustive combinations thereof. We comprehensively evaluate these methods on two cardiac Magnetic Resonance datasets, one 2D and one 3D (ACDC, MMWHS), for which several multi-compartment segmentation labels are available. To assess performance in limited-dataset scenarios, setups with a decreasing number of patients in the labeled dataset are investigated. We identify cascaded Self-Supervision as the best methodology, where we propose to employ Pseudo-Labeling and a self-supervised cascaded Student–Teacher model simultaneously. Our evaluation shows that in all scenarios, all investigated SSL methods outperform both the respective low-data supervised baseline and state-of-the-art self-supervised approaches. This is most prominent in the very-low-labeled-data regime, where our proposed method achieves a 10.17% and 6.72% improvement in Dice Similarity Coefficient (DSC) for ACDC and MMWHS, respectively, compared with the low-data supervised approach, as well as 2.47% and 7.64% DSC improvements, respectively, compared with related work.
Moreover, in most experiments, our proposed method greatly decreases the performance gap to the fully supervised scenario, where all available labeled samples are used. We conclude that it is always beneficial to incorporate unlabeled data in cardiac MRI segmentation whenever it is present. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
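The abstract describes combining Pseudo-Labeling with a Student–Teacher model. As a hedged, minimal sketch of those two generic SSL ingredients (the function names, the scalar weight lists, and the confidence threshold are illustrative stand-ins, not the authors' cascaded architecture):

```python
# (a) Student-Teacher: the teacher's weights track the student's via an
#     exponential moving average (EMA), a common choice in SSL.
def ema_update(teacher_w, student_w, alpha=0.99):
    """Return updated teacher weights: alpha * teacher + (1 - alpha) * student."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher_w, student_w)]

# (b) Pseudo-Labeling: the current model labels unlabeled samples, and only
#     confident predictions are kept as training targets.
def pseudo_label(predict_fn, unlabeled, threshold=0.9):
    """predict_fn returns P(class 1); keep samples labeled only when confident."""
    labeled = []
    for x in unlabeled:
        prob = predict_fn(x)
        if prob >= threshold:
            labeled.append((x, 1))
        elif prob <= 1 - threshold:
            labeled.append((x, 0))
    return labeled
```

In a cascaded setup, pseudo-labels and EMA teacher targets would supervise the student jointly each iteration; the sketch shows only the two mechanisms in isolation.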

43 pages, 5258 KB  
Article
Twin Self-Supervised Learning Framework for Glaucoma Diagnosis Using Fundus Images
by Suguna Gnanaprakasam and Rolant Gini John Barnabas
Appl. Syst. Innov. 2025, 8(4), 111; https://doi.org/10.3390/asi8040111 - 11 Aug 2025
Abstract
Glaucoma is a serious eye condition that damages the optic nerve and impairs the transmission of visual information to the brain; it is the second leading cause of blindness worldwide. With deep learning, CAD systems have shown promising results in diagnosing glaucoma, but they mostly rely on small labeled datasets. Annotated fundus image datasets improve deep learning predictions by aiding pattern identification but require extensive curation; unlabeled fundus images, in contrast, are far more accessible. The proposed method employs a semi-supervised learning approach to use both labeled and unlabeled data effectively. It follows traditional supervised training with the generation of pseudo-labels for unlabeled data and incorporates self-supervised techniques that eliminate the need for manual annotation. Specifically, it uses a twin self-supervised learning approach that improves glaucoma diagnosis by feeding the pseudo-labels produced by one model into a second self-supervised model for effective detection. In the first stage, a self-supervised patch-based exemplar CNN generates pseudo-labels. In the second stage, these pseudo-labeled data, combined with labeled data, train a convolutional auto-encoder classification model to identify glaucoma features. A support vector machine classifier performs the final glaucoma classification, achieving 98% accuracy and a 0.98 AUC on the internal, same-source combined fundus image datasets. The model also generalizes reasonably well to external (fully unseen) data, achieving an AUC of 0.91 on the CRFO dataset and 0.87 on the Papilla dataset. These results demonstrate the method's effectiveness, robustness, and adaptability in addressing limited labeled fundus data, supporting improved patient health. Full article
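The two-stage "twin" pipeline described above — one self-supervised model generates pseudo-labels, and a second model trains on the labeled and pseudo-labeled data combined — can be sketched as a skeleton. Everything here is a placeholder (threshold model, majority-class trainer), standing in for the paper's exemplar CNN, auto-encoder, and SVM:

```python
# Skeleton of a two-stage twin pipeline with placeholder callables.

def stage_one(model_a, unlabeled):
    """Stage 1: self-supervised model A assigns pseudo-labels to unlabeled images."""
    return [(x, model_a(x)) for x in unlabeled]

def stage_two(train_fn, labeled, pseudo_labeled):
    """Stage 2: model B learns from both label sources pooled together."""
    return train_fn(labeled + pseudo_labeled)

# Toy usage: "model A" thresholds a single scalar feature; the trainer
# returns a majority-class predictor.
model_a = lambda x: 1 if x > 0.5 else 0
labeled = [(0.9, 1), (0.1, 0)]
pseudo = stage_one(model_a, [0.8, 0.2])

def train_majority(data):
    ones = sum(y for _, y in data)
    majority = 1 if ones * 2 >= len(data) else 0
    return lambda x: majority

clf = stage_two(train_majority, labeled, pseudo)
```

The design point the skeleton captures is the decoupling: stage two never needs to know which labels were human-annotated and which were pseudo-labels from stage one.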
