Search Results (19,527)

Search Parameters:
Keywords = deep neural-network

22 pages, 3358 KB  
Article
MultiScaleSleepNet: A Hybrid CNN–BiLSTM–Transformer Architecture with Multi-Scale Feature Representation for Single-Channel EEG Sleep Stage Classification
by Cenyu Liu, Qinglin Guan, Wei Zhang, Liyang Sun, Mengyi Wang, Xue Dong and Shuogui Xu
Sensors 2025, 25(20), 6328; https://doi.org/10.3390/s25206328 (registering DOI) - 13 Oct 2025
Abstract
Accurate automatic sleep stage classification from single-channel EEG remains challenging due to the need for effective extraction of multiscale neurophysiological features and modeling of long-range temporal dependencies. This study aims to address these limitations by developing an efficient and compact deep learning architecture tailored for wearable and edge device applications. We propose MultiScaleSleepNet, a hybrid convolutional neural network–bidirectional long short-term memory–transformer architecture that extracts multiscale temporal and spectral features through parallel convolutional branches, followed by sequential modeling using a BiLSTM memory network and transformer-based attention mechanisms. The model obtained an accuracy, macro-averaged F1 score, and kappa coefficient of 88.6%, 0.833, and 0.84 on the Sleep-EDF dataset; 85.6%, 0.811, and 0.80 on the Sleep-EDF Expanded dataset; and 84.6%, 0.745, and 0.79 on the SHHS dataset. Ablation studies indicate that attention mechanisms and spectral fusion consistently improve performance, with the most notable gains observed for stages N1, N3, and rapid eye movement. MultiScaleSleepNet demonstrates competitive performance across multiple benchmark datasets while maintaining a compact size of 1.9 million parameters, suggesting robustness to variations in dataset size and class distribution. The study supports the feasibility of real-time, accurate sleep staging from single-channel EEG using parameter-efficient deep models suitable for portable systems. Full article
(This article belongs to the Special Issue AI on Biomedical Signal Sensing and Processing for Health Monitoring)
19 pages, 4437 KB  
Article
Research on Preventive Maintenance Technology for Highway Cracks Based on Digital Image Processing
by Zhi Chen, Zhuozhuo Bai, Xinqi Chen and Jiuzeng Wang
Electronics 2025, 14(20), 4017; https://doi.org/10.3390/electronics14204017 (registering DOI) - 13 Oct 2025
Abstract
Cracks are the earliest manifestation of many forms of pavement distress on highways. Preventive maintenance of cracks can slow the progression of pavement damage and effectively extend the service life of highways. However, existing crack detection methods perform poorly on small cracks and cannot calculate crack width, leading to unsatisfactory preventive maintenance results. This article proposes an integrated method for crack detection, segmentation, and width calculation based on digital image processing technology. First, an optimized crack detection network called CFSSE, based on a convolutional neural network, is proposed by fusing the fast spatial pyramid pooling structure with the squeeze-and-excitation attention mechanism; it achieves an average detection accuracy of 97.10%, an average recall rate of 98.00%, and an average detection precision (at a 0.5 threshold) of 98.90%, outperforming the YOLOv5-mobileone and YOLOv5-s networks. Second, an optimized crack segmentation network called CBU_Net, based on the U-Net network, is proposed by using the CNN-block structure in the encoder module and a bicubic interpolation algorithm in the decoder module; it achieves an average segmentation accuracy of 99.10%, an average intersection over union of 88.62%, and an average pixel accuracy of 93.56%, outperforming the U-Net, DeepLab v3+, and optimized DeepLab v3 networks. Finally, a laser spot center positioning method based on information entropy combination is proposed to provide an accurate benchmark for crack width calculation based on parallel lasers, with an average crack width calculation error of less than 2.56%. Full article
14 pages, 3067 KB  
Article
The Phenomenon of Temperature Increase in Poland: A Machine Learning Approach to Understanding Patterns and Projections
by Anna Franczyk and Robert Twardosz
Appl. Sci. 2025, 15(20), 10994; https://doi.org/10.3390/app152010994 (registering DOI) - 13 Oct 2025
Abstract
This study presents an analysis of patterns in mean monthly air temperature increases in Poland using the deep learning model Neural Basis Expansion Analysis for Time Series (N-BEATS) algorithm. The dataset comprises mean monthly temperatures recorded between 1951 and 2024 at eight meteorological stations across Poland. The research was conducted in two phases. In the first phase, the 74-year period was divided into two distinct intervals: one characterized by relative temperature stability, and the other by a marked upward trend. In the second phase, the N-BEATS neural network was employed to extract temporal patterns directly from the data and to forecast future temperature values. The results confirm the capacity of machine learning methods to identify persistent climate trends and demonstrate their utility for long-term monitoring and prediction. Full article
(This article belongs to the Section Environmental Sciences)
23 pages, 2839 KB  
Article
Risk Prediction of Shipborne Aircraft Landing Based on Deep Learning
by Hao Nian, Xiuquan Deng, Zhipeng Bai and Xingjie Wu
Aerospace 2025, 12(10), 922; https://doi.org/10.3390/aerospace12100922 (registering DOI) - 13 Oct 2025
Abstract
Shipborne fighters play a critical role in far-sea operations. However, their landing process on aircraft carrier decks involves significant risks, where accidents can lead to substantial losses. Timely and accurate risk prediction is, therefore, essential for improving flight training efficiency and enhancing the combat capability of naval aviation forces. Machine learning algorithms have been explored for predicting landing risks in land-based aircraft, but owing to the challenges of acquiring relevant data, their application to shipborne aircraft remains limited. To address this gap, the present study proposes a deep learning-based method for predicting the landing risks of shipborne aircraft. A dataset was constructed using simulated ship movements recorded during the sliding phase along with relevant flight parameters. Model training and prediction were conducted using up to ten different input combinations with artificial neural networks, long short-term memory networks, and transformer neural networks. Experimental results demonstrate that all three models can effectively predict landing parameters, with the lowest average test error reaching 3.5620. The study offers a comprehensive comparison of traditional machine learning and deep learning methods, providing practical insights into input variable selection and model performance evaluation. Although deep learning models, particularly the Transformer, achieved the highest accuracy, hardware requirements must still be fully considered in practical applications. Full article
(This article belongs to the Section Aeronautics)
21 pages, 2783 KB  
Article
Deep Learning-Based Eye-Writing Recognition with Improved Preprocessing and Data Augmentation Techniques
by Kota Suzuki, Abu Saleh Musa Miah and Jungpil Shin
Sensors 2025, 25(20), 6325; https://doi.org/10.3390/s25206325 (registering DOI) - 13 Oct 2025
Abstract
Eye-tracking technology enables communication for individuals with muscle control difficulties, making it a valuable assistive tool. Traditional systems rely on electrooculography (EOG) or infrared devices, which are accurate but costly and invasive. While vision-based systems offer a more accessible alternative, they have not been extensively explored for eye-writing recognition. Additionally, the natural instability of eye movements and variations in writing styles result in inconsistent signal lengths, which reduces recognition accuracy and limits the practical use of eye-writing systems. To address these challenges, we propose a novel vision-based eye-writing recognition approach that utilizes a webcam-captured dataset. A key contribution of our approach is the introduction of a Discrete Fourier Transform (DFT)-based length normalization method that standardizes the length of each eye-writing sample while preserving essential spectral characteristics. This ensures uniformity in input lengths and improves both efficiency and robustness. Moreover, we integrate a hybrid deep learning model that combines 1D Convolutional Neural Networks (CNN) and Temporal Convolutional Networks (TCN) to jointly capture spatial and temporal features of eye-writing. To further improve model robustness, we incorporate data augmentation and initial-point normalization techniques. The proposed system was evaluated using our new webcam-captured Arabic numbers dataset and two existing benchmark datasets, with leave-one-subject-out (LOSO) cross-validation. The model achieved accuracies of 97.68% on the new dataset, 94.48% on the Japanese Katakana dataset, and 98.70% on the EOG-captured Arabic numbers dataset—outperforming existing systems. This work provides an efficient eye-writing recognition system, featuring robust preprocessing techniques, a hybrid deep learning model, and a new webcam-captured dataset. Full article
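The DFT-based length normalization described above can be sketched in a few lines: transform each variable-length gaze trajectory, truncate or zero-pad the spectrum to a fixed number of bins, and invert. This is a minimal frequency-domain resampling sketch, not the authors' exact code; the target length and amplitude-scaling convention are assumptions.

```python
import numpy as np

def dft_length_normalize(x, target_len):
    """Resample a 1-D signal to a fixed length in the frequency domain,
    preserving its low-frequency (shape) content."""
    spec = np.fft.rfft(x)
    n_bins = target_len // 2 + 1
    out = np.zeros(n_bins, dtype=complex)
    keep = min(len(spec), n_bins)
    out[:keep] = spec[:keep]          # keep low-frequency structure
    y = np.fft.irfft(out, n=target_len)
    return y * (target_len / len(x))  # compensate for the length change

short = np.sin(np.linspace(0, 4 * np.pi, 80))
long_ = np.sin(np.linspace(0, 4 * np.pi, 250))
print(dft_length_normalize(short, 128).shape)  # (128,)
print(dft_length_normalize(long_, 128).shape)  # (128,)
```

Uniform input lengths are what allow a fixed-size CNN/TCN front end to consume every sample without cropping or naive padding.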
19 pages, 1801 KB  
Article
Enhancing Lemon Leaf Disease Detection: A Hybrid Approach Combining Deep Learning Feature Extraction and mRMR-Optimized SVM Classification
by Ahmet Saygılı
Appl. Sci. 2025, 15(20), 10988; https://doi.org/10.3390/app152010988 (registering DOI) - 13 Oct 2025
Abstract
This study presents a robust and extensible hybrid classification framework for accurately detecting diseases in citrus leaves by integrating transfer learning-based deep learning models with classical machine learning techniques. Features were extracted using advanced pretrained architectures—DenseNet201, ResNet50, MobileNetV2, and EfficientNet-B0—and refined via the minimum redundancy maximum relevance (mRMR) method to reduce redundancy while maximizing discriminative power. These features were classified using support vector machines (SVMs), ensemble bagged trees, k-nearest neighbors (kNNs), and neural networks under stratified 10-fold cross-validation. On the lemon dataset, the best configuration (DenseNet201 + SVM) achieved 94.1 ± 4.9% accuracy, 93.2 ± 5.7% F1 score, and a balanced accuracy of 93.4 ± 6.0%, demonstrating strong and stable performance. To assess external generalization, the same pipeline was applied to mango and pomegranate leaves, achieving 100.0 ± 0.0% and 98.7 ± 1.5% accuracy, respectively—confirming the model’s robustness across citrus and non-citrus domains. Beyond accuracy, lightweight models such as EfficientNet-B0 and MobileNetV2 provided significantly higher throughput and lower latency, underscoring their suitability for real-time agricultural applications. These findings highlight the importance of combining deep representations with efficient classical classifiers for precision agriculture, offering both high diagnostic accuracy and practical deployability in field conditions. Full article
(This article belongs to the Topic Digital Agriculture, Smart Farming and Crop Monitoring)
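The mRMR step above greedily picks features that are relevant to the label but not redundant with features already chosen. The sketch below uses absolute Pearson correlation as a stand-in for mutual information (an assumption; mRMR is usually formulated with MI) and is purely illustrative of the selection logic.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy mRMR: maximize relevance to y minus mean redundancy with
    already-selected features (correlation as an MI proxy)."""
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            if rel[j] - red > best_score:
                best, best_score = j, rel[j] - red
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))
y = X[:, 0] + 0.1 * rng.standard_normal(200)          # feature 0 drives y
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(200)   # redundant near-copy
sel = mrmr_select(X, y, 3)
print(sel[0])  # the informative feature (0) or its near-copy (1) comes first
```

The selected indices would then feed a downstream classifier such as an SVM, as in the paper's pipeline.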
32 pages, 6841 KB  
Article
Integration of UAV and Remote Sensing Data for Early Diagnosis and Severity Mapping of Diseases in Maize Crop Through Deep Learning and Reinforcement Learning
by Jerry Gao, Krinal Gujarati, Meghana Hegde, Padmini Arra, Sejal Gupta and Neeraja Buch
Remote Sens. 2025, 17(20), 3427; https://doi.org/10.3390/rs17203427 (registering DOI) - 13 Oct 2025
Abstract
Accurate and timely prediction of diseases in water-intensive crops is critical for sustainable agriculture and food security. AI-based crop disease management tools are essential for an optimized approach, as they offer significant potential for enhancing yield and sustainability. This study centers on maize, training deep learning models on UAV imagery and satellite remote-sensing data to detect and predict disease. Multiple convolutional neural networks, including ResNet-50 and DenseNet-121, are evaluated on their ability to classify maize diseases such as Northern Leaf Blight, Gray Leaf Spot, Common Rust, and Blight using UAV drone data. Remotely sensed MODIS satellite data were used to generate spatial severity maps over a uniform grid through time-series modeling. Furthermore, reinforcement learning techniques were used to identify hotspots and prioritize the next locations for inspection by analyzing spatial and temporal patterns, identifying critical factors that affect disease progression, and enabling better decision-making. The integrated pipeline automates data ingestion and delivers farm-level condition views without manual uploads. The combination of multiple remotely sensed data sources leads to an efficient and scalable solution for early disease detection. Full article
14 pages, 1932 KB  
Article
Development and Validation of Transformer- and Convolutional Neural Network-Based Deep Learning Models to Predict Curve Progression in Adolescent Idiopathic Scoliosis
by Shinji Takahashi, Shota Ichikawa, Kei Watanabe, Haruki Ueda, Hideyuki Arima, Yu Yamato, Takumi Takeuchi, Naobumi Hosogane, Masashi Okamoto, Manami Umezu, Hiroki Oba, Yohan Kondo and Shoji Seki
J. Clin. Med. 2025, 14(20), 7216; https://doi.org/10.3390/jcm14207216 (registering DOI) - 13 Oct 2025
Abstract
Background/Objectives: The clinical management of adolescent idiopathic scoliosis (AIS) is hindered by the inability to accurately predict curve progression. Although skeletal maturity and the initial Cobb angle are established predictors of progression, their combined predictive accuracy remains limited. This study aimed to develop a robust and interpretable artificial intelligence (AI) system using deep learning (DL) models to predict the progression of scoliosis using only standing frontal radiographs. Methods: We conducted a multicenter study involving 542 patients with AIS. After excluding 52 borderline progression cases (6–9° progression in the Cobb angle), 294 and 196 patients were assigned to progression (≥10° increase) and non-progression (≤5° increase) groups, respectively, over a 2-year follow-up. Frontal whole-spine radiographs were preprocessed using histogram equalization and divided into two regions of interest (ROIs) (ROI 1, skull base–femoral head; ROI 2, C7–iliac crest). Six pretrained DL models, including convolutional neural networks (CNNs) and transformer-based models, were trained on the radiograph images. Gradient-weighted class activation mapping (Grad-CAM) was further performed for model interpretation. Results: Ensemble models outperformed individual ones, with the average ensemble model achieving area under the curve (AUC) values of 0.769 for ROI 1 and 0.755 for ROI 2. Grad-CAM revealed that the CNNs tended to focus on the local curve apex, whereas the transformer-based models demonstrated global attention across the spine, ribs, and pelvis. Models trained on ROI 2 performed comparably to those trained on ROI 1, supporting the feasibility of image standardization without a loss of accuracy. Conclusions: This study establishes the clinical potential of transformer-based DL models for predicting the progression of scoliosis using only plain radiographs. Our multicenter approach, high AUC values, and interpretable architectures support the integration of AI into clinical decision-making for the early treatment of AIS. Full article
(This article belongs to the Special Issue Clinical New Insights into Management of Scoliosis)
31 pages, 1305 KB  
Review
Artificial Intelligence in Cardiac Electrophysiology: A Clinically Oriented Review with Engineering Primers
by Giovanni Canino, Assunta Di Costanzo, Nadia Salerno, Isabella Leo, Mario Cannataro, Pietro Hiram Guzzi, Pierangelo Veltri, Sabato Sorrentino, Salvatore De Rosa and Daniele Torella
Bioengineering 2025, 12(10), 1102; https://doi.org/10.3390/bioengineering12101102 - 13 Oct 2025
Abstract
Artificial intelligence (AI) is transforming cardiac electrophysiology across the entire care pathway, from arrhythmia detection on 12-lead electrocardiograms (ECGs) and wearables to the guidance of catheter ablation procedures, through to outcome prediction and therapeutic personalization. End-to-end deep learning (DL) models have achieved cardiologist-level performance in rhythm classification and prognostic estimation on standard ECGs, with a reported arrhythmia classification accuracy of ≥95% and an atrial fibrillation detection sensitivity/specificity of ≥96%. The application of AI to wearable devices enables population-scale screening and digital triage pathways. In the electrophysiology (EP) laboratory, AI standardizes the interpretation of intracardiac electrograms (EGMs) and supports target selection, and machine learning (ML)-guided strategies have improved ablation outcomes. In patients with cardiac implantable electronic devices (CIEDs), remote monitoring feeds multiparametric models capable of anticipating heart-failure decompensation and arrhythmic risk. This review outlines the principal modeling paradigms of supervised learning (regression models, support vector machines, neural networks, and random forests) and unsupervised learning (clustering, dimensionality reduction, association rule learning) and examines emerging technologies in electrophysiology (digital twins, physics-informed neural networks, DL for imaging, graph neural networks, and on-device AI). However, major challenges remain for clinical translation, including an external validation rate below 30% and workflow integration below 20%, which represent core obstacles to real-world adoption. A joint clinical engineering roadmap is essential to translate prototypes into reliable, bedside tools. Full article
(This article belongs to the Special Issue Mathematical Models for Medical Diagnosis and Testing)
20 pages, 1250 KB  
Article
Symmetric 3D Convolutional Network with Uncertainty Estimation for MRI-Based Striatal DaT-Uptake Assessment in Parkinson’s Disease
by Walid Abdullah Al, Il Dong Yun and Yun Jung Bae
Appl. Sci. 2025, 15(20), 10977; https://doi.org/10.3390/app152010977 - 13 Oct 2025
Abstract
Dopamine transporter (DaT) imaging is commonly used for monitoring Parkinson’s disease (PD), where the amount of striatal DaT uptake serves as the PD severity indicator. MRI of the nigral region has recently emerged as a safer and more available alternative. This work introduces a 3D convolutional network-based symmetric regressor for predicting the DaT-uptake amount from nigral MRI patches. Unlike typical deep networks, the proposed model leverages the lateral symmetry between the right and left nigrae by incorporating a paired input–output architecture that concurrently predicts DaT uptakes for both the right and left striata, while employing a symmetric loss that constrains the difference between right-to-left predictions. To improve model reliability, we also propose a symmetric Monte Carlo dropout strategy for providing informative uncertainty estimates about the prediction. Evaluated on 734 3D nigral patches, our symmetric regressor demonstrated a 12.11% improvement in prediction error compared to standard deep-learning models. Furthermore, the reliability was enhanced, resulting in a 5% reduction in the prediction uncertainty interval at a 95% coverage probability for the true DaT-uptake amount. Our findings demonstrate that integrating structural symmetry into model design is a powerful strategy for achieving accurate and reliable predictions for PD severity analysis. Full article
(This article belongs to the Section Biomedical Engineering)
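The symmetric-loss idea above — penalizing the right-left prediction gap alongside the usual regression error — can be written compactly. The exact penalty form and weight below are assumptions for illustration, not the paper's loss.

```python
import numpy as np

def symmetric_loss(pred_r, pred_l, true_r, true_l, lam=0.5):
    """MSE on each side, plus a penalty pulling the predicted right-left
    gap toward the true right-left gap (the symmetry constraint)."""
    mse = np.mean((pred_r - true_r) ** 2) + np.mean((pred_l - true_l) ** 2)
    sym = np.mean(((pred_r - pred_l) - (true_r - true_l)) ** 2)
    return mse + lam * sym

pr = np.array([2.0, 3.0]); pl = np.array([2.1, 2.9])
tr = np.array([2.0, 3.0]); tl = np.array([2.1, 2.9])
print(symmetric_loss(pr, pl, tr, tl))  # 0.0 — exact, symmetry-consistent predictions
```

In training, `lam` would trade off per-side accuracy against left-right consistency.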
23 pages, 3359 KB  
Article
Capsule Neural Networks with Bayesian Optimization for Pediatric Pneumonia Detection from Chest X-Ray Images
by Szymon Salamon and Wojciech Książek
J. Clin. Med. 2025, 14(20), 7212; https://doi.org/10.3390/jcm14207212 (registering DOI) - 13 Oct 2025
Abstract
Background: Pneumonia in children poses a serious threat to life and health, making early detection critically important. In this regard, artificial intelligence methods can provide valuable support. Methods: Capsule networks and Bayesian optimization are modern techniques that were employed to build effective models for predicting pneumonia from chest X-ray images. The medical images underwent essential preprocessing, were divided into training, validation, and testing sets, and were subsequently used to develop the models. Results: The designed capsule neural network model with Bayesian optimization achieved the following final results: an accuracy of 95.1%, sensitivity of 98.9%, specificity of 85.4%, precision (PPV) of 94.8%, negative predictive value (NPV) of 96.2%, F1-score of 96.8%, and a Matthews correlation coefficient (MCC) of 0.877. In addition, the model was complemented with an explainability analysis using Grad-CAM, which demonstrated that its predictions rely predominantly on clinically relevant pulmonary regions. Conclusions: The proposed model demonstrates high accuracy and shows promise for potential use in clinical practice. It may also be applied to other tasks in medical image analysis. Full article
(This article belongs to the Special Issue Artificial Intelligence and Deep Learning in Medical Imaging)
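The battery of metrics this abstract reports (sensitivity, specificity, PPV, NPV, F1, MCC) all derive from one 2x2 confusion matrix. A minimal reference implementation, with a hypothetical confusion matrix rather than the paper's actual counts:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)            # sensitivity (recall)
    spec = tn / (tn + fp)            # specificity
    ppv = tp / (tp + fp)             # precision / positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    f1 = 2 * ppv * sens / (ppv + sens)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, ppv, npv, f1, mcc

# Hypothetical test-set counts, not the study's data:
print(binary_metrics(tp=350, fp=20, tn=120, fn=10))
```

MCC is often preferred alongside accuracy on imbalanced medical datasets because it only rewards models that do well on both classes.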
22 pages, 7434 KB  
Article
A Lightweight Image-Based Decision Support Model for Marine Cylinder Lubrication Based on CNN-ViT Fusion
by Qiuyu Li, Guichen Zhang and Enrui Zhao
J. Mar. Sci. Eng. 2025, 13(10), 1956; https://doi.org/10.3390/jmse13101956 - 13 Oct 2025
Abstract
Under the context of “Energy Conservation and Emission Reduction,” low-sulfur fuel has become widely adopted in maritime operations, posing significant challenges to cylinder lubrication systems. Traditional oil injection strategies, heavily reliant on manual experience, suffer from instability and high costs. To address this, a lightweight image retrieval model for cylinder lubrication is proposed, leveraging deep learning and computer vision to support oiling decisions based on visual features. The model comprises three components: a backbone network, a feature enhancement module, and a similarity retrieval module. Specifically, EfficientNetB0 serves as the backbone for efficient feature extraction under low computational overhead. MobileViT Blocks are integrated to combine the local feature perception of Convolutional Neural Networks (CNNs) with the global modeling capacity of Transformers. To further improve the receptive field and multi-scale representation, Receptive Field Blocks (RFB) are introduced between the components. Additionally, the Convolutional Block Attention Module (CBAM) attention mechanism enhances focus on salient regions, improving feature discrimination. A high-quality image dataset was constructed using WINNING’s large bulk carriers under various sea conditions. The experimental results demonstrate that the EfficientNetB0 + RFB + MobileViT + CBAM model achieves excellent performance with minimal computational cost: 99.71% Precision, 99.69% Recall, and 99.70% F1-score—improvements of 11.81%, 15.36%, and 13.62%, respectively, over the baseline EfficientNetB0. At a cost of only 0.3 additional GFLOPs and an 8.3 MB increase in model size, the approach balances accuracy and inference efficiency. The model also demonstrates good robustness and application stability in real-world ship testing, with potential for further adoption in the field of intelligent ship maintenance. Full article
(This article belongs to the Section Ocean Engineering)
26 pages, 2445 KB  
Article
Image-Based Deep Learning Approach for Drilling Kick Risk Prediction
by Wei Liu, Yuansen Wei, Jiasheng Fu, Qihao Li, Yi Zou, Tao Pan and Zhaopeng Zhu
Processes 2025, 13(10), 3251; https://doi.org/10.3390/pr13103251 (registering DOI) - 13 Oct 2025
Abstract
As oil and gas exploration and development advance into deep and ultra-deep areas, kick accidents are becoming more frequent during drilling operations, posing a serious threat to construction safety. Traditional kick monitoring methods are limited in their multivariate coupling modeling; they rely too heavily on single-feature weights, making them prone to misjudgment. Therefore, this paper proposes a drilling kick risk prediction method based on the image modality. First, a sliding window mechanism is used to slice key drilling parameters in time series to extract multivariate data over continuous time periods. Second, the data are processed to construct joint logging curve image samples. Then, classical CNN models such as VGG16 and ResNet are used to train on and classify the image samples. Finally, the model’s performance is evaluated across multiple metrics and compared with CNN and time series neural network models of different structures. Experimental results show that the image-based VGG16 model outperforms typical convolutional neural network models such as AlexNet, ResNet, and EfficientNet in overall performance, and significantly outperforms LSTM and GRU time series models in classification accuracy and comprehensive discriminative power. Compared to LSTM, the recall rate increased by 23.8% and the precision increased by 5.8%, demonstrating that its convolutional structure possesses stronger perception and discriminative capabilities in extracting local spatiotemporal features and recognizing patterns, enabling more accurate identification of kick risks. Furthermore, the pre-trained VGG16 model achieved an 8.69% improvement in accuracy compared to the custom VGG16 model, fully demonstrating the effectiveness and generalization advantages of transfer learning in small-sample engineering problems and providing feasibility support for model deployment and engineering applications. Full article
(This article belongs to the Section Energy Systems)

24 pages, 3527 KB  
Article
Machine Learning-Based Validation of LDHC and SLC35G2 Methylation as Epigenetic Biomarkers for Food Allergy
by Sabire Kiliçarslan, Meliha Merve Hiz Çiçekliyurt, Serhat Kiliçarslan, Dina S. M. Hassan, Nagwan Abdel Samee and Ahmet Kurtoglu
Biomedicines 2025, 13(10), 2489; https://doi.org/10.3390/biomedicines13102489 (registering DOI) - 13 Oct 2025
Abstract
Background: Food allergies represent a growing global health concern, yet the current diagnostic methods often fail to distinguish between true allergies and food sensitivities, leading to misdiagnoses and inadequate treatment. Epigenetic alterations, such as DNA methylation (DNAm), may offer novel biomarkers for precise diagnosis. Methods: This study employed a computational machine learning framework integrated with DNAm data to identify potential biomarkers and enhance diagnostic accuracy. Differential methylation analysis was performed using the limma package to identify informative CpG features, which were then analyzed with advanced algorithms, including SVM (polynomial and RBF kernels), k-NN, Random Forest, and artificial neural networks (ANN). Deep learning via a stacked autoencoder (SAE) further enriched the analysis by uncovering epigenetic patterns and reducing feature dimensionality. To ensure robustness, the identified biomarkers were independently validated using the external dataset GSE114135. Results: The hybrid machine learning models revealed LDHC and SLC35G2 methylation as promising biomarkers for food allergy prediction. Notably, the methylation pattern of the LDHC gene showed significant potential in distinguishing individuals with food allergies from those with food sensitivity. Additionally, the integration of machine learning and deep learning provided a robust platform for analyzing complex epigenetic data. Importantly, validation on GSE114135 confirmed the reproducibility and reliability of these findings across independent cohorts. Conclusions: This study demonstrates the potential of combining machine learning with DNAm data to advance precision medicine in food allergy diagnosis. The results highlight LDHC and SLC35G2 as robust epigenetic biomarkers, validated across two independent datasets (GSE114134 and GSE114135). 
These findings underscore the importance of developing clinical tests that incorporate these biomarkers to reduce misdiagnosis and lay the groundwork for exploring epigenetic regulation in allergic diseases. Full article
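The differential methylation analysis described above uses limma's moderated t-statistic in R; a simplified stand-in using an ordinary two-sample t-statistic can be sketched in numpy (the toy data, planted signal, and site count are all hypothetical):

```python
import numpy as np

def top_cpgs(beta: np.ndarray, labels: np.ndarray, k: int) -> np.ndarray:
    """Rank CpG sites by an ordinary two-sample t-statistic (a simplified
    stand-in for limma's moderated t) and return the indices of the k
    most differentially methylated sites."""
    a, b = beta[labels == 0], beta[labels == 1]
    diff = a.mean(0) - b.mean(0)
    se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    t = np.abs(diff / (se + 1e-12))
    return np.argsort(t)[::-1][:k]

# Hypothetical toy data: 20 samples x 1000 CpG beta values.
rng = np.random.default_rng(0)
beta = rng.random((20, 1000))
labels = np.array([0] * 10 + [1] * 10)
beta[labels == 1, 5] += 0.8            # plant a strong signal at CpG 5
idx = top_cpgs(beta, labels, k=10)     # planted site should rank in top k
```

The selected CpG features would then be passed to the downstream classifiers (SVM, k-NN, Random Forest, ANN) or to the stacked autoencoder for dimensionality reduction.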
(This article belongs to the Section Molecular Genetics and Genetic Diseases)

29 pages, 2757 KB  
Article
Non-Contrast Brain CT Images Segmentation Enhancement: Lightweight Pre-Processing Model for Ultra-Early Ischemic Lesion Recognition and Segmentation
by Aleksei Samarin, Alexander Savelev, Aleksei Toropov, Aleksandra Dozortseva, Egor Kotenko, Artem Nazarenko, Alexander Motyko, Galiya Narova, Elena Mikhailova and Valentin Malykh
J. Imaging 2025, 11(10), 359; https://doi.org/10.3390/jimaging11100359 (registering DOI) - 13 Oct 2025
Abstract
Timely identification and accurate delineation of ultra-early ischemic stroke lesions in non-contrast computed tomography (CT) scans of the human brain are of paramount importance for prompt medical intervention and improved patient outcomes. In this study, we propose a deep learning-driven methodology specifically designed for segmenting ultra-early ischemic regions, with a particular emphasis on both the ischemic core and the surrounding penumbra during the initial stages of stroke progression. We introduce a lightweight preprocessing model based on convolutional filtering techniques, which enhances image clarity while preserving the structural integrity of medical scans, a critical factor when detecting subtle signs of ultra-early ischemic strokes. Unlike conventional preprocessing methods that directly modify the image and may introduce artifacts or distortions, our approach ensures the absence of neural network-induced artifacts, which is especially crucial for accurate diagnosis and segmentation of ultra-early ischemic lesions. The model employs predefined differentiable filters with trainable parameters, allowing for artifact-free and precision-enhanced image refinement tailored to the challenges of ultra-early stroke detection. In addition, we incorporate into the combined preprocessing pipeline a trainable linear combination of pretrained image filters, a concept first introduced in this study. For model training and evaluation, we utilize a publicly available dataset of acute ischemic stroke cases, focusing on the subset relevant to ultra-early stroke manifestations, which contains annotated non-contrast CT brain scans from 112 patients. The proposed model demonstrates high segmentation accuracy for ultra-early ischemic regions, surpassing existing methodologies across key performance metrics. 
The results have been rigorously validated on test subsets from the dataset, confirming the effectiveness of our approach in supporting the early-stage diagnosis and treatment planning for ultra-early ischemic strokes. Full article
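The trainable linear combination of fixed filters mentioned above might look like the following numpy sketch; the specific kernels, the softmax mixing, and the image size are illustrative assumptions rather than the paper's actual filter bank:

```python
import numpy as np

# Three predefined 3x3 kernels; only the mixing weights are trainable,
# so the output remains a plain filtered image with no network-induced
# artifacts.
KERNELS = np.array([
    [[0, 0, 0], [0, 1, 0], [0, 0, 0]],          # identity
    [[1, 2, 1], [2, 4, 2], [1, 2, 1]],          # smoothing
    [[0, -1, 0], [-1, 5, -1], [0, -1, 0]],      # sharpening
], dtype=float)
KERNELS[1] /= KERNELS[1].sum()                  # normalize the blur kernel

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D convolution of a single-channel image with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def filter_mixture(img: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Linear combination of the fixed filter bank; in training, only
    the mixing weights would receive gradients."""
    w = np.exp(weights) / np.exp(weights).sum()  # softmax over filters
    return sum(wi * conv2d(img, k) for wi, k in zip(w, KERNELS))

img = np.random.rand(16, 16)
out = filter_mixture(img, weights=np.zeros(3))   # equal mixing
print(out.shape)  # (14, 14)
```

Because every kernel is fixed and the combination is linear, the preprocessed image stays interpretable and free of learned-feature artifacts, which matches the design goal stated in the abstract.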
(This article belongs to the Section Medical Imaging)
