Search Results (301)

Search Parameters:
Keywords = preprocessing pipeline

25 pages, 28786 KiB  
Article
Text-Conditioned Diffusion-Based Synthetic Data Generation for Turbine Engine Sensor Analysis and RUL Estimation
by Luis Pablo Mora-de-León, David Solís-Martín, Juan Galán-Páez and Joaquín Borrego-Díaz
Machines 2025, 13(5), 374; https://doi.org/10.3390/machines13050374 - 30 Apr 2025
Viewed by 302
Abstract
This paper introduces a novel framework for generating synthetic time-series data from turbine engine sensor readings using a text-conditioned diffusion model. The approach begins with dataset preprocessing, including correlation analysis, feature selection, and normalization. Principal Component Analysis (PCA) transforms the normalized signals into three components, mapped to the RGB channels of an image. These components, combined with engine identifiers and cycle information, form compact 19 × 19 × 3 pixel images, later scaled to 512 × 512 × 3 pixels. A variational autoencoder (VAE)-based diffusion model, fine-tuned on these images, leverages text prompts describing engine characteristics to generate high-quality synthetic samples. A reverse transformation pipeline reconstructs synthetic images back into time-series signals, preserving the original engine-specific attributes while removing padding artifacts. The quality of the synthetic data is assessed by training Remaining Useful Life (RUL) estimation models and comparing performance across original, synthetic, and combined datasets. Results demonstrate that synthetic data can be beneficial for model training, particularly in the early epochs when working with limited datasets. Compared to existing approaches, which rely on generative adversarial networks (GANs) or deterministic transformations, the proposed framework offers enhanced data fidelity and adaptability. This study highlights the potential of text-conditioned diffusion models for augmenting time-series datasets in industrial Prognostics and Health Management (PHM) applications. Full article
(This article belongs to the Section Turbomachinery)
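The abstract's core transformation (normalize sensor channels, project onto three principal components, pack them into the RGB planes of a small image) can be sketched as follows. This is a minimal illustration with plain NumPy, not the authors' code; the toy input shape and the zero-padding choice are assumptions.

```python
import numpy as np

def sensors_to_rgb_image(signals, side=19):
    """Project multichannel sensor readings onto 3 principal components
    and pack them into the RGB channels of a side x side image.
    signals: (n_timesteps, n_sensors) array -- a hypothetical stand-in
    for the paper's preprocessed turbine sensor data."""
    # Normalize each sensor channel to zero mean / unit variance.
    x = (signals - signals.mean(axis=0)) / (signals.std(axis=0) + 1e-8)
    # PCA via SVD: the first 3 right-singular vectors span the component space.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    comps = x @ vt[:3].T                      # (n_timesteps, 3)
    # Rescale each component to the 0..255 pixel range.
    lo, hi = comps.min(axis=0), comps.max(axis=0)
    pix = (255 * (comps - lo) / (hi - lo + 1e-8)).astype(np.uint8)
    # Pad with zeros up to side*side timesteps and fold into an image.
    img = np.zeros((side * side, 3), dtype=np.uint8)
    img[:min(len(pix), side * side)] = pix[:side * side]
    return img.reshape(side, side, 3)

rng = np.random.default_rng(0)
img = sensors_to_rgb_image(rng.normal(size=(300, 14)))
print(img.shape)  # (19, 19, 3)
```

The reverse pipeline described in the abstract would invert these steps, using the stored PCA basis and scaling ranges to map pixels back to time-series values.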

14 pages, 4526 KiB  
Data Descriptor
A Complementary Dataset of Scalp EEG Recordings Featuring Participants with Alzheimer’s Disease, Frontotemporal Dementia, and Healthy Controls, Obtained from Photostimulation EEG
by Aimilia Ntetska, Andreas Miltiadous, Markos G. Tsipouras, Katerina D. Tzimourta, Theodora Afrantou, Panagiotis Ioannidis, Dimitrios G. Tsalikakis, Konstantinos Sakkas, Emmanouil D. Oikonomou, Nikolaos Grigoriadis, Pantelis Angelidis, Nikolaos Giannakeas and Alexandros T. Tzallas
Data 2025, 10(5), 64; https://doi.org/10.3390/data10050064 - 29 Apr 2025
Viewed by 135
Abstract
Research interest in the application of electroencephalogram (EEG) as a non-invasive diagnostic tool for the automated detection of neurodegenerative diseases is growing. Open-access datasets have become crucial for researchers developing such methodologies. Our previously published open-access dataset of resting-state (eyes-closed) EEG recordings from patients with Alzheimer’s disease (AD), frontotemporal dementia (FTD), and cognitively normal (CN) controls has attracted significant attention. In this paper, we present a complementary dataset consisting of eyes-open photic stimulation recordings from the same cohort. The dataset includes recordings from 88 participants (36 AD, 23 FTD, and 29 CN) and is provided in Brain Imaging Data Structure (BIDS) format, promoting consistency and ease of use across research groups. Additionally, a fully preprocessed version is included, using EEGLAB-based pipelines that involve filtering, artifact removal, and Independent Component Analysis, preparing the data for machine learning applications. This new dataset enables the study of brain responses to visual stimulation across different cognitive states and supports the development and validation of automated classification algorithms for dementia detection. It offers a valuable benchmark for both methodological comparisons and biological investigations, and it is expected to significantly contribute to the fields of neurodegenerative disease research, biomarker discovery, and EEG-based diagnostics. Full article
(This article belongs to the Special Issue Benchmarking Datasets in Bioinformatics, 2nd Edition)
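The filtering stage of an EEGLAB-style preprocessing pipeline like the one the dataset ships with can be approximated with a zero-phase band-pass filter. This is a rough SciPy-based stand-in, not the published pipeline; the 0.5–45 Hz band and the toy signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(data, fs, lo=0.5, hi=45.0, order=4):
    """Zero-phase band-pass filter, a rough stand-in for the filtering
    stage of an EEGLAB-based preprocessing pipeline.
    data: (n_channels, n_samples) EEG array, fs: sampling rate in Hz."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=1)  # filtfilt -> no phase distortion

fs = 500
t = np.arange(0, 2, 1 / fs)
# One channel: 10 Hz alpha-band activity plus 60 Hz line noise.
raw = np.vstack([np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)])
clean = bandpass_eeg(raw, fs)
```

Artifact removal and Independent Component Analysis, the other stages named in the abstract, would follow this filtering step.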

22 pages, 46829 KiB  
Article
Waveshift 2.0: An Improved Physics-Driven Data Augmentation Strategy in Fine-Grained Image Classification
by Gent Imeraj and Hitoshi Iyatomi
Electronics 2025, 14(9), 1735; https://doi.org/10.3390/electronics14091735 - 24 Apr 2025
Viewed by 190
Abstract
This paper presents Waveshift Augmentation 2.0 (WS 2.0), an enhanced version of the previously proposed Waveshift Augmentation (WS 1.0), a novel data augmentation technique inspired by light propagation dynamics in optical systems. While WS 1.0 introduced phase-based wavefront transformations under the assumption of an infinitesimally small aperture, WS 2.0 incorporates an additional aperture-dependent hyperparameter that models real-world optical attenuation. This refinement enables broader frequency modulation and greater diversity in image transformations while preserving compatibility with well-established data augmentation pipelines such as CLAHE, AugMix, and RandAugment. Evaluated across a wide range of tasks, including medical imaging, fine-grained object recognition, and grayscale image classification, WS 2.0 consistently outperformed both WS 1.0 and standard geometric augmentation. Notably, when benchmarked against geometric augmentation alone, it achieved average macro-F1 improvements of +1.48 (EfficientNetV2), +0.65 (ConvNeXt), and +0.73 (Swin Transformer), with gains of up to +9.32 points in medical datasets. These results demonstrate that WS 2.0 advances physics-based augmentation by enhancing generalization without sacrificing modularity or preprocessing efficiency, offering a scalable and realistic augmentation strategy for complex imaging domains. Full article
(This article belongs to the Special Issue New Trends in Computer Vision and Image Processing)
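The idea of a phase-based wavefront transformation in the Fourier domain can be illustrated loosely as follows. This is a toy quadratic-phase operator written for intuition only; it is not the published WS 1.0/WS 2.0 augmentation, and the `dz` parameterization is an assumption.

```python
import numpy as np

def phase_shift_augment(img, dz=0.1):
    """Toy frequency-domain augmentation in the spirit of wavefront
    propagation: multiply the image spectrum by a quadratic phase term
    parameterized by a propagation distance dz. Illustrative stand-in,
    not the published Waveshift operator."""
    f = np.fft.fft2(img)
    ky, kx = np.meshgrid(np.fft.fftfreq(img.shape[0]),
                         np.fft.fftfreq(img.shape[1]), indexing="ij")
    phase = np.exp(-1j * np.pi * dz * (kx ** 2 + ky ** 2))
    out = np.fft.ifft2(f * phase).real
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
img = rng.random((64, 64))   # hypothetical grayscale input in [0, 1]
aug = phase_shift_augment(img)
```

At `dz = 0` the operator reduces to the identity, which is the sense in which such transforms compose cleanly with pipelines like CLAHE, AugMix, and RandAugment.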

30 pages, 2587 KiB  
Systematic Review
Towards Fair AI: Mitigating Bias in Credit Decisions—A Systematic Literature Review
by José Rômulo de Castro Vieira, Flavio Barboza, Daniel Cajueiro and Herbert Kimura
J. Risk Financial Manag. 2025, 18(5), 228; https://doi.org/10.3390/jrfm18050228 - 24 Apr 2025
Viewed by 445
Abstract
The increasing adoption of artificial intelligence algorithms is redefining decision-making across various industries. In the financial sector, where automated credit granting has undergone profound changes, this transformation raises concerns about biases perpetuated or introduced by AI systems. This study investigates the methods used to identify and mitigate biases in AI models applied to credit granting. We conducted a systematic literature review using the IEEE, Scopus, Web of Science, and Science Direct databases, covering the period from 1 January 2013 to 1 October 2024. From the 414 identified articles, 34 were selected for detailed analysis. Most studies are empirical and quantitative, focusing on fairness in outcomes and biases present in datasets. Preprocessing techniques dominated as the approach for bias mitigation, often relying on public academic datasets. Gender and race were the most studied sensitive attributes, with statistical parity being the most commonly used fairness metric. The findings reveal a maturing research landscape that prioritizes fairness in model outcomes and the mitigation of biases embedded in historical data. However, only a quarter of the papers report more than one fairness metric, limiting comparability across approaches. The literature remains largely focused on a narrow set of sensitive attributes, with little attention to intersectionality or alternative sources of bias. Furthermore, no study employed causal inference techniques to identify proxy discrimination. Despite some promising results—where fairness gains exceed 30% with minimal accuracy loss—significant methodological gaps persist, including the lack of standardized metrics, overreliance on legacy data, and insufficient transparency in model pipelines. 
Future work should prioritize developing advanced bias mitigation methods, exploring sensitive attributes, standardizing fairness metrics, improving model explainability, reducing computational complexity, enhancing synthetic data generation, and addressing the legal and ethical challenges of algorithms. Full article
(This article belongs to the Section Risk)
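Statistical parity, the fairness metric the review found most common, is simple to state in code: the difference in approval rates between the protected and reference groups. A minimal sketch with hypothetical decisions:

```python
def statistical_parity_difference(y_pred, group):
    """Statistical parity difference:
    P(approve | group=1) - P(approve | group=0).
    y_pred: binary credit decisions, group: binary sensitive attribute.
    A value near 0 indicates parity; the sign shows which group is favored."""
    g1 = [y for y, g in zip(y_pred, group) if g == 1]
    g0 = [y for y, g in zip(y_pred, group) if g == 0]
    return sum(g1) / len(g1) - sum(g0) / len(g0)

# Hypothetical decisions: group 1 approved 3/4, group 0 approved 1/4.
spd = statistical_parity_difference([1, 1, 1, 0, 1, 0, 0, 0],
                                    [1, 1, 1, 1, 0, 0, 0, 0])
print(spd)  # 0.5
```

The review's point about comparability follows directly: a model judged fair under statistical parity may still fail metrics such as equalized odds, which also condition on the true outcome.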

16 pages, 3628 KiB  
Article
A Gene Ontology-Based Pipeline for Selecting Significant Gene Subsets in Biomedical Applications
by Sergii Babichev, Oleg Yarema, Igor Liakh and Nataliia Shumylo
Appl. Sci. 2025, 15(8), 4471; https://doi.org/10.3390/app15084471 - 18 Apr 2025
Viewed by 239
Abstract
The growing volume and complexity of gene expression data necessitate biologically meaningful and statistically robust methods for feature selection to enhance the effectiveness of disease diagnosis systems. The present study addresses this challenge by proposing a pipeline that integrates RNA-seq data preprocessing, differential gene expression analysis, Gene Ontology (GO) enrichment, and ensemble-based machine learning. The pipeline employs the non-parametric Kruskal–Wallis test to identify differentially expressed genes, followed by dual enrichment analysis using both Fisher’s exact test and the Kolmogorov–Smirnov test across three GO categories: Biological Process (BP), Molecular Function (MF), and Cellular Component (CC). Genes associated with GO terms found significant by both tests were used to construct multiple gene subsets, including subsets based on individual categories, their union, and their intersection. Classification experiments using a random forest model, validated via 5-fold cross-validation, demonstrated that gene subsets derived from the CC category and the union of all categories achieved the highest accuracy and weighted F1-scores, exceeding 0.97 across 14 cancer types. In contrast, subsets derived from BP, MF, and especially their intersection exhibited lower performance. These results confirm the discriminative power of spatially localized gene annotations and underscore the value of integrating statistical and functional information into gene selection. The proposed approach improves the reliability of biomarker discovery and supports downstream analyses such as clustering and biclustering, providing a strong foundation for developing precise diagnostic tools in personalized medicine. Full article
(This article belongs to the Special Issue Advances in Bioinformatics and Biomedical Engineering)
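The pipeline's first stage, screening for differentially expressed genes with the non-parametric Kruskal–Wallis test, can be sketched with SciPy. The synthetic expression matrix and the 0.05 threshold are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.stats import kruskal

def kruskal_deg_screen(expr, labels, alpha=0.05):
    """Flag differentially expressed genes with the Kruskal-Wallis test.
    expr: (n_samples, n_genes) expression matrix; labels: class per sample."""
    classes = sorted(set(labels))
    labels = np.asarray(labels)
    keep = []
    for j in range(expr.shape[1]):
        groups = [expr[labels == c, j] for c in classes]
        _, p = kruskal(*groups)       # non-parametric test across classes
        if p < alpha:
            keep.append(j)
    return keep

rng = np.random.default_rng(2)
expr = rng.normal(size=(60, 5))
labels = np.repeat([0, 1, 2], 20)
expr[labels == 2, 0] += 3.0           # make gene 0 clearly differential
hits = kruskal_deg_screen(expr, labels)
```

In the full pipeline, the surviving genes would then go through GO enrichment (Fisher's exact and Kolmogorov–Smirnov tests) before feeding the random forest classifier.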

16 pages, 4230 KiB  
Article
Automatic Adaptive Weld Seam Width Control Method for Long-Distance Pipeline Ring Welds
by Yi Zhang, Shaojie Wu and Fangjie Cheng
Sensors 2025, 25(8), 2483; https://doi.org/10.3390/s25082483 - 15 Apr 2025
Viewed by 292
Abstract
In pipeline all-position welding processes, laser scanning provides critical geometric data of width-changing bevel morphology for welding torch swing control, yet conventional second-order derivative zero methods often yield pseudo-inflection points in practical applications. To address this, a third-order derivative weighted average threshold algorithm was developed, integrating image denoising, enhancement, and segmentation pre-processing with cubic spline fitting for precise bevel contour reconstruction. Bevel pixel points captured by the laser sensor served as inputs: second-order derivative eigenvalues were extracted to derive third-order derivative features, and weighted threshold discrimination was applied to accurately identify inflection points. Dual-angle sensors were implemented to synchronize laser-detected bevel geometry with real-time torch swing adjustments. Experimental results demonstrate that the system achieves a steady-state error of only 1.645% at the maximum swing width, a dynamic response time below 50 ms, and torch center trajectory tracking errors strictly constrained within ±0.1 mm. Compared to conventional methods, the proposed algorithm improves dynamic performance by 20.6% and exhibits unique adaptability to narrow-gap V-grooves. These results confirm that the method provides real-time, accurate control for variable-width weld tracking, forming a swing-width adaptive control system. Full article
(This article belongs to the Section Sensing and Imaging)
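The core idea of replacing second-derivative zero crossings with a weighted third-derivative threshold can be illustrated on a 1-D profile. This is a finite-difference toy version under assumed data, not the paper's algorithm, which also involves denoising and cubic spline fitting of the scanned contour.

```python
import numpy as np

def inflection_candidates(profile, weight=2.0):
    """Toy version of the third-derivative weighted-threshold idea: flag
    points whose |third difference| exceeds `weight` times its mean value,
    rather than trusting second-derivative zeros (which can be spurious).
    profile: 1-D array of laser-scanned bevel heights."""
    d3 = np.abs(np.diff(profile, n=3))
    thresh = weight * d3.mean()
    # +1 recentres the diff indices roughly onto the original samples.
    return np.nonzero(d3 > thresh)[0] + 1

# Piecewise-linear bevel: flat, then a slope change around index 50.
x = np.arange(100, dtype=float)
profile = np.where(x < 50, 0.0, (x - 50) * 0.5)
idx = inflection_candidates(profile)
```

On this clean synthetic profile only the samples adjacent to the slope change are flagged; on real sensor data the denoising stages described in the abstract would come first.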

21 pages, 37932 KiB  
Article
Combined L-Band Polarimetric SAR and GPR Data to Develop Models for Leak Detection in the Water Pipeline Networks
by Yuyao Zhang, Hongliang Guan and Fuzhou Duan
Remote Sens. 2025, 17(8), 1386; https://doi.org/10.3390/rs17081386 - 14 Apr 2025
Viewed by 405
Abstract
Fast and accurate water pipeline leak detection is of great importance for water utility companies and the general public. At present, the rapid development of remote sensing and computer technologies makes it possible to detect water pipeline leaks on a large scale efficiently and in a timely manner. Leakage increases the water content and dielectric constant of the soil around the pipeline, so it is feasible to determine the leakage site by measuring the subsurface soil relative dielectric constant (SSRDC). In this paper, we combine SAOCOM-1A L-band synthetic-aperture radar (SAR) and ground-penetrating radar (GPR) data to develop regression models that predict SSRDC values. The model features are selected with the Boruta wrapper algorithm based on the SAOCOM-1A images after pre-processing, and the SSRDC values at sampling locations within the research area are calculated with the reflected wave method based on the GPR data. We evaluate multiple linear regression (MLR), random forest (RF), and multi-layer perceptron neural network (MLPNN) models for their ability to predict the SSRDC values using the selected features. The experimental results show that the MLPNN model (R2 = 0.705, RMSE = 1.936, MAE = 1.664) best estimates the SSRDC values. Further, in the main urban area of Tianjin, China, which has a large water pipeline system, the SSRDC values of the area were obtained with the best model, and locations where the predicted SSRDC values exceeded a certain threshold were considered potential leak locations. The empirical results indicate the encouraging potential of the proposed method to locate pipeline leaks, providing a new avenue for the monitoring and treatment of water pipeline leaks. Full article
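The three error metrics quoted for the SSRDC models (R², RMSE, MAE) are easy to compute directly; a minimal NumPy sketch with made-up values, useful when comparing regressors as the abstract does:

```python
import numpy as np

def regression_report(y_true, y_pred):
    """R^2, RMSE, and MAE, the metrics reported for the SSRDC models."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)                       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return {"R2": 1 - ss_res / ss_tot,
            "RMSE": float(np.sqrt(np.mean(resid ** 2))),
            "MAE": float(np.mean(np.abs(resid)))}

m = regression_report([3.0, 5.0, 7.0, 9.0], [3.0, 5.0, 7.0, 9.0])
print(m["R2"])  # 1.0
```

Note that R² compares against a mean-only baseline, so it can be negative for a model worse than predicting the mean, which is why the abstract reports RMSE and MAE alongside it.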

13 pages, 3466 KiB  
Article
A Multimodal CNN–Transformer Network for Gait Pattern Recognition with Wearable Sensors in Weak GNSS Scenarios
by Jiale Wang, Nanzhu Liu, Yuxin Xie, Shengmao Que and Ming Xia
Electronics 2025, 14(8), 1537; https://doi.org/10.3390/electronics14081537 - 10 Apr 2025
Viewed by 315
Abstract
Human motion recognition is crucial for applications like navigation, health monitoring, and smart healthcare, especially in weak GNSS scenarios. Current methods face challenges such as limited sensor diversity and inadequate feature extraction. This study proposes a CNN–Transformer–Attention framework with multimodal enhancement to address these challenges. We first designed a lightweight wearable system integrating synchronized accelerometer, gyroscope, and magnetometer modules at wrist, chest, and foot positions, enabling multi-dimensional biomechanical data acquisition. A hybrid preprocessing pipeline combining cubic spline interpolation, adaptive Kalman filtering, and spectral analysis was developed to extract discriminative spatiotemporal-frequency features. The core architecture employs parallel CNN pathways for local sensor feature extraction and Transformer-based attention layers to model global temporal dependencies across body positions. Experimental validation on 12 motion patterns demonstrated 98.21% classification accuracy, outperforming single-sensor configurations by 0.43–7.98% and surpassing conventional models (BP-Network, CNN, LSTM, Transformer, KNN) through effective cross-modal fusion. The framework also exhibits improved generalization with 3.2–8.7% better accuracy in cross-subject scenarios, providing a robust solution for human activity recognition and accurate positioning in challenging environments such as autonomous navigation and smart cities. Full article
(This article belongs to the Section Microwave and Wireless Communications)
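One stage of the hybrid preprocessing pipeline named above, cubic spline interpolation, is typically used to put irregularly timestamped IMU samples onto a uniform grid before filtering and spectral analysis. A SciPy sketch under that assumption (the 50 Hz target rate and toy signal are not from the paper):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_uniform(t, values, fs=50.0):
    """Resample irregularly timestamped sensor samples onto a uniform
    grid with a cubic spline. t: strictly increasing timestamps (s),
    values: sensor readings, fs: target sampling rate (Hz)."""
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    return grid, CubicSpline(t, values)(grid)

# 80 irregular samples of a 1 Hz sine over 2 seconds.
t = np.sort(np.random.default_rng(3).uniform(0, 2, 80))
grid, v = resample_uniform(t, np.sin(2 * np.pi * t))
```

A uniform grid is what makes the subsequent Kalman filtering and spectral-feature steps well defined, since both assume a constant sampling interval.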

19 pages, 842 KiB  
Article
Robust IoT Activity Recognition via Stochastic and Deep Learning
by Xuewei Wang, Shihao Wang, Xiaoxi Zhang and Chunsheng Li
Appl. Sci. 2025, 15(8), 4166; https://doi.org/10.3390/app15084166 - 10 Apr 2025
Viewed by 223
Abstract
In the evolving landscape of Internet of Things (IoT) applications, human activity recognition plays an important role in domains such as health monitoring, elderly care, sports training, and smart environments. However, current approaches face significant challenges: sensor data are often noisy and variable, leading to difficulties in reliable feature extraction and accurate activity identification; furthermore, ensuring data integrity and user privacy remains an ongoing concern in real-world deployments. To address these challenges, we propose a novel framework that synergizes advanced statistical signal processing with state-of-the-art machine learning and deep learning models. Our approach begins with a rigorous preprocessing pipeline—encompassing filtering and normalization—to enhance data quality, followed by the application of probability density functions and key statistical measures to capture intrinsic sensor characteristics. We then employ a hybrid modeling strategy combining traditional methods (SVM, Decision Tree, and Random Forest) and deep learning architectures (CNN, LSTM, Transformer, Swin Transformer, and TransUNet) to achieve high recognition accuracy and robustness. Additionally, our framework incorporates IoT security measures designed to safeguard data integrity and privacy, marking a significant advancement over existing methods in both efficiency and effectiveness. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
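The abstract mentions capturing "intrinsic sensor characteristics" via probability density functions and key statistical measures. A minimal sketch of such a per-sensor profile, with the bin count and moment choices as assumptions rather than the paper's exact feature set:

```python
import numpy as np

def sensor_profile(x, bins=20):
    """Summarize one sensor stream with an empirical PDF plus
    moment-based descriptors (mean, std, skewness, excess kurtosis)."""
    pdf, edges = np.histogram(x, bins=bins, density=True)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / (sigma + 1e-12)
    return {"pdf": pdf, "edges": edges, "mean": float(mu), "std": float(sigma),
            "skew": float(np.mean(z ** 3)),
            "kurtosis": float(np.mean(z ** 4) - 3)}

prof = sensor_profile(np.random.default_rng(4).normal(0.0, 1.0, 10_000))
```

Features like these would sit between the filtering/normalization stage and the SVM/tree/deep models listed in the abstract.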

14 pages, 1479 KiB  
Article
Rosette Trajectory MRI Reconstruction with Vision Transformers
by Muhammed Fikret Yalcinbas, Cengizhan Ozturk, Onur Ozyurt, Uzay E. Emir and Ulas Bagci
Tomography 2025, 11(4), 41; https://doi.org/10.3390/tomography11040041 - 1 Apr 2025
Viewed by 381
Abstract
Introduction: An efficient pipeline for rosette trajectory magnetic resonance imaging reconstruction is proposed, combining the inverse Fourier transform with a vision transformer (ViT) network enhanced with a convolutional layer. This method addresses the challenges of reconstructing high-quality images from non-Cartesian data by leveraging the ViT’s ability to handle complex spatial dependencies without extensive preprocessing. Materials and Methods: The inverse fast Fourier transform provides a robust initial approximation, which is refined by the ViT network to produce high-fidelity images. Results and Discussion: This approach outperforms established deep learning techniques for normalized root mean squared error, peak signal-to-noise ratio, and entropy-based image quality scores; offers better runtime performance; and remains competitive with respect to other metrics. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
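The cheap first stage of the pipeline, an inverse-FFT image estimate that the ViT then refines, can be sketched as below. This assumes the k-space data have already been resampled onto a Cartesian matrix; the actual rosette-trajectory gridding is omitted, and the phantom is synthetic.

```python
import numpy as np

def ifft_initial_recon(kspace):
    """Initial image estimate from Cartesian k-space via the inverse FFT,
    the inexpensive approximation a learned network can then refine.
    The fftshift/ifftshift pair keeps DC at the matrix center."""
    return np.abs(np.fft.ifftshift(np.fft.ifft2(np.fft.fftshift(kspace))))

# Round-trip sanity check on a synthetic phantom image.
rng = np.random.default_rng(5)
phantom = rng.random((32, 32))
k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
recon = ifft_initial_recon(k)
```

In the paper's setting the interesting work happens after this step: the ViT corrects the artifacts that the naive inverse transform leaves behind for non-Cartesian sampling.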

21 pages, 5405 KiB  
Article
Optimization of the Canopy Three-Dimensional Reconstruction Method for Intercropped Soybeans and Early Yield Prediction
by Xiuni Li, Menggen Chen, Shuyuan He, Xiangyao Xu, Panxia Shao, Yahan Su, Lingxiao He, Jia Qiao, Mei Xu, Yao Zhao, Wenyu Yang, Wouter H. Maes and Weiguo Liu
Agriculture 2025, 15(7), 729; https://doi.org/10.3390/agriculture15070729 - 28 Mar 2025
Viewed by 245
Abstract
Intercropping is a key cultivation strategy for safeguarding national food and oil security. Accurate early-stage yield prediction of intercropped soybeans is essential for the rapid screening and breeding of high-yield soybean varieties. As a widely used technique for crop yield estimation, the accuracy of 3D reconstruction models directly affects the reliability of yield predictions. This study focuses on optimizing the 3D reconstruction process for intercropped soybeans to efficiently extract canopy structural parameters throughout the entire growth cycle, thereby enhancing the accuracy of early yield prediction. To achieve this, we optimized image acquisition protocols by testing four imaging angles (15°, 30°, 45°, and 60°), four plant rotation speeds (0.8 rpm, 1.0 rpm, 1.2 rpm, and 1.4 rpm), and four image acquisition counts (24, 36, 48, and 72 images). Point cloud preprocessing was refined through the application of secondary transformation matrices, color thresholding, statistical filtering, and scaling. Key algorithms—including the convex hull algorithm, voxel method, and 3D α-shape algorithm—were optimized using MATLAB, enabling the extraction of multi-dimensional canopy parameters. Subsequently, a stepwise regression model was developed to achieve precise early-stage yield prediction for soybeans. The study identified optimal image acquisition settings: a 30° imaging angle, a plant rotation speed of 1.2 rpm, and the collection of 36 images during the vegetative stage and 48 images during the reproductive stage. With these improvements, a high-precision 3D canopy point-cloud model of soybeans covering the entire growth period was successfully constructed. The optimized pipeline enabled batch extraction of 23 canopy structural parameters, achieving high accuracy, with linear fitting R2 values of 0.990 for plant height and 0.950 for plant width. Furthermore, the voxel volume-based prediction approach yielded a maximum yield prediction accuracy of R2 = 0.788. 
This study presents an integrated 3D reconstruction framework, spanning image acquisition, point cloud generation, and structural parameter extraction, effectively enabling early and precise yield prediction for intercropped soybeans. The proposed method offers an efficient and reliable technical reference for acquiring 3D structural information of soybeans in strip intercropping systems and contributes to the accurate identification of soybean germplasm resources, providing substantial theoretical and practical value. Full article
(This article belongs to the Section Digital Agriculture)
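Among the canopy parameters extracted above, convex hull volume and plant height/width are straightforward to compute from a point cloud. A SciPy sketch on a toy cloud (the unit-cube example is illustrative, not soybean data):

```python
import numpy as np
from scipy.spatial import ConvexHull

def canopy_hull_metrics(points):
    """Canopy volume via the convex hull, plus height and width from the
    axis-aligned extent. points: (n, 3) x, y, z coordinates in
    consistent units (z assumed vertical)."""
    hull = ConvexHull(points)
    extent = points.max(axis=0) - points.min(axis=0)
    return {"volume": float(hull.volume),
            "height": float(extent[2]),
            "width": float(max(extent[0], extent[1]))}

# Unit-cube corner cloud: volume, height, and width should all be 1.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)
m = canopy_hull_metrics(cube)
```

The voxel and 3D α-shape methods mentioned in the abstract address the convex hull's main weakness, namely that it overestimates volume for concave canopies.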

28 pages, 3778 KiB  
Article
Dermatological Health: A High-Performance, Embedded, and Distributed System for Real-Time Facial Skin Problem Detection
by Mehdi Pirahandeh
Electronics 2025, 14(7), 1319; https://doi.org/10.3390/electronics14071319 - 26 Mar 2025
Viewed by 241
Abstract
The real-time detection of facial skin problems is crucial for improving dermatological health, yet its practical implementation remains challenging. Early detection and timely intervention can significantly enhance skin health while reducing the financial burden associated with traditional dermatological treatments. This paper introduces EM-YOLO, an advanced deep learning framework designed for embedded and distributed environments, leveraging improvements in YOLO models (versions 5, 7, and 8) for high-performance, real-time skin condition detection. The proposed architecture incorporates custom layers, including Squeeze-and-Excitation Block (SEB), Depthwise Separable Convolution (DWC), and Residual Dropout Block (RDB), to optimize feature extraction, enhance model robustness, and improve computational efficiency for deployment in resource-constrained settings. The proposed EM-YOLO model architecture clearly delineates the role of each architectural component, including preprocessing, detection, and postprocessing phases, ensuring a structured and modular representation of the detection pipeline. Extensive experiments demonstrate that EM-YOLO significantly outperforms traditional YOLO models in detecting facial skin conditions such as acne, dark circles, enlarged pores, and wrinkles. The proposed model achieves a precision of 82.30%, recall of 71.50%, F1-score of 76.40%, and mAP@0.5 of 68.80%, which are 23.52%, 32.7%, 29.34%, and 24.68% higher than standard YOLOv8, respectively. Furthermore, the enhanced YOLOv8 custom layers significantly improve system efficiency, achieving a request rate of 15 Req/s with an end-to-end latency of 0.315 s and an average processing latency of 0.021 s, demonstrating 51.61% faster inference and 200% improved throughput compared to traditional SCAS systems. 
These results highlight EM-YOLO’s superior precision, robustness, and efficiency, making it a highly effective solution for real-time dermatological detection tasks in embedded and distributed computing environments. Full article
(This article belongs to the Special Issue Recent Advances of Software Engineering)
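The reported F1-score is the harmonic mean of the reported precision and recall, which makes a quick consistency check possible. Plugging in P = 82.30% and R = 71.50% gives roughly 76.5%, close to the stated 76.40%, with the small gap presumably down to rounding in the per-class averages:

```python
def f1_from_pr(precision, recall):
    """Harmonic mean of precision and recall (the F1-score)."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_from_pr(0.8230, 0.7150), 3))  # 0.765
```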

22 pages, 7442 KiB  
Article
Optimizing Depression Classification Using Combined Datasets and Hyperparameter Tuning with Optuna
by Ștefana Duță and Alina Elena Sultana
Sensors 2025, 25(7), 2083; https://doi.org/10.3390/s25072083 - 26 Mar 2025
Viewed by 351
Abstract
This research focuses on the classification of depression states from EEG signals using the EEGNet model optimized with Optuna. The purpose was to increase model performance by combining data from healthy and depressed subjects, which ensured model robustness across datasets. The methodology comprised the construction of a preprocessing pipeline, which included noise filtering, artifact removal, and signal segmentation. Feature extraction from the time and frequency domains further captured important characteristics of the EEG signals. The model was developed on a merged dataset (DepressionRest and MDD vs. Control) and evaluated on an independent dataset, achieving 93.27% (±0.0610) accuracy with a 34.16 KB int8 model, ideal for portable EEG diagnostics. These results are promising in terms of model performance and state-of-the-art depression classification accuracy, and they suggest that the Optuna-tuned model copes adequately with the variability of real-world data. However, the model will need improvement before generalization to other datasets, such as the DepressionRest dataset, can be realized. The research identifies the advantages of EEGNet models and Optuna-based hyperparameter optimization for clinical diagnostics, with remarkable performance for deployed real-world models. Future work includes the incorporation of the model into portable clinical systems while ensuring compatibility with current EEG devices, as well as continuous improvement of model performance. Full article
(This article belongs to the Section Biomedical Sensors)
45 pages, 10045 KiB  
Article
An Automated Framework for Streamlined CFD-Based Design and Optimization of Fixed-Wing UAV Wings
by Chris Pliakos, Giorgos Efrem, Dimitrios Terzis and Pericles Panagiotou
Algorithms 2025, 18(4), 186; https://doi.org/10.3390/a18040186 - 24 Mar 2025
Abstract
The increasing complexity of UAV aerodynamic design, driven by novel configurations and requirements, has highlighted the need for efficient high-fidelity simulation tools, especially for optimization purposes. The current work presents an automated CFD framework, tailored for fixed-wing UAVs, designed to streamline wing geometry generation, mesh creation, and simulation execution into a Python-based pipeline. The framework employs a parameterized meshing module capable of handling a broad range of wing geometries within an extensive design space, thereby reducing manual effort and achieving pre-processing times on the order of five minutes. Incorporating GPU-enabled solvers and high-performance computing environments allows for rapid and scalable aerodynamic evaluations. An automated methodology for assessing the CFD results is presented, addressing discretization and iterative errors, as well as grid resolution, especially near wall surfaces. Comparisons with results produced by a specialized mechanical engineer with over five years of experience in aircraft-related CFD indicate high accuracy, with deviations below 3% for key aerodynamic metrics. A large-scale deployment further demonstrates consistency across diverse wing samples. A Bayesian Optimization case study then illustrates the framework's utility, identifying a wing design with an 8% improvement in the lift-to-drag ratio, while maintaining an average y+ value below 1 along the surface. Overall, the proposed approach streamlines fixed-wing UAV design processes and supports advanced aerodynamic optimization and data generation.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)
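The Bayesian Optimization loop used in the case study can be sketched in one dimension. This is a hedged, self-contained toy, not the paper's implementation: the `lift_to_drag` response is a made-up smooth function standing in for a CFD evaluation, the surrogate is a minimal Gaussian process with an RBF kernel, and the acquisition is a simple upper confidence bound.

```python
import numpy as np

def lift_to_drag(x):
    # Hypothetical smooth L/D response of one wing parameter, peak at x = 0.6.
    # In the paper, each evaluation would be a full CFD run.
    return 14.0 - 10.0 * (x - 0.6) ** 2

def rbf(a, b, ell=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(xs, ys, xq, noise=1e-6):
    # Zero-mean GP posterior (after centering) at query points xq.
    m = ys.mean()
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ys - m))
    Ks = rbf(xs, xq)
    mu = m + Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # rbf(x, x) == 1 on the diagonal
    return mu, np.sqrt(np.maximum(var, 0.0))

# Initial design plus a fine candidate grid.
xs = np.array([0.1, 0.5, 0.9])
ys = lift_to_drag(xs)
grid = np.linspace(0.0, 1.0, 201)

for _ in range(12):
    mu, sd = gp_posterior(xs, ys, grid)
    ucb = mu + 2.0 * sd                # upper-confidence-bound acquisition
    xn = grid[np.argmax(ucb)]          # next design point to evaluate
    xs = np.append(xs, xn)
    ys = np.append(ys, lift_to_drag(xn))

best = xs[np.argmax(ys)]
print(best)  # converges near the optimum at x = 0.6
```

The same loop generalizes to the multi-parameter wing design space by replacing the 1-D grid with a higher-dimensional candidate set and each `lift_to_drag` call with an automated mesh-and-solve pipeline run.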
20 pages, 3410 KiB  
Article
An Efficient Convolutional Neural Network Accelerator Design on FPGA Using the Layer-to-Layer Unified Input Winograd Architecture
by Jie Li, Yong Liang, Zhenhao Yang and Xinhai Li
Electronics 2025, 14(6), 1182; https://doi.org/10.3390/electronics14061182 - 17 Mar 2025
Abstract
Convolutional Neural Networks (CNNs) have found widespread applications in artificial intelligence fields such as computer vision and edge computing. However, as input data dimensionality and convolutional model depth continue to increase, deploying CNNs on edge and embedded devices faces significant challenges, including high computational demands, excessive hardware resource consumption, and prolonged computation times. In contrast, the Decomposable Winograd Method (DWM), which decomposes large-size or large-stride kernels into smaller kernels, provides a more efficient solution for inference acceleration in resource-constrained environments. This work proposes an approach employing a layer-to-layer unified input transformation based on the Decomposable Winograd Method, which reduces computational complexity in the feature transformation unit through system-level parallel pipelining and operation reuse. Additionally, we introduce a reconfigurable, column-indexed Winograd computation unit design to minimize hardware resource consumption, together with flexible data access patterns that support efficient computation. Finally, we propose a preprocessing shift network system that enables low-latency data access and dynamic selection of the Winograd computation unit. Experimental evaluations on VGG-16 and ResNet-18 networks demonstrate that our accelerator, deployed on the Xilinx XC7Z045 platform, achieves an average throughput of 683.26 GOPS. Compared to existing approaches, the design improves DSP efficiency (GOPS/DSP) by 5.8×.
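The Winograd transform at the heart of such accelerators can be illustrated with the classic F(2,3) case: two outputs of a 1-D, stride-1, 3-tap convolution computed with 4 multiplications instead of 6. This is a minimal NumPy sketch of the standard algorithm, not the paper's hardware design; in the accelerator the kernel transform `G @ g` would be precomputed and the elementwise stage mapped onto DSP slices.

```python
import numpy as np

# Standard Winograd F(2,3) transform matrices.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4-sample input tile, g: 3-tap kernel -> 2 convolution outputs."""
    U = G @ g            # kernel transform (precomputable, reused per layer)
    V = BT @ d           # input transform
    return AT @ (U * V)  # 4 elementwise multiplies + output transform

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 0.0, -1.0])
print(winograd_f23(d, g))                        # [-2. -2.]
# Matches the direct sliding dot product y[i] = sum_k d[i+k] * g[k]:
print([np.dot(d[i:i + 3], g) for i in range(2)])  # [-2.0, -2.0]
```

Tiling a long signal into overlapping 4-sample windows (stride 2) extends this to full-length convolutions, which is the per-tile computation the accelerator pipelines layer to layer.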