Search Results (657)

Search Parameters:
Keywords = Gaussian similarity

19 pages, 5024 KB  
Article
A Study on Geometrical Consistency of Surfaces Using Partition-Based PCA and Wavelet Transform in Classification
by Vignesh Devaraj, Thangavel Palanisamy and Kanagasabapathi Somasundaram
AppliedMath 2025, 5(4), 134; https://doi.org/10.3390/appliedmath5040134 - 3 Oct 2025
Abstract
The proposed study explores the consistency of the geometrical character of surfaces under scaling, rotation, and translation. Beyond its mathematical significance, the approach also offers advantages in image processing and economic applications. In this paper, the authors use partition-based principal component analysis, similar to two-dimensional Sub-Image Principal Component Analysis (SIMPCA), together with a suitably modified atypical wavelet transform for the classification of 2D images. The proposed framework is further extended to three-dimensional objects using machine learning classifiers. To strengthen fairness, we benchmarked against both Random Forest (RF) and Support Vector Machine (SVM) classifiers using nested cross-validation, showing consistent gains when TIFV is included. In addition, we carried out a robustness analysis by introducing Gaussian noise to the intensity channel, confirming that TIFV degrades much more gracefully than traditional descriptors. Experimental results demonstrate that the method achieves improved performance compared with traditional hand-crafted descriptors such as measured values and the histogram of oriented gradients. A further benefit is that the proposed algorithm can establish consistency locally, which is not possible without partitioning, while also reducing computational complexity by a reasonable amount. We note that comparisons with deep learning baselines are beyond the scope of this study, and our contribution is positioned within the domain of interpretable, affine-invariant descriptors that enhance classical machine learning pipelines. Full article
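The robustness analysis mentioned in this abstract (adding Gaussian noise to the intensity channel and tracking how a descriptor degrades) can be illustrated with a minimal sketch. The descriptor below is a stand-in intensity histogram, not the authors' TIFV; the image, noise levels, and cosine-similarity metric are assumptions made for illustration.

```python
import numpy as np

def intensity_histogram(img, bins=32):
    """Stand-in descriptor: normalized intensity histogram (not the authors' TIFV)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
clean = rng.random((128, 128))          # placeholder intensity channel in [0, 1]
ref = intensity_histogram(clean)

# Degradation of the descriptor as zero-mean Gaussian noise grows.
for sigma in (0.01, 0.05, 0.1, 0.2):
    noisy = np.clip(clean + rng.normal(0.0, sigma, clean.shape), 0.0, 1.0)
    sim = cosine_similarity(ref, intensity_histogram(noisy))
    print(f"sigma={sigma:.2f}  descriptor similarity={sim:.4f}")
```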

25 pages, 6100 KB  
Article
UAV Image Denoising and Its Impact on Performance of Object Localization and Classification in UAV Images
by Rostyslav Tsekhmystro, Vladimir Lukin and Dmytro Krytskyi
Computation 2025, 13(10), 234; https://doi.org/10.3390/computation13100234 - 3 Oct 2025
Abstract
Unmanned aerial vehicles (UAVs) have become a tool for solving numerous practical tasks. UAV sensors provide images and videos for online or offline processing in object localization, classification, and tracking, based on trained convolutional neural networks (CNNs) and artificial intelligence. However, the quality of images acquired by UAV-based sensors is not always perfect, and one of the degrading factors can be noise arising for several reasons. Its presence, especially when the noise is intense, can significantly worsen the performance of CNN-based object localization and classification. We analyze such degradation for a set of eleven modern CNNs under an additive white Gaussian noise model and study when (for what noise intensity and for what CNN) the performance reduction becomes essential and special means of improvement become desirable. Representatives of the two most popular denoiser families, namely the block-matching 3-dimensional (BM3D) filter and the DRUNet denoiser, are employed to enhance images under the condition of a priori known noise properties. It is shown that, owing to preliminary denoising, CNN performance can be restored almost to the level achieved on noise-free images without CNN retraining. Performance is analyzed using several criteria typical for image denoising, object localization, and classification. Examples of object localization and classification are presented that demonstrate possible object misses due to noise. Computational efficiency is also taken into account. Using a large set of test data, it is demonstrated that: (1) the best results are usually provided by the SSD MobileNet V2 and VGG16 networks; (2) the performance characteristics when applying the BM3D filter and the DRUNet denoiser are similar, but the use of DRUNet is preferable since it provides slightly better results. Full article
(This article belongs to the Section Computational Engineering)
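The degradation model studied here, additive white Gaussian noise (AWGN), together with the PSNR criterion commonly used to assess image quality before and after denoising, can be sketched as follows. This is a generic NumPy illustration, not the authors' evaluation pipeline; the placeholder image and noise levels are assumptions, and BM3D/DRUNet are not reproduced.

```python
import numpy as np

def add_awgn(img, sigma, rng):
    """Additive white Gaussian noise with standard deviation sigma (8-bit intensity scale)."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference image and a distorted one."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(42)
clean = rng.integers(0, 256, size=(256, 256)).astype(float)  # placeholder image

for sigma in (5, 10, 20, 30):          # assumed AWGN intensities
    noisy = add_awgn(clean, sigma, rng)
    print(f"sigma={sigma:3d}  PSNR(noisy)={psnr(clean, noisy):.2f} dB")
```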

21 pages, 4397 KB  
Article
Splatting the Cat: Efficient Free-Viewpoint 3D Virtual Try-On via View-Decomposed LoRA and Gaussian Splatting
by Chong-Wei Wang, Hung-Kai Huang, Tzu-Yang Lin, Hsiao-Wei Hu and Chi-Hung Chuang
Electronics 2025, 14(19), 3884; https://doi.org/10.3390/electronics14193884 - 30 Sep 2025
Abstract
As Virtual Try-On (VTON) technology matures, 2D VTON methods based on diffusion models can now rapidly generate diverse and high-quality try-on results. However, with rising user demands for realism and immersion, many applications are shifting towards 3D VTON, which offers superior geometric and spatial consistency. Existing 3D VTON approaches commonly face challenges such as barriers to practical deployment, substantial memory requirements, and cross-view inconsistencies. To address these issues, we propose an efficient 3D VTON framework with robust multi-view consistency, whose core design is to decouple the monolithic 3D editing task into a four-stage cascade as follows: (1) We first reconstruct an initial 3D scene using 3D Gaussian Splatting, integrating the SMPL-X model at this stage as a strong geometric prior. By computing a normal-map loss and a geometric consistency loss, we ensure the structural stability of the initial human model across different views. (2) We employ the lightweight CatVTON to generate 2D try-on images, which provide visual guidance for the subsequent personalized fine-tuning tasks. (3) To accurately represent garment details from all angles, we partition the 2D dataset into three subsets—front, side, and back—and train a dedicated LoRA module for each subset on a pre-trained diffusion model. This strategy effectively mitigates the issue of blurred details that can occur when a single model attempts to learn global features. (4) An iterative optimization process then uses the generated 2D VTON images and specialized LoRA modules to edit the 3DGS scene, achieving 360-degree free-viewpoint VTON results. All our experiments were conducted on a single consumer-grade GPU with 24 GB of memory, a significant reduction from the 32 GB or more typically required by previous studies under similar data and parameter settings. Our method balances quality and memory requirements, significantly lowering the adoption barrier for 3D VTON technology. Full article
(This article belongs to the Special Issue 2D/3D Industrial Visual Inspection and Intelligent Image Processing)

27 pages, 7020 KB  
Article
RPC Correction Coefficient Extrapolation for KOMPSAT-3A Imagery in Inaccessible Regions
by Namhoon Kim
Remote Sens. 2025, 17(19), 3332; https://doi.org/10.3390/rs17193332 - 29 Sep 2025
Abstract
High-resolution pushbroom satellites routinely acquire multi-ten-kilometer-scale strips whose vendors’ rational polynomial coefficients (RPCs) exhibit systematic, direction-dependent biases that accumulate downstream when ground control is sparse. This study presents a physically interpretable stripwise extrapolation framework that predicts along- and across-track RPC correction coefficients for inaccessible segments from an upstream calibration subset. Terrain-independent RPCs were regenerated and residual image-space errors were modeled with weighted least squares using elapsed time, off-nadir evolution, and morphometric descriptors of the target terrain. Gaussian kernel weights favor calibration scenes with a Jarque–Bera-indexed relief similar to the target. When applied to three KOMPSAT-3A panchromatic strips, the approach preserves native scene geometry while transporting calibrated coefficients downstream, reducing positional errors in two strips to <2.8 pixels (~2.0 m at 0.710 m Ground Sample Distance, GSD). The first strip, with stronger attitude drift, retains 4.589-pixel along-track errors, indicating the need for wider predictor coverage under aggressive maneuvers. The results clarify the directional error structure with a near-constant across-track bias and low-frequency along-track drift and show that a compact predictor set can stabilize extrapolation without full-block adjustment or dense tie networks. This provides a GCP-efficient alternative to full-block adjustment and enables accurate georeferencing in controlled environments. Full article
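The central numerical step described above, weighted least squares in which Gaussian kernel weights favor calibration scenes whose relief index is close to the target's, can be sketched as follows. The predictors, relief-index values, and kernel bandwidth below are hypothetical placeholders, not the KOMPSAT-3A data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration scenes: design matrix (intercept, elapsed time, off-nadir angle)
# and observed along-track RPC correction coefficients.
X = np.column_stack([np.ones(8), rng.random(8), rng.random(8)])
relief = rng.random(8)                                           # assumed relief (JB-type) index per scene
y = X @ np.array([0.5, 1.2, -0.8]) + rng.normal(0, 0.05, 8)      # synthetic correction observations

# Gaussian kernel weights: scenes with relief similar to the target count more.
target_relief = 0.4
bandwidth = 0.2                                                  # assumed kernel width
w = np.exp(-0.5 * ((relief - target_relief) / bandwidth) ** 2)

# Weighted least squares: solve (X^T W X) beta = X^T W y.
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("estimated correction-model coefficients:", beta)
```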

21 pages, 40899 KB  
Article
Optimizing the Layout of Primary Healthcare Facilities in Harbin’s Main Urban Area, China: A Resilience Perspective
by Bingbing Wang and Ming Sun
Sustainability 2025, 17(19), 8706; https://doi.org/10.3390/su17198706 - 27 Sep 2025
Abstract
Under the dual backdrop of the Healthy China strategy and the concept of sustainable development, optimizing the spatial layout of primary healthcare facilities is important for fairly distributing healthcare resources and strengthening the resilience of the public health system in a sustainable way. This study introduces an innovative 3D spatial resilience evaluation framework, covering transmission (service accessibility), diversity (facility type matching), and stability (supply-demand balance). Unlike traditional accessibility studies, the concept of “resilience” here highlights a system’s ability to adapt to sudden public health events through spatial reorganization, contrasting sharply with vulnerable systems that lack resilience. Method-wise, the study uses an improved Gaussian two-step floating catchment area method (Ga2SFCA) to measure spatial accessibility, applies a geographically weighted regression model (GWR) to analyze spatial heterogeneity factors, combines network analysis tools to assess service coverage efficiency, and uses spatial overlay analysis to identify areas with supply-demand imbalances. Harbin, the capital of Heilongjiang Province in northeastern China, was chosen as the case study because it is a typical central city of the northeast region, with a large population and clear regional differences. The case study in Harbin’s main urban area shows clear spatial differences in medical accessibility. Daoli, Nangang, and Xiangfang form a highly accessible cluster, while Songbei and Daowai show clear service gaps. The GWR model reveals that population density and facility density are key factors driving differences in service accessibility. LISA cluster analysis identifies two typical hot spots with supply-demand imbalances: northern Xiangfang and southern Songbei. Finally, based on these findings, recommendations are made to increase appropriate-level medical facilities, offering useful insights for fine-tuning the spatial layout of basic healthcare facilities in similar large cities. Full article
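The Gaussian two-step floating catchment area (Ga2SFCA) computation at the heart of the accessibility analysis can be sketched with the standard formulation below; the facility capacities, demand points, travel costs, and catchment radius are toy values, not the Harbin data, and the paper's improvements to Ga2SFCA are not reproduced.

```python
import numpy as np

def gaussian_decay(d, d0):
    """Gaussian distance-decay weight, normalized to 1 at d=0 and 0 at the catchment edge d0."""
    w = (np.exp(-0.5 * (d / d0) ** 2) - np.exp(-0.5)) / (1.0 - np.exp(-0.5))
    return np.where(d <= d0, w, 0.0)

# Toy data (assumed): 4 demand points, 3 primary healthcare facilities.
population = np.array([1200.0, 800.0, 1500.0, 600.0])     # demand at each point
supply = np.array([10.0, 6.0, 8.0])                        # e.g., medical staff per facility
dist = np.array([[1.0, 3.0, 6.0],                          # travel cost from demand i to facility j (km)
                 [2.0, 1.5, 5.0],
                 [4.0, 2.0, 1.0],
                 [7.0, 6.0, 2.5]])
d0 = 5.0                                                    # assumed catchment radius

w = gaussian_decay(dist, d0)

# Step 1: supply-to-demand ratio of each facility within its catchment.
R = supply / (w * population[:, None]).sum(axis=0)

# Step 2: accessibility of each demand point = sum of reachable, decay-weighted ratios.
A = (w * R[None, :]).sum(axis=1)
print("spatial accessibility per demand point:", np.round(A, 5))
```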

18 pages, 5562 KB  
Article
Symmetry-Aware Face Illumination Enhancement via Pixel-Adaptive Curve Mapping
by Jieqiong Yang, Yumeng Lu, Jiaqi Liu and Jizheng Yi
Symmetry 2025, 17(9), 1560; https://doi.org/10.3390/sym17091560 - 18 Sep 2025
Abstract
Face recognition under uneven illumination conditions presents significant challenges, as asymmetric shadows often obscure facial features while overexposed regions lose critical texture details. To address this problem, a novel symmetry-aware illumination enhancement method named face shadow detection network (FSDN) is proposed, which features a nested U-Net architecture combined with Gaussian convolution. This method enables precise illumination intensity maps for the given face images through higher-order quadratic enhancement curves, effectively extending the low-light dynamic range while preserving essential facial symmetry. Comprehensive evaluations on the Extended Yale B and CMU-PIE datasets demonstrate the superiority of the proposed FSDN over conventional approaches, achieving structural similarity (SSIM) indices of 0.48 and 0.59, respectively, along with remarkably low face recognition error rates of 1.3% and 0.2%, respectively. The key innovation of this work lies in its simultaneous optimization of illumination uniformity and facial symmetry preservation, thereby significantly improving face analysis reliability under challenging lighting conditions. Full article
(This article belongs to the Section Computer)
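The higher-order quadratic enhancement curve mentioned in the abstract can be illustrated with a short sketch. The iterative form below (x ← x + α·x·(1−x), repeated with a per-pixel α) follows the common pixel-adaptive curve-mapping formulation used in low-light enhancement; the α map and iteration count are assumptions, not the output of the FSDN network described in the paper.

```python
import numpy as np

def quadratic_curve_enhance(img, alpha_map, n_iter=4):
    """Apply the pixel-adaptive quadratic curve x <- x + alpha * x * (1 - x) iteratively.

    img is a float array in [0, 1]; alpha_map holds per-pixel curve parameters in [-1, 1].
    Repeating the quadratic mapping n_iter times yields the higher-order behaviour.
    """
    x = img.copy()
    for _ in range(n_iter):
        x = np.clip(x + alpha_map * x * (1.0 - x), 0.0, 1.0)
    return x

rng = np.random.default_rng(0)
face = rng.random((64, 64)) * 0.3          # placeholder under-exposed face region
alpha = np.full_like(face, 0.8)            # assumed per-pixel curve parameters (constant here)
enhanced = quadratic_curve_enhance(face, alpha)
print("mean intensity before/after:", face.mean().round(3), enhanced.mean().round(3))
```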

15 pages, 1980 KB  
Article
Optimizing the Artificial Aging Process of Lubricating Oils Contaminated by Alternative Fuel Using Design of Experiments Methodology
by Dominika Pintér and András Lajos Nagy
Lubricants 2025, 13(9), 405; https://doi.org/10.3390/lubricants13090405 - 11 Sep 2025
Abstract
This study aimed to develop an experimental method for producing artificially aged oil with properties—such as coefficient of friction, average wear scar diameter, and antiwear additive content—similar to those of used oil contaminated with alternative fuel, sampled after 129 h of engine test bench operation. A design of experiment (DoE) methodology was applied to examine the effects of various parameters and identify optimal settings. Friction and wear tests were conducted using an Optimol SRV5 tribometer in a ball-on-disc configuration, while wear scars were analyzed with a Keyence VHX-1000 digital microscope. Oil analysis was conducted with an Anton Paar 3001 viscometer and a Bruker Invenio-S Fourier-transform infrared spectrometer. The DoE results showed that the heating duration had a negligible effect on oil degradation. Aging time primarily affected changes in the friction coefficient and average wear scar diameter, whereas aging temperature was the primary factor influencing the anti-wear additive content. Gaussian elimination identified the optimal aging parameters as 132.8 °C and 103.1 h. These results were confirmed through surface analysis using a ThermoFisher NexsaG2 X-ray photoelectron spectrometer, which showed that the tribofilm composition of the used oil most closely matched that of artificially aged oils prepared at 120 °C for 96 h and 140 °C for 120 h. The strong correlation between the predicted and experimentally confirmed conditions demonstrates the reliability of the proposed method for replicating realistic aging effects in lubricating oils. Full article
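Gaussian elimination, cited above as the tool used to pin down the optimal aging parameters, can be sketched in a few lines. The 3×3 system below is an arbitrary placeholder, not the study's DoE regression equations.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting and back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]  # swap current row with pivot row
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                   # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gaussian_elimination(A, b))                    # expected: [ 2.  3. -1.]
```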

22 pages, 2230 KB  
Article
A Load Forecasting Model Based on Spatiotemporal Partitioning and Cross-Regional Attention Collaboration
by Xun Dou, Ruiang Yang, Zhenlan Dou, Chunyan Zhang, Chen Xu and Jiacheng Li
Sustainability 2025, 17(18), 8162; https://doi.org/10.3390/su17188162 - 10 Sep 2025
Abstract
With the advancement of new power system construction, thermostatically controlled loads represented by regional air conditioning systems are being extensively integrated into the grid, leading to a surge in the number of user nodes. This large-scale integration of new loads creates challenges for the grid, as the resulting load data exhibits strong periodicity and randomness over time. These characteristics are influenced by factors like temperature and user behavior. At the same time, spatially adjacent nodes show similarities and clustering in electricity usage. This creates complex spatiotemporal coupling features. These complex spatiotemporal characteristics challenge traditional forecasting methods. Their high model complexity and numerous parameters often lead to overfitting or the curse of dimensionality, which hinders both prediction accuracy and efficiency. To address this issue, this paper proposes a load forecasting method based on spatiotemporal partitioning and collaborative cross-regional attention. First, a spatiotemporal similarity matrix is constructed using the Shape Dynamic Time Warping (ShapeDTW) algorithm and an adaptive Gaussian kernel function based on the Haversine distance. Spectral clustering combined with the Gap Statistic criterion is then applied to adaptively determine the optimal number of partitions, dividing all load nodes in the power grid into several sub-regions with homogeneous spatiotemporal characteristics. Second, for each sub-region, a local Spatiotemporal Graph Convolutional Network (STGCN) model is built. By integrating gated temporal convolution with spatial feature extraction, the model accurately captures the spatiotemporal evolution patterns within each sub-region. On this basis, a cross-regional attention mechanism is designed to dynamically learn the correlation weights among sub-regions, enabling collaborative fusion of global features. Finally, the proposed method is evaluated on a multi-node load dataset. The effectiveness of the approach is validated through comparative experiments and ablation studies (that is, by removing key components of the model to evaluate their contribution to the overall performance). Experimental results demonstrate that the proposed method achieves excellent performance in short-term load forecasting tasks across multiple nodes. Full article
(This article belongs to the Special Issue Energy Conservation Towards a Low-Carbon and Sustainability Future)
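The spatial part of the similarity construction described above (a Gaussian kernel over Haversine distances, followed by spectral clustering into sub-regions) can be sketched as follows. Node coordinates, the kernel bandwidth, and the number of clusters are placeholders; the ShapeDTW temporal term and the Gap Statistic selection are omitted.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in kilometres between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# Hypothetical load-node coordinates (lat, lon in degrees).
coords = np.array([[32.00, 118.70], [32.10, 118.80], [32.05, 118.75],
                   [31.60, 119.20], [31.65, 119.25], [31.70, 119.30]])

n = len(coords)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = haversine_km(*coords[i], *coords[j])

sigma = np.median(D[D > 0])                     # assumed adaptive kernel bandwidth
affinity = np.exp(-D ** 2 / (2 * sigma ** 2))   # Gaussian kernel similarity matrix

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print("sub-region assignment per node:", labels)
```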

42 pages, 6378 KB  
Article
Advances in Imputation Strategies Supporting Peak Storm Surge Surrogate Modeling
by WoongHee Jung, Christopher Irwin, Alexandros A. Taflanidis, Norberto C. Nadal-Caraballo, Luke A. Aucoin and Madison C. Yawn
J. Mar. Sci. Eng. 2025, 13(9), 1678; https://doi.org/10.3390/jmse13091678 - 31 Aug 2025
Abstract
Surrogate models are widely recognized as effective, data-driven predictive tools for storm surge risk assessment. For such applications, surrogate models (referenced also as emulators or metamodels) are typically developed using existing databases of synthetic storm simulations, and once calibrated can provide fast-to-compute approximations of the storm surge for a variety of downstream analyses. The storm surge predictions need to be established for different geographic locations of interest, typically corresponding to the computational nodes of the original numerical model. A number of inland nodes will remain dry for some of the database storm scenarios, requiring an imputation for them to estimate the so-called pseudo-surge in support of the surrogate model development. Past work has examined the adoption of kNN (k-nearest neighbor) spatial interpolation for this imputation. The enhancement of kNN with hydraulic connectivity information, using the grid or mesh of the original numerical model, was also previously considered. In this enhancement, neighboring nodes are treated as connected only if they are linked within the grid. This work revisits the imputation of peak storm surge within a surrogate modeling context and examines three distinct advancements. First, a response-based correlation concept is considered for the hydraulic connectivity, replacing the previous notion of connectivity based on the numerical model grid. Second, a Gaussian Process interpolation (GPI) is examined as an alternative spatial imputation strategy, integrating a recently established adaptive covariance tapering scheme to accommodate an efficient implementation for large datasets (large number of nodes). Third, a data completion approach is examined for imputation, treating dry instances as missing data and establishing imputation using probabilistic principal component analysis (PPCA). The combination of spatial imputation with PPCA is also examined. In this instance, spatial imputation is first deployed, followed by PPCA for the nodes that were misclassified in the first stage. Misclassification corresponds to the instances for which imputation provides surge estimates higher than ground elevation, creating the illusion that the node is inundated even though the original predictions correspond to the node being dry. In the illustrative case study, different imputation variants established based on the aforementioned advancements are compared, with comparison metrics corresponding to the predictive accuracy of the surrogate models developed using the imputed databases. Results show that incorporating hydraulic connectivity based on response similarity into kNN enhances the predictive performance, that GPI provides a competitive (to kNN) spatial interpolation approach, and that the combination of data completion and spatial interpolation emerges as the recommended approach. Full article
(This article belongs to the Special Issue Machine Learning in Coastal Engineering)
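The Gaussian Process interpolation (GPI) variant, spatially imputing pseudo-surge at dry nodes from surrounding wet-node values, can be sketched with scikit-learn. The node coordinates and surge values below are synthetic, and the adaptive covariance tapering used in the paper for large meshes is not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic wet nodes: coordinates (x, y) with observed peak surge (m).
wet_xy = rng.uniform(0, 10, size=(40, 2))
wet_surge = 1.5 + 0.2 * wet_xy[:, 0] - 0.1 * wet_xy[:, 1] + rng.normal(0, 0.05, 40)

# Dry nodes where pseudo-surge must be imputed for surrogate-model training.
dry_xy = rng.uniform(0, 10, size=(5, 2))

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(wet_xy, wet_surge)

pseudo_surge, std = gp.predict(dry_xy, return_std=True)
print("imputed pseudo-surge:", np.round(pseudo_surge, 3))
print("predictive std:      ", np.round(std, 3))
```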

11 pages, 659 KB  
Article
Spectrum Analysis of Thermally Driven Curvature Inversion in Strained Graphene Ripples for Energy Conversion Applications via Molecular Dynamics
by James M. Mangum, Md R. Kabir, Tamzeed B. Amin, Syed M. Rahman, Ashaduzzaman and Paul M. Thibado
Nanomaterials 2025, 15(17), 1332; https://doi.org/10.3390/nano15171332 - 29 Aug 2025
Abstract
The extraordinary mechanical flexibility, high electrical conductivity, and nanoscale instability of freestanding graphene make it an excellent candidate for vibration energy harvesting. When freestanding graphene is stretched taut and subject to external forces, it will vibrate like a drum head. Its vibrations occur at a fundamental frequency along with higher-order harmonics. Alternatively, when freestanding graphene is compressed, it will arch slightly out of the plane or buckle under the load. Remaining flat under compression would be energetically too costly compared to simple bond rotations. Buckling up or down, also known as ripple formation, naturally creates a bistable situation. When the compressed system vibrates between its two low-energy states, it must pass through the high-energy middle. The greater the compression, the higher the energy barrier. The system can still oscillate but the frequency will drop far below the fundamental drum-head frequency. The low frequencies combined with the large-scale movement and the large number of atoms coherently moving are key factors addressed in this study. Ten ripples with increasing compressive strain were built, and each was studied at five different temperatures. Increasing the temperature has a similar effect as increasing the compressive strain. Analysis of the average time between curvature inversion events allowed us to quantify the energy barrier height. When the low-frequency bistable data were time-averaged, the authors found that the velocity distribution shifts from the expected Gaussian to a heavy-tailed Cauchy (Lorentzian) distribution, which is important for energy harvesting applications. Full article
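The distributional comparison reported above (time-averaged velocities shifting from a Gaussian to a heavy-tailed Cauchy) can be illustrated with a short SciPy sketch; the sample here is synthetic Cauchy-distributed data, not molecular-dynamics output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
velocities = stats.cauchy.rvs(loc=0.0, scale=1.0, size=5000, random_state=rng)  # synthetic sample

# Fit both candidate distributions by maximum likelihood.
mu, sigma = stats.norm.fit(velocities)
loc, gamma = stats.cauchy.fit(velocities)

# Compare total log-likelihoods: the heavy-tailed Cauchy should win on this sample.
ll_norm = stats.norm.logpdf(velocities, mu, sigma).sum()
ll_cauchy = stats.cauchy.logpdf(velocities, loc, gamma).sum()
print(f"Gaussian log-likelihood: {ll_norm:.1f}")
print(f"Cauchy   log-likelihood: {ll_cauchy:.1f}")
```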

13 pages, 7032 KB  
Article
Frequency-Domain Gaussian Cooperative Filtering Demodulation Method for Spatially Modulated Full-Polarization Imaging Systems
by Ziyang Zhang, Pengbo Ma, Shixiao Ye, Song Ye, Wei Luo, Shu Li, Wei Xiong, Yuting Zhang, Wentao Zhang, Fangyuan Wang, Jiejun Wang, Xinqiang Wang and Niyan Chen
Photonics 2025, 12(9), 857; https://doi.org/10.3390/photonics12090857 - 26 Aug 2025
Abstract
The spatially modulated full-polarization imaging system encodes complete polarization information into a single interferogram, enabling rapid demodulation. However, traditional single Gaussian low-pass filtering cannot adequately suppress crosstalk among Stokes components, leading to reduced accuracy. To address this issue, this paper proposes a frequency-domain Gaussian cooperative filter (FGCF) based on a divide-and-conquer strategy in the frequency domain. Specifically, the method employs six Gaussian high-pass filters to effectively identify and suppress interference signals located at different positions in the frequency domain, while utilizing a single Gaussian low-pass filter to preserve critical polarization information within the image. Through the cooperative processing of the low-pass filter response and the complementary responses of the high-pass filters, simultaneous optimization of information retention and interference suppression is achieved. Simulation and real-scene experiments show that FGCF significantly enhances demodulation quality, especially for S1, and achieves superior structural similarity compared with traditional low-pass filtering. Full article
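The cooperative filtering idea, one Gaussian low-pass retaining the baseband polarization term while several Gaussian high-pass (notch-style) responses suppress interference components at known frequency-plane positions, can be sketched as below. The interferogram, the carrier positions, and the filter widths are placeholders chosen for illustration, not the system's actual modulation geometry.

```python
import numpy as np

def gaussian_lowpass(shape, sigma):
    """Centered Gaussian low-pass transfer function on an fftshifted frequency grid."""
    h, w = shape
    v, u = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    return np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))

def gaussian_highpass(shape, center, sigma):
    """1 minus a Gaussian bump at 'center': suppresses one interference component."""
    h, w = shape
    v, u = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    return 1.0 - np.exp(-((u - center[0]) ** 2 + (v - center[1]) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
interferogram = rng.random((256, 256))                 # placeholder spatially modulated frame

F = np.fft.fftshift(np.fft.fft2(interferogram))

# Cooperative response: a kept low-pass band multiplied by six high-pass suppressions
# at assumed carrier positions in the frequency plane.
H = gaussian_lowpass(F.shape, sigma=20.0)
for c in [(40, 0), (-40, 0), (0, 40), (0, -40), (40, 40), (-40, -40)]:
    H *= gaussian_highpass(F.shape, c, sigma=8.0)

demodulated = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
print("demodulated baseband term shape:", demodulated.shape)
```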

26 pages, 389 KB  
Article
Integrating AI with Meta-Language: An Interdisciplinary Framework for Classifying Concepts in Mathematics and Computer Science
by Elena Kramer, Dan Lamberg, Mircea Georgescu and Miri Weiss Cohen
Information 2025, 16(9), 735; https://doi.org/10.3390/info16090735 - 26 Aug 2025
Abstract
Providing students with effective learning resources is essential for improving educational outcomes—especially in complex and conceptually diverse fields such as Mathematics and Computer Science. To better understand how these subjects are communicated, this study investigates the linguistic structures embedded in academic texts from selected subfields within both disciplines. In particular, we focus on meta-languages—the linguistic tools used to express definitions, axioms, intuitions, and heuristics within a discipline. The primary objective of this research is to identify which subfields of Mathematics and Computer Science share similar meta-languages. Identifying such correspondences may enable the rephrasing of content from less familiar subfields using styles that students already recognize from more familiar areas, thereby enhancing accessibility and comprehension. To pursue this aim, we compiled text corpora from multiple subfields across both disciplines. We compared their meta-languages using a combination of supervised (Neural Network) and unsupervised (clustering) learning methods. Specifically, we applied several clustering algorithms—K-means, Partitioning around Medoids (PAM), Density-Based Clustering, and Gaussian Mixture Models—to analyze inter-discipline similarities. To validate the resulting classifications, we used XLNet, a deep learning model known for its sensitivity to linguistic patterns. The model achieved an accuracy of 78% and an F1-score of 0.944. Our findings show that subfields can be meaningfully grouped based on meta-language similarity, offering valuable insights for tailoring educational content more effectively. To further verify these groupings and explore their pedagogical relevance, we conducted both quantitative and qualitative research involving student participation. This paper presents findings from the qualitative component—namely, a content analysis of semi-structured interviews with software engineering students and lecturers. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)
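One of the unsupervised steps mentioned above, Gaussian Mixture Model clustering of subfield text corpora represented as term vectors, can be sketched with scikit-learn. The tiny corpus, vectorization, and number of components are placeholders; the other clustering algorithms and the XLNet validation are not shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.mixture import GaussianMixture

# Hypothetical snippets standing in for subfield corpora.
corpus = [
    "Let G be a group and H a subgroup; we define the quotient structure.",
    "A ring homomorphism preserves addition and multiplication by definition.",
    "The algorithm runs in O(n log n) time using a balanced search tree.",
    "We prove the loop invariant holds before and after each iteration.",
]

X = TfidfVectorizer().fit_transform(corpus).toarray()   # GMM requires a dense matrix

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(X)
print("meta-language cluster per snippet:", gmm.predict(X))
```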

16 pages, 2441 KB  
Article
Federated Hybrid Graph Attention Network with Two-Step Optimization for Electricity Consumption Forecasting
by Hao Yang, Xinwu Ji, Qingchan Liu, Lukun Zeng, Yuan Ai and Hang Dai
Energies 2025, 18(17), 4465; https://doi.org/10.3390/en18174465 - 22 Aug 2025
Abstract
Electricity demand forecasting is essential for smart grid management, yet it presents challenges due to the dynamic nature of consumption trends and regional variability in usage patterns. While federated learning (FL) offers a privacy-preserving solution for handling sensitive, region-specific data, traditional FL approaches struggle when local datasets are limited, often leading models to overfit noisy peak fluctuations. Additionally, many regions exhibit stable, periodic consumption behaviors, further complicating the need for a global model that can effectively capture diverse patterns without overfitting. To address these issues, we propose Federated Hybrid Graph Attention Network with Two-step Optimization for Electricity Consumption Forecasting (FedHMGAT), a hybrid modeling framework designed to balance periodic trends and numerical variations. Specifically, FedHMGAT leverages a numerical structure graph with a Gaussian encoder to model peak fluctuations as dynamic covariance features, mitigating noise-driven overfitting, while a multi-scale attention mechanism captures periodic consumption patterns through hybrid feature representation. These feature components are then fused to produce robust predictions. To enhance global model aggregation, FedHMGAT employs a two-step parameter aggregation strategy: first, a regularization term ensures parameter similarity across local models during training, and second, adaptive dynamic fusion at the server tailors aggregation weights to regional data characteristics, preventing feature dilution. Experimental results verify that FedHMGAT outperforms conventional FL methods, offering a scalable and privacy-aware solution for electricity demand forecasting. Full article
(This article belongs to the Special Issue AI, Big Data, and IoT for Smart Grids and Electric Vehicles)

22 pages, 1805 KB  
Article
Fault Diagnosis of Wind Turbine Pitch Bearings Based on Online Soft-Label Meta-Learning and Gaussian Prototype Network
by Lianghong Wang, Zhongzhuang Bai, Hongxiang Li, Panpan Yang, Jie Tao, Xuemei Zou, Jinliang Zhao and Chunwei Wang
Energies 2025, 18(16), 4437; https://doi.org/10.3390/en18164437 - 20 Aug 2025
Abstract
Meta-learning has demonstrated significant advantages in small-sample tasks and has attracted considerable attention in wind turbine fault diagnosis. However, due to extreme operating conditions and equipment aging, the monitoring data of wind turbines often contain false alarms or missed detections. This results in inaccurate fault sample labeling. In meta-learning, these erroneous labels not only fail to help models quickly adapt to new meta-test tasks, but they also interfere with learning for new tasks, which leads to “negative transfer” phenomena. To address this, the paper proposes a novel method called Online Soft-Labeled Meta-learning with Gaussian Prototype Networks (SL-GPN). During training, the method dynamically aggregates feature similarities across multiple tasks or samples to form online soft labels. These labels guide the model training process and effectively address the challenges of small-sample bearing fault diagnosis. Experimental tests were carried out on small-sample data under various operating conditions and with erroneous labels. The results show that the proposed method improves diagnostic accuracy in small-sample environments, reduces false alarm rates, and demonstrates excellent generalization performance. Full article
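The two central ingredients named above, class prototypes modeled as Gaussians and soft labels derived from feature similarity, can be sketched in a few lines of NumPy. The embeddings are random placeholders, the distance is a simple diagonal-Gaussian log-likelihood, and the episodic meta-learning structure of SL-GPN is not reproduced.

```python
import numpy as np

def gaussian_prototypes(features, labels):
    """Per-class mean and diagonal variance estimated from support embeddings."""
    protos = {}
    for c in np.unique(labels):
        f = features[labels == c]
        protos[c] = (f.mean(axis=0), f.var(axis=0) + 1e-6)
    return protos

def soft_labels(features, protos, temperature=1.0):
    """Soft labels from Gaussian log-likelihood similarity to each class prototype."""
    classes = sorted(protos)
    logits = np.stack([
        -0.5 * (((features - mu) ** 2) / var + np.log(var)).sum(axis=1)
        for mu, var in (protos[c] for c in classes)
    ], axis=1) / temperature
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability before softmax
    p = np.exp(logits)
    return classes, p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
support = rng.normal(0, 1, (20, 8)) + np.repeat([[0.0], [2.0]], 10, axis=0)  # two fault classes
support_y = np.repeat([0, 1], 10)
query = rng.normal(0, 1, (4, 8)) + 2.0                                        # unlabeled samples

protos = gaussian_prototypes(support, support_y)
classes, soft = soft_labels(query, protos)
print("classes:", classes)
print("soft labels:\n", np.round(soft, 3))
```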

19 pages, 9171 KB  
Article
Long-Term Cotton Node Count Prediction Using Feature Selection, Data Augmentation, and Multivariate Time-Series Forecasting
by Vaishnavi Thesma, Glen C. Rains and Javad Mohammadpour Velni
Appl. Sci. 2025, 15(16), 9159; https://doi.org/10.3390/app15169159 - 20 Aug 2025
Abstract
In this paper, we present an approach to long-term cotton node count prediction using feature selection, data augmentation, and forecasting with a multivariate long short-term memory (LSTM) model. Specifically, we used in situ measurement data collected from a cotton research field in Tifton, GA, USA, to select the most important input measurements and to enable random data generation from a Gaussian distribution to increase the size of our dataset. We concatenated the generated data to create longer, usable time series and trained a multivariate LSTM model to predict average cotton node count. Our model’s prediction results on both the training and testing data had a low RMSE of less than 3, a low MAE of less than 2.5, and a high R2 score of at least 0.85. Our model also showed promise in accurately forecasting cotton node count in subsequent seasons via transfer learning. In particular, our transfer-learned model maintained a low RMSE of less than 7.5 and an MAE of less than 3.5, despite the subsequent season data having a shorter temporal scale. Moreover, we validated our results by performing hypothesis testing against a similar time-series forecasting model, namely, the gated recurrent unit (GRU) model. Full article
(This article belongs to the Special Issue Deep Learning and Data Mining: Latest Advances and Applications)
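The Gaussian augmentation step, drawing synthetic measurements from per-feature normal distributions fitted to the real data, can be sketched as follows. The feature names, sample counts, and distribution parameters are placeholders, not the Tifton field data, and the LSTM training itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical in-situ measurements: rows = weekly observations, columns = selected features
# (e.g., plant height, air temperature, rainfall after feature selection).
real = np.column_stack([
    rng.normal(60, 5, 12),    # plant height (cm)
    rng.normal(28, 2, 12),    # air temperature (deg C)
    rng.normal(30, 10, 12),   # rainfall (mm)
])

# Fit a Gaussian per feature and draw synthetic rows to enlarge the training set.
mu, sigma = real.mean(axis=0), real.std(axis=0)
synthetic = rng.normal(mu, sigma, size=(36, real.shape[1]))

augmented = np.vstack([real, synthetic])      # longer usable series for LSTM training
print("real:", real.shape, "augmented:", augmented.shape)
```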
