
Search Results (7,055)

Search Parameters:
Keywords = different representations

17 pages, 7446 KB  
Article
Seasonal Cycle of the Total Ozone Content over Southern High Latitudes in the CCM SOCOLv3
by Anastasia Imanova, Tatiana Egorova, Vladimir Zubov, Andrey Mironov, Alexander Polyakov, Georgiy Nerobelov and Eugene Rozanov
Atmosphere 2025, 16(10), 1172; https://doi.org/10.3390/atmos16101172 - 9 Oct 2025
Abstract
The severe ozone depletion over the Southern polar region, known as the “ozone hole,” is a stark example of global ozone depletion caused by human-made chemicals. This has implications for climate change and increased harmful surface solar UV. Several Chemistry–Climate models (CCMs) tend to underestimate total column ozone (TCO) against satellite measurements over the Southern polar region. This underestimation can reach up to 50% in monthly mean zonally averaged biases during cold seasons. The most significant discrepancies were found in the CCM SOlar Climate Ozone Links version 3 (SOCOLv3). We use SOCOLv3 to study the sensitivity of Antarctic TCO to three key factors: (1) stratospheric heterogeneous reaction efficiency, (2) meridional flux intensity into polar regions from sub-grid scale mixing, and (3) photodissociation rate calculation accuracy. We compared the model results with satellite data from Infrared Fourier Spectrometer-2 (IKFS-2), Microwave Limb Sounder (MLS), and Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). The most effective processes for improving polar ozone simulation are photolysis and horizontal mixing. Increasing horizontal mixing improves the simulated TCO seasonal cycle but negatively impacts CH₄ and N₂O distributions. Using the Cloud-J v.8.0 photolysis module has improved photolysis rate calculations and the seasonal ozone cycle representation over the Southern polar region. This paper outlines how different processes impact chemistry–climate model performance in the southern polar stratosphere, with potential implications for future advancements.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)

18 pages, 2243 KB  
Article
Small-Micro Park Network Reconfiguration for Enhancing Grid Connection Flexibility
by Fei Liu, Zhenguo Gao, Zikai Li, Dezhong Li, Xueshan Bao and Chuanliang Xiao
Processes 2025, 13(10), 3202; https://doi.org/10.3390/pr13103202 - 9 Oct 2025
Abstract
With the integration of a large number of flexible distributed resources, microgrids have become an important form for supporting the coordinated operation of power sources, grids, loads, and energy storage. The flexibility provided by the point of common coupling (PCC) is also a crucial regulating resource in power systems. However, due to the complex network constraints within microgrids, such as voltage security and branch capacity limitations, the flexibility of distributed resources cannot be fully reflected at the PCC. Moreover, the flexibility that can be provided externally by different network reconfiguration strategies shows significant differences. Therefore, this paper focuses on optimizing reconfiguration strategies to enhance grid-connected flexibility. Firstly, the representation methods of grid-connected power flexibility and voltage regulation flexibility based on aggregation are introduced. Next, a two-stage robust optimization model aimed at maximizing grid-connected power flexibility is constructed, which comprehensively considers the aggregation of distributed resource flexibility and reconfiguration constraints. In the first stage of the model, the topology of the small-micro parks is optimized, and the maximum flexibility of all distributed resources is aggregated at the PCC. In the second stage, the feasibility of the PCC flexible operation range obtained in the first stage is verified. Subsequently, based on strong duality theory and using the column-and-constraint generation algorithm, the model is solved efficiently. Case studies show that the proposed method can fully exploit the flexibility of distributed resources through reconfiguration, thereby significantly enhancing the power flexibility and voltage support capability of the small-micro park network at the PCC.
(This article belongs to the Section Energy Systems)
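
For intuition only, the sketch below shows the naive interval aggregation that motivates the abstract's PCC flexibility band; the paper's actual method is a two-stage robust optimization with full voltage and branch constraints, none of which is reproduced here, and all numbers are invented.

```python
# Toy sketch of aggregating distributed-resource flexibility at the point of
# common coupling (PCC). Device limits and the PCC capacity are hypothetical
# illustration values, not the paper's model.

def aggregate_pcc_flexibility(devices, pcc_capacity_kw):
    """Sum per-device adjustable ranges, then clip by the PCC limit.

    devices: list of (p_min_kw, p_max_kw) adjustable power ranges.
    Returns the (lower, upper) power band the park can offer at the PCC.
    """
    lower = sum(p_min for p_min, _ in devices)
    upper = sum(p_max for _, p_max in devices)
    # Internal network limits shrink the naive sum of ranges; here a single
    # PCC capacity bound stands in for those constraints.
    return max(lower, -pcc_capacity_kw), min(upper, pcc_capacity_kw)

devices = [(-50, 80), (0, 120), (-30, 30)]  # e.g., storage, PV, EV charging
print(aggregate_pcc_flexibility(devices, pcc_capacity_kw=150))  # (-80, 150)
```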

19 pages, 1648 KB  
Article
Modality-Enhanced Multimodal Integrated Fusion Attention Model for Sentiment Analysis
by Zhenwei Zhang, Wenyan Wu, Tao Yuan and Guang Feng
Appl. Sci. 2025, 15(19), 10825; https://doi.org/10.3390/app151910825 - 9 Oct 2025
Abstract
Multimodal sentiment analysis aims to utilize multisource information such as text, speech and vision to more comprehensively and accurately identify an individual’s emotional state. However, existing methods still face challenges in practical applications, including modality heterogeneity, insufficient expressive power of non-verbal modalities, and low fusion efficiency. To address these issues, this paper proposes a Modality-Enhanced Multimodal Integration Model (MEMMI). First, a modality enhancement module is designed to leverage the semantic guidance capability of the text modality, enhancing the feature representation of non-verbal modalities through a multihead attention mechanism and a dynamic routing strategy. Second, a gated fusion mechanism is introduced to selectively inject speech and visual information into the dominant text modality, enabling robust information completion and noise suppression. Finally, a combined attention fusion module is constructed to synchronously fuse information from all three modalities within a unified architecture, while a multiscale encoder is used to capture feature representations at different semantic levels. Experimental results on three benchmark datasets (CMU-MOSEI, CMU-MOSI, and CH-SIMS) demonstrate the superiority of the proposed model. On CMU-MOSI, it achieves an Acc-7 of 45.91, with binary accuracy/F1 of 82.86/84.60, MAE of 0.734, and Corr of 0.790, outperforming TFN and MulT by a large margin. On CMU-MOSEI, the model reaches an Acc-7 of 54.17, Acc-2/F1 of 83.69/86.02, MAE of 0.526, and Corr of 0.779, surpassing all baselines, including ALMT. On CH-SIMS, it further achieves 41.88, 66.52, and 77.68 in Acc-5/Acc-3/Acc-2, with F1 of 77.85, MAE of 0.450, and Corr of 0.594, establishing new state-of-the-art performance across all three datasets. Furthermore, ablation studies validate the effectiveness of each module in enhancing modality representation and fusion efficiency.
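
A minimal sketch of the gated-injection idea the abstract describes, assuming a PyTorch setting; the dimensions and the exact gating form are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GatedInjection(nn.Module):
    """Sketch of a gated fusion step: selectively inject a non-verbal
    modality (speech or vision) into the dominant text representation.
    Hypothetical layer, illustrating the mechanism rather than MEMMI itself."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, text, nonverbal):
        # A gate in [0, 1] decides, per feature, how much non-verbal signal
        # passes through; near-zero gates suppress noisy channels.
        g = self.gate(torch.cat([text, nonverbal], dim=-1))
        return text + g * nonverbal

fuse = GatedInjection(dim=256)
text, audio = torch.randn(8, 256), torch.randn(8, 256)
print(fuse(text, audio).shape)  # torch.Size([8, 256])
```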

16 pages, 5781 KB  
Article
Design of an Underwater Optical Communication System Based on RT-DETRv2
by Hexi Liang, Hang Li, Minqi Wu, Junchi Zhang, Wenzheng Ni, Baiyan Hu and Yong Ai
Photonics 2025, 12(10), 991; https://doi.org/10.3390/photonics12100991 - 8 Oct 2025
Abstract
Underwater wireless optical communication (UWOC) is a key technology in ocean resource development, and its link stability is often limited by the difficulty of optical alignment in complex underwater environments. In response to this difficulty, this study focuses on improving the Real-Time Detection Transformer v2 (RT-DETRv2) model. We improve the underwater light source detection model by collaboratively designing a lightweight backbone network and deformable convolution, constructing a cross-stage local attention mechanism to reduce the number of network parameters, and introducing geometrically adaptive convolution kernels that dynamically adjust the distribution of sampling points, enhance the representation of spot-deformation features, and improve positioning accuracy under optical interference. To verify the effectiveness of the model, we constructed an underwater light-emitting diode (LED) light-spot detection dataset containing 11,390 images, covering a transmission distance of 15–40 m, a ±45° deflection angle, and three light-intensity conditions (noon, evening, and late night). Experiments show that the improved model achieves an average precision at an intersection-over-union threshold of 0.50 (AP50) value of 97.4% on the test set, which is 12.7% higher than the benchmark model. The UWOC system built on the improved model achieves zero-bit-error-rate communication within a distance of 30 m after assisted alignment (an initial lateral offset angle of 0°–60°), and the bit-error rate remains stable in the 10⁻⁷–10⁻⁶ range at a distance of 40 m, three orders of magnitude lower than a traditional Remotely Operated Vehicle (ROV) underwater optical communication system (a bit-error rate of 10⁻⁶–10⁻³), verifying the strong adaptability of the improved model to complex underwater environments.
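
The "geometrically adaptive kernel" idea maps naturally onto deformable convolution, where a small conv predicts per-position sampling offsets. A minimal sketch using torchvision's DeformConv2d follows; channel sizes are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class GeomAdaptiveBlock(nn.Module):
    """Sketch: an offset-predicting conv feeds a deformable convolution, so
    the kernel's sampling points can follow light-spot deformation. This is
    a generic deformable-conv block, not RT-DETRv2's actual module."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        # 2 offsets (dx, dy) per kernel tap -> 2 * k * k offset channels.
        self.offset = nn.Conv2d(c_in, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(c_in, c_out, kernel_size=k, padding=k // 2)

    def forward(self, x):
        return self.deform(x, self.offset(x))

block = GeomAdaptiveBlock(64, 64)
print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```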

13 pages, 287 KB  
Article
TF-IDF-Based Classification of Uzbek Educational Texts
by Khabibulla Madatov, Sapura Sattarova and Jernej Vičič
Appl. Sci. 2025, 15(19), 10808; https://doi.org/10.3390/app151910808 - 8 Oct 2025
Abstract
This paper presents a baseline study on automatic Uzbek text classification. Uzbek is a morphologically rich and low-resource language, which makes reliable preprocessing and evaluation challenging. The approach integrates Term Frequency–Inverse Document Frequency (TF–IDF) representation with three conventional methods: linear regression (LR), k-Nearest Neighbors (k-NN), and cosine similarity (CS, implemented as a 1-NN retrieval model). The objective is to categorize school learning materials by grade level (grades 5–11) to support improved alignment between curricular texts and students’ intellectual development. A balanced dataset of Uzbek school textbooks across different subjects was constructed, preprocessed with standard NLP tools, and converted into TF–IDF vectors. Experimental results on the internal test set of 70 files show that LR achieved 92.9% accuracy (precision = 0.94, recall = 0.93, F1 = 0.93), while CS performed comparably with 91.4% accuracy (precision = 0.92, recall = 0.91, F1 = 0.92). In contrast, k-NN obtained only 28.6% accuracy, confirming its weakness in high-dimensional sparse feature spaces. External evaluation on seven Uzbek literary works further demonstrated that LR and CS yielded consistent and interpretable grade-level mappings, whereas k-NN results were unstable. Overall, the findings establish reliable baselines for Uzbek educational text classification and highlight the potential of extending beyond lexical overlap toward semantically richer models in future work.
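
A minimal sketch of the TF-IDF pipeline the abstract describes, assuming scikit-learn; the texts and grade labels are placeholders, "LR" is rendered here as LogisticRegression, and CS as 1-NN retrieval under cosine similarity (both assumptions about the authors' implementations).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder corpus: textbook excerpts labeled by grade level.
train_texts = ["matn besh sinf ...", "matn o'n bir sinf ..."]
train_grades = [5, 11]
test_texts = ["yangi matn ..."]

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_texts)
X_test = vec.transform(test_texts)

# "LR" classifier over TF-IDF vectors.
clf = LogisticRegression(max_iter=1000).fit(X_train, train_grades)
print("LR:", clf.predict(X_test))

# Cosine-similarity classifier: assign the grade of the most similar document.
sims = cosine_similarity(X_test, X_train)
print("CS:", [train_grades[i] for i in sims.argmax(axis=1)])
```
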
23 pages, 2173 KB  
Article
Prototype-Enhanced Few-Shot Relation Extraction Method Based on Cluster Loss Optimization
by Shenyi Qian, Bowen Fu, Chao Liu, Songhe Jin, Tong Sun, Zhen Chen, Daiyi Li, Yifan Sun, Yibing Chen and Yuheng Li
Symmetry 2025, 17(10), 1673; https://doi.org/10.3390/sym17101673 - 7 Oct 2025
Viewed by 63
Abstract
The purpose of few-shot relation extraction (RE) is to recognize the relationship between specific entity pairs in text when only a limited number of labeled samples are available. A few-shot RE method based on a prototype network, which constructs relation prototypes by relying on the support set to assign labels to query samples, inherently leverages the symmetry between support and query processing. Although these methods have achieved remarkable results, they still face challenges such as misjudging noisy samples or outliers and distinguishing semantically similar relations. To address these challenges, we propose a novel semantically enhanced prototype network, which integrates the semantic information of relations more effectively to promote more expressive representations of instances and relation prototypes, thereby improving few-shot RE performance. Firstly, we design a prompt encoder to uniformly process different prompt templates for instance and relation information, and then utilize the powerful semantic understanding and generation capabilities of large language models (LLMs) to obtain precise semantic representations of instances, their prototypes, and conceptual prototypes. Secondly, graph attention learning techniques are introduced to effectively extract relation-specific features between conceptual prototypes and isomorphic instances while maintaining structural symmetry. Meanwhile, a prototype-level contrastive learning strategy with bidirectional feature symmetry is proposed to predict query instances by integrating the interpretable features of conceptual prototypes and the intra-class shared features captured by instance prototypes. In addition, a clustering loss function is designed to guide the model to learn a discriminative metric space with improved relational symmetry, effectively improving the accuracy of the model’s relation recognition. Finally, experimental results on the FewRel 1.0 and FewRel 2.0 datasets show that the proposed approach delivers improved performance compared to existing advanced models on the few-shot RE task.
(This article belongs to the Section Computer)
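
For readers new to prototype networks, here is the basic support-to-prototype-to-query step that the paper builds on, as a PyTorch sketch; the paper's LLM prompt encoding, graph attention, and clustering loss are not reproduced.

```python
import torch
import torch.nn.functional as F

def prototype_classify(support_emb, support_labels, query_emb, n_way):
    """Minimal prototypical-network step: each relation prototype is the
    mean of its support-instance embeddings; queries take the label of the
    nearest prototype in the metric space."""
    protos = torch.stack([support_emb[support_labels == c].mean(dim=0)
                          for c in range(n_way)])
    dists = torch.cdist(query_emb, protos)  # smaller distance = more similar
    return dists.argmin(dim=1), F.log_softmax(-dists, dim=1)

support = torch.randn(5 * 3, 128)                 # 5-way, 3-shot embeddings
labels = torch.arange(5).repeat_interleave(3)
queries = torch.randn(10, 128)
pred, logp = prototype_classify(support, labels, queries, n_way=5)
print(pred.shape, logp.shape)  # torch.Size([10]) torch.Size([10, 5])
```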

0 pages, 3358 KB  
Article
Wave-Induced Loads and Fatigue Life of Small Vessels Under Complex Sea States
by Pasqualino Corigliano, Claudio Alacqua, Davide Crisafulli and Giulia Palomba
J. Mar. Sci. Eng. 2025, 13(10), 1920; https://doi.org/10.3390/jmse13101920 - 6 Oct 2025
Viewed by 217
Abstract
The Strait of Messina poses unique challenges for small vessels due to strong currents and complex wave conditions, which critically affect structural integrity and operational safety. This study proposes an integrated methodology that combines seakeeping analysis, a comparison with classification society rules, and fatigue life assessment within a unified and computationally efficient framework. A panel-based approach was used to compute vessel motions and vertical bending moments at different speeds and wave directions. Hydrodynamic loads derived from Response Amplitude Operators (RAOs) were compared with regulatory limits and applied to fatigue analysis. A further innovative aspect is the use of high-resolution bathymetric data from the Strait of Messina, enabling a realistic representation of local currents and sea states and providing a more accurate assessment than studies based on idealized conditions. The results show that forward speed amplifies bending moments, reducing safe wave heights from 2 m at rest to about 0.5 m at 16 knots. Fatigue analysis indicates that aluminum hulls are highly vulnerable to 2–3 m waves, while steel and titanium show no significant damage. The proposed workflow is transferable to other vessel types and supports safer design and operation. The case study of the Strait of Messina, the busiest and most challenging maritime corridor in Italy, confirms the validity and practical importance of the approach. By combining hydrodynamic and structural analyses into a single workflow, this study establishes the foundation for predictive maintenance and real-time structural health monitoring, with significant implications for navigation safety in complex sea environments.
(This article belongs to the Special Issue Advanced Studies in Marine Mechanical and Naval Engineering)
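
Fatigue-life assessment of this kind typically accumulates damage with Miner's rule over an S-N curve. A minimal sketch follows; the curve constants and stress histogram are invented illustration values, not the paper's RAO-derived loads or material data.

```python
import numpy as np

def miner_damage(stress_ranges_mpa, cycles, K=1e12, m=3.0):
    """Cumulative fatigue damage by Miner's rule with a one-slope S-N curve
    N = K * S**(-m). K and m here are hypothetical, not a class-rule curve."""
    N_allow = K * np.asarray(stress_ranges_mpa, dtype=float) ** (-m)
    return float(np.sum(np.asarray(cycles) / N_allow))

damage = miner_damage(stress_ranges_mpa=[40, 60, 80],
                      cycles=[2e5, 5e4, 1e4])
print(f"Miner damage index: {damage:.3f}  (>= 1 implies fatigue failure)")
```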

22 pages, 1556 KB  
Article
Explainable Instrument Classification: From MFCC Mean-Vector Models to CNNs on MFCC and Mel-Spectrograms with t-SNE and Grad-CAM Insights
by Tommaso Senatori, Daniela Nardone, Michele Lo Giudice and Alessandro Salvini
Information 2025, 16(10), 864; https://doi.org/10.3390/info16100864 - 5 Oct 2025
Viewed by 143
Abstract
This paper presents an automatic system for the classification of musical instruments from audio recordings. The project leverages deep learning (DL) techniques to achieve its objective, exploring three different classification approaches based on distinct input representations. The first method involves the extraction of Mel-Frequency Cepstral Coefficients (MFCCs) from the audio files, which are then fed into a two-dimensional convolutional neural network (Conv2D). The second approach makes use of mel-spectrogram images as input to a similar Conv2D architecture. The third approach employs conventional machine learning (ML) classifiers, including Logistic Regression, K-Nearest Neighbors, and Random Forest, trained on MFCC-derived feature vectors. To gain insight into the behavior of the DL model, explainability techniques were applied to the Conv2D model using mel-spectrograms, allowing for a better understanding of how the network interprets relevant features for classification. Additionally, t-distributed stochastic neighbor embedding (t-SNE) was employed on the MFCC vectors to visualize how instrument classes are organized in the feature space. One of the main challenges encountered was the class imbalance within the dataset, which was addressed by assigning class-specific weights during training. The results, in terms of classification accuracy, were very satisfactory across all approaches, with the convolutional models and Random Forest achieving around 97–98%, and Logistic Regression yielding slightly lower performance.
(This article belongs to the Special Issue Artificial Intelligence for Acoustics and Audio Signal Processing)
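
A minimal sketch of the MFCC mean-vector branch, assuming librosa and scikit-learn; file paths and labels are placeholders, and n_mfcc=13 is an assumption. The class_weight="balanced" option mirrors the class-specific weighting the abstract mentions.

```python
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mfcc_mean_vector(path, n_mfcc=13):
    """One fixed-length feature vector per recording: the time-average of
    its MFCC frames, as in the abstract's mean-vector approach."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)

# Placeholder file list; class weighting counters dataset imbalance.
paths, labels = ["violin_01.wav", "flute_01.wav"], ["violin", "flute"]
X = np.stack([mfcc_mean_vector(p) for p in paths])
clf = RandomForestClassifier(class_weight="balanced").fit(X, labels)
```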

20 pages, 59706 KB  
Article
Learning Hierarchically Consistent Disentanglement with Multi-Channel Augmentation for Public Security-Oriented Sketch Person Re-Identification
by Yu Ye, Zhihong Sun and Jun Chen
Sensors 2025, 25(19), 6155; https://doi.org/10.3390/s25196155 - 4 Oct 2025
Viewed by 272
Abstract
Sketch re-identification (Re-ID) aims to retrieve pedestrian photographs from a gallery dataset using a query sketch drawn by professionals, which is crucial for criminal investigations and missing person searches in the field of public security. The main challenge of this task lies in bridging the significant modality gap between sketches and photos while extracting discriminative modality-invariant features. However, information asymmetry between sketches and RGB photographs, particularly the differences in color information, severely interferes with cross-modal matching. To address this challenge, we propose a novel network architecture that integrates multi-channel augmentation with hierarchically consistent disentanglement learning. Specifically, a multi-channel augmentation module is developed to mitigate the interference of color bias in cross-modal matching. Furthermore, a modality-disentangled prototype (MDP) module is introduced to decompose pedestrian representations at the feature level into modality-invariant structural prototypes and modality-specific appearance prototypes. Additionally, a cross-layer decoupling consistency constraint is designed to ensure the semantic coherence of disentangled prototypes across different network layers and to improve the stability of the whole decoupling process. Extensive experimental results on two public datasets demonstrate the superiority of our proposed approach over state-of-the-art methods.
(This article belongs to the Special Issue Advances in Security for Emerging Intelligent Systems)
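
A minimal sketch of a color-suppressing channel augmentation of the kind the abstract's multi-channel module targets: replace an RGB photo with one of its channels (or its grayscale mean) replicated three times, so matching cannot rely on color absent from sketches. This is a generic augmentation pattern, an assumption rather than the paper's exact module.

```python
import torch

def random_channel_augment(img):
    """img: float tensor of shape (3, H, W). Returns a 3-channel image built
    from a single randomly chosen channel or the grayscale mean."""
    choice = torch.randint(0, 4, (1,)).item()
    if choice == 3:
        chan = img.mean(dim=0, keepdim=True)      # grayscale mean
    else:
        chan = img[choice:choice + 1]             # single R/G/B channel
    return chan.expand(3, -1, -1).clone()

aug = random_channel_augment(torch.rand(3, 256, 128))
print(aug.shape)  # torch.Size([3, 256, 128])
```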

20 pages, 4264 KB  
Article
Skeleton-Guided Diffusion for Font Generation
by Li Zhao, Shan Dong, Jiayi Liu, Xijin Zhang, Xiaojiao Gao and Xiaojun Wu
Electronics 2025, 14(19), 3932; https://doi.org/10.3390/electronics14193932 - 3 Oct 2025
Viewed by 134
Abstract
Generating non-standard fonts, such as running script (e.g., XingShu), poses significant challenges due to their high stroke continuity, structural flexibility, and stylistic diversity, which traditional component-based prior knowledge methods struggle to model effectively. While diffusion models excel at capturing continuous feature spaces and stroke variations through iterative denoising, they face critical limitations: (1) style leakage, where large stylistic differences lead to inconsistent outputs due to noise interference; (2) structural distortion, caused by the absence of explicit structural guidance, resulting in broken strokes or deformed glyphs; and (3) style confusion, where similar font styles are inadequately distinguished, producing ambiguous results. To address these issues, we propose a novel skeleton-guided diffusion model with three key innovations: (1) a skeleton-constrained style rendering module that enforces semantic alignment and balanced energy constraints to amplify critical skeletal features, mitigating style leakage and ensuring stylistic consistency; (2) a cross-scale skeleton preservation module that integrates multi-scale glyph skeleton information through cross-dimensional interactions, effectively modeling macro-level layouts and micro-level stroke details to prevent structural distortions; (3) a contrastive style refinement module that leverages skeleton decomposition and recombination strategies, coupled with contrastive learning on positive and negative samples, to establish robust style representations and disambiguate similar styles. Extensive experiments on diverse font datasets demonstrate that our approach significantly improves the generation quality, achieving superior style fidelity, structural integrity, and style differentiation compared to state-of-the-art diffusion-based font generation methods.
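
For intuition, the structural guidance signal for such a model can be a one-pixel-wide glyph skeleton. A minimal sketch using scikit-image follows; the thresholding and this specific skeletonization are assumptions, not the paper's conditioning pipeline.

```python
import numpy as np
from skimage.morphology import skeletonize

def glyph_skeleton(glyph_img, threshold=0.5):
    """glyph_img: 2D float array in [0, 1], ink = high values. Returns a
    boolean array that is True along the medial axis of the strokes."""
    binary = glyph_img > threshold
    return skeletonize(binary)

glyph = np.zeros((64, 64))
glyph[20:44, 28:36] = 1.0            # a fake vertical stroke for illustration
print(glyph_skeleton(glyph).sum())   # number of skeleton pixels
```
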
16 pages, 1129 KB  
Article
Building Sub-Saharan African PBPK Populations Reveals Critical Data Gaps: A Case Study on Aflatoxin B1
by Orphélie Lootens, Marthe De Boevre, Sarah De Saeger, Jan Van Bocxlaer and An Vermeulen
Toxins 2025, 17(10), 493; https://doi.org/10.3390/toxins17100493 - 3 Oct 2025
Viewed by 236
Abstract
Physiologically based pharmacokinetic (PBPK) models make it possible to simulate the behaviour of compounds in diverse physiological populations. However, the categorization of individuals into distinct populations raises questions regarding the classification criteria. In previous research, simulations of the pharmacokinetics of the mycotoxin aflatoxin B1 (AFB1) were performed in the black South African population using PBPK modeling. This study investigates the prevalence of clinical CYP450 phenotypes (CYP2B6, CYP2C9, CYP2C19, CYP2D6, CYP3A4/5) across Sub-Saharan Africa (SSA) to determine the feasibility of defining SSA as a single population. SSA was subdivided into Central, East, South and West Africa. The phenotype data were assigned to the different regions, and a fifth SSA group was composed of all regions’ weighted means. Available data from the literature covered only 7.30% of Central, 56.9% of East, 38.9% of South and 62.9% of West Africa, clearly indicating critical data gaps. A pairwise proportion test was performed between the regions on enzyme phenotype data. Where statistical significance was reached (p < 0.05), a Cohen’s d test was performed to determine the magnitude of the difference. Next, per-region populations were built using SimCYP, starting from the available SSA-based SouthAfrican_Population FW_Custom population, supplemented with the phenotype data from the literature. Simulations were performed using CYP probe substrates in all populations, and derived PK parameters (Cmax, Tmax, AUCss and CL) were plotted in bar charts. Significant differences between the African regions in CYP450 phenotype frequencies were shown for CYP2B6, CYP2C19 and CYP2D6. Limited regional data challenge the representation of SSA populations in these models. The scarce availability of in vivo data for SSA regions restricted the ability to fully validate the developed PBPK populations. However, observed literature data from specific SSA regions provided partial validation, indicating that SSA populations should ideally be modelled at a regional level rather than as a single entity. The findings, emerging from the initial AFB1-focused PBPK work, underscore the need for more extensive and region-specific data to enhance model accuracy and predictive value across SSA.
(This article belongs to the Special Issue Mycotoxins in Food and Feeds: Human Health and Animal Nutrition)
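
A minimal sketch of the regional comparison step, assuming statsmodels: a two-sample proportion z-test on phenotype frequencies plus an arcsine effect size (Cohen's h, a common effect-size choice for proportions; the abstract reports a Cohen's d test, so the measure below is an assumption). All counts are invented illustration values, not the study's data.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: phenotype carriers per region and sample sizes.
carriers = np.array([120, 95])
sampled = np.array([400, 420])

stat, p_value = proportions_ztest(carriers, sampled)

# Effect size for two proportions via the arcsine transform (Cohen's h).
p1, p2 = carriers / sampled
cohens_h = 2 * np.arcsin(np.sqrt(p1)) - 2 * np.arcsin(np.sqrt(p2))
print(f"z = {stat:.2f}, p = {p_value:.4f}, h = {cohens_h:.2f}")
```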

21 pages, 2248 KB  
Article
TSFNet: Temporal-Spatial Fusion Network for Hybrid Brain-Computer Interface
by Yan Zhang, Bo Yin and Xiaoyang Yuan
Sensors 2025, 25(19), 6111; https://doi.org/10.3390/s25196111 - 3 Oct 2025
Viewed by 273
Abstract
Unimodal brain–computer interfaces (BCIs) suffer from inherent limitations of relying on a single modality. While hybrid BCIs combining electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) offer complementary advantages, effectively integrating their spatiotemporal features remains a challenge due to inherent signal asynchrony. This study aims to develop a novel deep fusion network that achieves synergistic integration of EEG and fNIRS signals for improved classification performance across different tasks. We propose a novel Temporal-Spatial Fusion Network (TSFNet), which consists of two key sublayers: the EEG-fNIRS-guided Fusion (EFGF) layer and the Cross-Attention-based Feature Enhancement (CAFÉ) layer. The EFGF layer extracts temporal features from EEG and spatial features from fNIRS to generate a hybrid attention map, which is utilized to achieve more effective and complementary integration of spatiotemporal information. The CAFÉ layer enables bidirectional interaction between fNIRS and fusion features via a cross-attention mechanism, which enhances the fusion features and selectively filters informative fNIRS representations. Through the two sublayers, TSFNet achieves deep fusion of multimodal features. Finally, TSFNet is evaluated on motor imagery (MI), mental arithmetic (MA), and word generation (WG) classification tasks. Experimental results demonstrate that TSFNet achieves superior classification performance, with average accuracies of 70.18% for MI, 86.26% for MA, and 81.13% for WG, outperforming existing state-of-the-art multimodal algorithms. These findings suggest that TSFNet provides an effective solution for spatiotemporal feature fusion in hybrid BCIs, with potential applications in real-world BCI systems.
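
A minimal sketch of the bidirectional cross-attention pattern the CAFÉ layer is built on, in PyTorch; the layer sizes and the residual form are assumptions about the paper's design, not its implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionEnhance(nn.Module):
    """Sketch: fusion features query the fNIRS stream and fNIRS features
    query the fusion stream (query, key, value), with residual connections.
    A generic bidirectional cross-attention block, not TSFNet itself."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.f2x = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.x2f = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, fusion, fnirs):
        enhanced_fusion, _ = self.f2x(fusion, fnirs, fnirs)
        enhanced_fnirs, _ = self.x2f(fnirs, fusion, fusion)
        return fusion + enhanced_fusion, fnirs + enhanced_fnirs

layer = CrossAttentionEnhance(dim=64)
fusion, fnirs = torch.randn(2, 50, 64), torch.randn(2, 30, 64)
out_fusion, out_fnirs = layer(fusion, fnirs)
print(out_fusion.shape, out_fnirs.shape)  # (2, 50, 64) (2, 30, 64)
```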

15 pages, 472 KB  
Article
Body Mapping as Risk Factors for Non-Communicable Diseases in Ghana: Evidence from Ghana’s 2023 Nationwide Steps Survey
by Pascal Kingsley Mwin, Benjamin Demah Nuertey, Joana Ansong, Edmond Banafo Nartey, Leveana Gyimah, Philip Teg-Nefaah Tabong, Emmanuel Parbie Abbeyquaye, Priscilla Foriwaa Eshun, Yaw Ampem Amoako, Terence Totah, Frank John Lule, Sybil Sory Opoku Asiedu and Abraham Hodgson
Obesities 2025, 5(4), 71; https://doi.org/10.3390/obesities5040071 - 3 Oct 2025
Viewed by 193
Abstract
Non-communicable diseases (NCDs) are the leading global cause of death, causing over 43 million deaths in 2021, including 18 million premature deaths, and disproportionately affecting low- and middle-income countries. NCDs also incur significant economic losses, estimated at USD 7 trillion from 2011 to 2025, despite low prevention costs. This study evaluated three body-mapping indicators (body mass index (BMI), waist circumference, and waist-to-hip ratio) for predicting NCD risk, including hypertension, diabetes, and cardiovascular diseases, using data from a nationally representative survey in Ghana. The study sampled 5775 participants via multistage stratified sampling, ensuring proportional representation by region, urban/rural residency, age, and gender. Ethical approval and informed consent were obtained. Anthropometric and biochemical data, including height, weight, waist and hip circumferences, blood pressure, fasting glucose, and lipid profiles, were collected using standardized protocols. Data analysis was conducted with STATA 17.0, accounting for the complex survey design. Significant sex-based differences were observed: men were taller and lighter, while women had higher BMI and waist/hip circumferences. NCD prevalence increased with age, peaking at 60–69 years, and was higher in females. Lower education and marital status (widowed, divorced, separated) correlated with higher NCD prevalence. Obesity and high waist circumference strongly predicted NCD risk, but individual anthropometric measures lacked screening accuracy. Integrated screening and tailored interventions are recommended for improved NCD detection and management in resource-limited settings.
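
The three indicators are simple to compute; the sketch below derives them and flags elevated values using the widely cited WHO-style cut-offs (BMI >= 30; waist >= 102/88 cm and waist-to-hip ratio >= 0.90/0.85 for men/women), which are assumptions here rather than the study's exact screening criteria.

```python
def anthro_indicators(weight_kg, height_m, waist_cm, hip_cm, sex):
    """Compute the three body-mapping indicators and flag elevated values.
    sex: 'M' or 'F' (cut-offs differ by sex)."""
    bmi = weight_kg / height_m ** 2
    whr = waist_cm / hip_cm
    wc_cut, whr_cut = (102, 0.90) if sex == "M" else (88, 0.85)
    return {
        "bmi": round(bmi, 1),
        "obese": bmi >= 30,
        "high_waist": waist_cm >= wc_cut,
        "high_whr": whr >= whr_cut,
    }

print(anthro_indicators(82, 1.62, 95, 104, sex="F"))
# {'bmi': 31.2, 'obese': True, 'high_waist': True, 'high_whr': True}
```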

31 pages, 11924 KB  
Article
Enhanced 3D Turbulence Models Sensitivity Assessment Under Real Extreme Conditions: Case Study, Santa Catarina River, Mexico
by Mauricio De la Cruz-Ávila and Rosanna Bonasia
Hydrology 2025, 12(10), 260; https://doi.org/10.3390/hydrology12100260 - 2 Oct 2025
Viewed by 217
Abstract
This study compares enhanced turbulence models in a 3D simulation of a natural river channel under extreme hydrometeorological conditions. Using ANSYS Fluent 2024 R1 and the Volume of Fluid scheme, five RANS closures were evaluated: realizable k–ε, Renormalization-Group k–ε, Shear Stress Transport k–ω, Generalized k–ω, and the Baseline-Explicit Algebraic Reynolds Stress model. A segment of the Santa Catarina River in Monterrey, Mexico, defined the computational domain, which produced high-energy, non-repeatable real-world flow conditions for which hydrometric data were not yet available. Empirical validation was conducted using surface velocity estimations obtained through high-resolution video analysis. Systematic bias was minimized through mesh-independent validation (<1% error) and a benchmarked reference closure, ensuring a fair basis for inter-model comparison. All models were run on a validated polyhedral mesh with consistent boundary conditions, and performance was evaluated in terms of mean velocity, turbulent viscosity, strain rate, and vorticity. Mean velocity predictions matched the empirical value of 4.43 [m/s]. The Baseline model offered the highest overall fidelity in turbulent viscosity structure (up to 43 [kg/m·s]) and anisotropy representation. Simulation runtimes ranged from 10 to 16 h, a computational cost that increases with model complexity but is justified by the improved representation of flow anisotropy. Results show that all models yielded similar mean flow predictions within a narrow error margin. However, they differed notably in resolving low-velocity zones, turbulence intensity, and anisotropy within a purely hydrodynamic framework that does not include sediment transport.
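
As a post-processing illustration, planar vorticity and strain-rate magnitude can be derived from a gridded velocity slice with finite differences; the sketch below is generic NumPy (rows assumed to index y), not Fluent's own field output.

```python
import numpy as np

def vorticity_strain_2d(u, v, dx, dy):
    """u, v: 2D velocity components on a regular grid (rows = y, cols = x).
    Returns planar vorticity omega_z and strain-rate magnitude [1/s]."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    vorticity = dv_dx - du_dy
    # |S| = sqrt(2 S_ij S_ij) for the 2D symmetric strain-rate tensor.
    strain = np.sqrt(2 * du_dx**2 + 2 * dv_dy**2 + (du_dy + dv_dx)**2)
    return vorticity, strain

u, v = np.random.rand(50, 80), np.random.rand(50, 80)
omega, s = vorticity_strain_2d(u, v, dx=0.5, dy=0.5)
print(omega.shape, s.shape)  # (50, 80) (50, 80)
```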

15 pages, 2373 KB  
Article
LLM-Empowered Kolmogorov-Arnold Frequency Learning for Time Series Forecasting in Power Systems
by Zheng Yang, Yang Yu, Shanshan Lin and Yue Zhang
Mathematics 2025, 13(19), 3149; https://doi.org/10.3390/math13193149 - 2 Oct 2025
Viewed by 203
Abstract
With the rapid evolution of artificial intelligence technologies in power systems, data-driven time-series forecasting has become instrumental in enhancing the stability and reliability of power systems, allowing operators to anticipate demand fluctuations and optimize energy distribution. Despite the notable progress made by current methods, they are still hindered by two major limitations: most existing models are relatively small in architecture, failing to fully leverage the potential of large-scale models, and they are based on fixed nonlinear mapping functions that cannot adequately capture complex patterns, leading to information loss. To this end, an LLM-empowered Kolmogorov–Arnold frequency learning (LKFL) method is proposed for time-series forecasting in power systems, consisting of LLM-based prompt representation learning, KAN-based frequency representation learning, and entropy-oriented cross-modal fusion. Specifically, LKFL first transforms multivariate time-series data into text prompts and leverages a pre-trained LLM to extract semantically rich prompt representations. It then applies the Fast Fourier Transform to convert the time-series data into the frequency domain and employs Kolmogorov–Arnold networks (KANs) to capture multi-scale periodic structures and complex frequency characteristics. Finally, LKFL integrates the prompt and frequency representations through an entropy-oriented cross-modal fusion strategy, which minimizes the semantic gap between modalities and ensures full integration of complementary information. This comprehensive approach enables LKFL to achieve superior forecasting performance in power systems. Extensive evaluations on five benchmarks verify that LKFL sets a new standard for time-series forecasting in power systems compared with baseline methods.
(This article belongs to the Special Issue Artificial Intelligence and Data Science, 2nd Edition)
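
A minimal sketch of the frequency-domain step: FFT a load series and keep its dominant periodic components, the kind of representation a KAN branch would consume. The top_k summary and the synthetic daily-cycle series are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

def frequency_features(series, top_k=4):
    """Return (period, amplitude) pairs for the strongest spectral peaks of
    a 1D series, sampled at unit intervals."""
    spectrum = np.fft.rfft(series - series.mean())
    freqs = np.fft.rfftfreq(len(series), d=1.0)  # cycles per sample step
    amps = np.abs(spectrum)
    idx = amps.argsort()[::-1][:top_k]
    return [(1.0 / freqs[i], amps[i]) for i in idx if freqs[i] > 0]

t = np.arange(24 * 14)  # two weeks of hypothetical hourly load
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + np.random.randn(t.size)
for period, amp in frequency_features(load):
    print(f"period ~ {period:5.1f} h, amplitude {amp:7.1f}")
```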