Search Results (4,032)

Search Parameters:
Keywords = sparse modelling

43 pages, 805 KB  
Article
Enhanced Deep Reinforcement Learning for Robustness Falsification of Partially Observable Cyber-Physical Systems
by Yangwei Xing, Ting Shu, Xuesong Yin and Jinsong Xia
Symmetry 2026, 18(2), 304; https://doi.org/10.3390/sym18020304 (registering DOI) - 7 Feb 2026
Abstract
Robustness falsification is a critical verification task for ensuring the safety of cyber-physical systems (CPS). Under partially observable conditions, where internal states are hidden and only input–output data is accessible, existing deep reinforcement learning (DRL) approaches for CPS robustness falsification face two key limitations: inadequate temporal modeling due to unidirectional network architectures, and sparse reward signals that impede efficient exploration. These limitations severely undermine the efficacy of DRL in black-box falsification, leading to low success rates and high computational costs. This study addresses these limitations by proposing DRL-BiT-MPR, a novel framework whose core innovation is the synergistic integration of a bidirectional temporal network with a multi-granularity reward function. Specifically, the bidirectional temporal network captures bidirectional temporal dependencies, remedies inadequate temporal modeling, and complements unobservable state information. The multi-granularity reward function includes fine-grained, medium-grained and coarse-grained layers, corresponding to single-step local feedback, phased progress feedback, and global result feedback, respectively, providing multi-time-scale incentives to resolve reward sparsity. Experiments are conducted on three benchmark CPS models: the continuous CARS model, the hybrid discrete-continuous AT model, and the controller-based PTC model. Results show that DRL-BiT-MPR increases the falsification success rate by an average of 39.6% compared to baseline methods and reduces the number of simulations by more than 50.2%. The framework’s robustness is further validated through theoretical analysis of convergence and soundness properties, along with systematic parameter sensitivity studies. Full article
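The layered reward described in this abstract pairs single-step, phased, and global feedback. A minimal sketch of how such a multi-granularity reward could be assembled is shown below; the weighting scheme, the use of the robustness value as the per-step signal, and all parameter names are illustrative assumptions, not the DRL-BiT-MPR implementation.

```python
import numpy as np

def multi_granularity_reward(robustness_prev, robustness_curr,
                             phase_progress, episode_done, falsified,
                             w_fine=1.0, w_mid=0.5, w_coarse=10.0):
    """Combine single-step, phased, and global feedback into one scalar.

    Weights and reward shapes are illustrative assumptions, not the
    reward function used in the paper.
    """
    # Fine-grained: local improvement in the robustness value per step
    # (lower robustness means closer to a falsifying trace).
    r_fine = robustness_prev - robustness_curr

    # Medium-grained: phased progress feedback, e.g. the fraction of the
    # specification horizon already driven below a threshold.
    r_mid = phase_progress

    # Coarse-grained: global result feedback, granted only at episode end.
    r_coarse = float(falsified) if episode_done else 0.0

    return w_fine * r_fine + w_mid * r_mid + w_coarse * r_coarse

# r = multi_granularity_reward(2.3, 1.8, phase_progress=0.4,
#                              episode_done=False, falsified=False)
```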
17 pages, 1497 KB  
Article
SPARTA: Sparse Parallel Architecture for Real-Time Threat Analysis for Lightweight Edge Network Defense
by Shi Li, Xiyun Mi, Lin Zhang and Ye Lu
Future Internet 2026, 18(2), 88; https://doi.org/10.3390/fi18020088 - 6 Feb 2026
Abstract
AI-driven network security relies increasingly on Large Language Models (LLMs) to detect sophisticated threats; however, their deployment on resource-constrained edge devices is severely hindered by immense parameter scales. While unstructured pruning offers a theoretical reduction in model size, commodity Graphics Processing Unit (GPU) architectures fail to efficiently leverage element-wise sparsity due to the mismatch between fine-grained pruning patterns and the coarse-grained parallelism of Tensor Cores, leading to latency bottlenecks that compromise real-time analysis of high-volume security telemetry. To bridge this gap, we propose SPARTA (Sparse Parallel Architecture for Real-Time Threat Analysis), an algorithm–architecture co-design framework. Specifically, we integrate a hardware-based address remapping interface to enable flexible row-offset access. This mechanism facilitates a novel graph-based column vector merging strategy that aligns sparse data with Tensor Core parallelism, complemented by a pipelined execution scheme to mask decoding latencies. Evaluations on Llama2-7B and Llama2-13B benchmarks demonstrate that SPARTA achieves an average speedup of 2.35× compared to Flash-LLM, with peak speedups reaching 5.05×. These findings indicate that hardware-aware microarchitectural adaptations can effectively mitigate the penalties of unstructured sparsity, providing a viable pathway for efficient deployment in resource-constrained edge security. Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
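As a rough illustration of the mismatch the abstract describes, the toy sketch below compacts an element-wise-sparse (unstructured-pruned) weight matrix into dense column tiles plus a row-index map, the general kind of remapping that lets coarse-grained tile computation consume fine-grained sparsity. The format and helper functions are assumptions for illustration only; SPARTA's hardware address remapping and graph-based column merging are not modelled here.

```python
import numpy as np

def pack_columns(w, tile=16):
    """Compact an element-wise-sparse weight matrix column by column into
    dense value tiles plus a row-index map (padded with -1).

    A toy stand-in for aligning unstructured sparsity with tile compute,
    not SPARTA's on-GPU format.
    """
    values, row_idx = [], []
    for j in range(w.shape[1]):
        rows = np.flatnonzero(w[:, j])
        pad = (-rows.size) % tile                       # keep tiles dense
        values.append(np.concatenate([w[rows, j], np.zeros(pad)]))
        row_idx.append(np.concatenate([rows, np.full(pad, -1)]))
    return values, row_idx

def packed_matvec(values, row_idx, x, n_rows):
    """y = W @ x evaluated from the packed representation."""
    y = np.zeros(n_rows)
    for j, (vals, rows) in enumerate(zip(values, row_idx)):
        keep = rows >= 0                                # drop padding slots
        y[rows[keep].astype(int)] += vals[keep] * x[j]
    return y

w = np.random.randn(64, 64) * (np.random.rand(64, 64) < 0.1)   # ~90% pruned
vals, idx = pack_columns(w)
x = np.random.randn(64)
print(np.allclose(packed_matvec(vals, idx, x, 64), w @ x))     # True
```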
27 pages, 18990 KB  
Article
YOLO11s-UAV: An Advanced Algorithm for Small Object Detection in UAV Aerial Imagery
by Qi Mi, Jianshu Chao, Anqi Chen, Kaiyuan Zhang and Jiahua Lai
J. Imaging 2026, 12(2), 69; https://doi.org/10.3390/jimaging12020069 - 6 Feb 2026
Abstract
Unmanned aerial vehicles (UAVs) are now widely used in various applications, including agriculture, urban traffic management, and search and rescue operations. However, several challenges arise, including the small size of objects occupying only a sparse number of pixels in images, complex backgrounds in aerial footage, and limited computational resources onboard. To address these issues, this paper proposes an improved UAV-based small object detection algorithm, YOLO11s-UAV, specifically designed for aerial imagery. Firstly, we introduce a novel FPN, called Content-Aware Reassembly and Interaction Feature Pyramid Network (CARIFPN), which significantly enhances small object feature detection while reducing redundant network structures. Secondly, we apply a new downsampling convolution for small object feature extraction, called Space-to-Depth for Dilation-wise Residual Convolution (S2DResConv), in the model’s backbone. This module effectively eliminates information loss caused by pooling operations and facilitates the capture of multi-scale context. Finally, we integrate a simple, parameter-free attention module (SimAM) with C3k2 to form Flexible SimAM (FlexSimAM), which is applied throughout the entire model. This improved module not only reduces the model’s complexity but also enables efficient enhancement of small object features in complex scenarios. Experimental results demonstrate that on the VisDrone-DET2019 dataset, our model improves mAP@0.5 by 7.8% on the validation set (reaching 46.0%) and by 5.9% on the test set (increasing to 37.3%) compared to the baseline YOLO11s, while reducing model parameters by 55.3%. Similarly, it achieves a 7.2% improvement on the TinyPerson dataset and a 3.0% increase on UAVDT-DET. Deployment on the NVIDIA Jetson Orin NX SUPER platform shows that our model achieves 33 FPS, which is 21.4% lower than YOLO11s, confirming its feasibility for real-time onboard UAV applications. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
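The S2DResConv module mentioned above builds on a space-to-depth rearrangement, which downsamples without the information loss of pooling. A minimal NumPy version of the generic operation follows; the layout and block size are illustrative, and this is not the module itself.

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange spatial blocks into channels: (H, W, C) -> (H/b, W/b, C*b*b)."""
    h, w, c = x.shape
    assert h % block == 0 and w % block == 0, "H and W must be divisible by block"
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)            # (H/b, W/b, b, b, C)
    return x.reshape(h // block, w // block, block * block * c)

# Example: a 640x640 RGB frame becomes 320x320 with 12 channels,
# halving resolution without discarding pixels the way pooling does.
frame = np.random.rand(640, 640, 3)
print(space_to_depth(frame).shape)            # (320, 320, 12)
```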
21 pages, 9252 KB  
Article
Intelligent Interpolation of OBN Multi-Component Seismic Data Using a Frequency-Domain Residual-Attention U-Net
by Jiawei Zhang and Pengfei Yu
J. Mar. Sci. Eng. 2026, 14(3), 317; https://doi.org/10.3390/jmse14030317 - 6 Feb 2026
Abstract
In modern marine seismic exploration, ocean bottom node (OBN) acquisition systems are increasingly valued for their flexibility in deep-water complex structural surveys. However, the high operational costs associated with OBN systems often lead to spatially sparse sampling, which adversely affects the fidelity of wavefield reconstruction. To overcome these limitations, hybrid deep learning frameworks that integrate physics-driven and data-driven approaches show significant potential for interpolating OBN four-component (4C) seismic data. The proposed frequency-domain residual-attention U-Net (ResAtt-Unet) architecture systematically exploits the inherent physical correlations among 4C data to improve interpolation performance. Specifically, an innovative dual-branch dual-channel network topology is designed to process OBN 4C data by grouping them into complementary P–Z (hydrophone–vertical geophone) and X–Y (horizontal geophone) pairs. A synchronized joint training strategy is employed to optimize parameters across both branches. Comprehensive evaluations demonstrate that the ResAtt-Unet achieves superior performance in component-wise interpolation, particularly in preserving signal fidelity and maintaining frequency-domain characteristics across all seismic components. Future work should focus on expanding the training dataset to include diverse geological scenarios and incorporating domain-specific physical constraints to improve model generalizability. These advancements will support robust seismic interpretation in challenging ocean-bottom environments characterized by complex velocity variations and irregular illumination. Full article
(This article belongs to the Special Issue Modeling and Waveform Inversion of Marine Seismic Data)
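A minimal sketch of the branch-input grouping described above: a 4-component gather is moved to the frequency domain and split into P–Z and X–Y pairs. The component ordering and array shapes are assumptions for illustration; the dual-branch ResAtt-Unet itself is not reproduced.

```python
import numpy as np

def group_components_freq(gather):
    """Group an OBN 4C gather into P-Z and X-Y branch inputs in the
    frequency domain. `gather` has shape (traces, samples, 4), assumed
    ordered as (P, Z, X, Y) purely for illustration.
    """
    spec = np.fft.rfft(gather, axis=1)         # per-trace spectra
    pz_branch = spec[..., [0, 1]]              # hydrophone + vertical geophone
    xy_branch = spec[..., [2, 3]]              # horizontal geophones
    return pz_branch, xy_branch

# Example: 64 traces x 1024 samples x 4 components -> two 2-channel inputs.
pz, xy = group_components_freq(np.random.randn(64, 1024, 4))
print(pz.shape, xy.shape)                      # (64, 513, 2) (64, 513, 2)
```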
23 pages, 31570 KB  
Article
Two-Center Repulsive Coulomb System in a Constant Magnetic Field
by Miguel E. Gómez Quintanar and Adrian M. Escobar-Ruiz
Atoms 2026, 14(2), 11; https://doi.org/10.3390/atoms14020011 - 5 Feb 2026
Abstract
We study the planar repulsive two-center Coulomb system in the presence of a uniform magnetic field perpendicular to the plane, taking the inter-center separation a and the magnetic field strength B as independent control parameters. The field-free system (B=0) is Liouville integrable and the motion is unbounded. The magnetic confinement introduces nonlinear coupling that breaks integrability and gives rise to chaotic bounded dynamics. Using Poincaré sections and maximal Lyapunov exponents, we characterize the transition from regular motion at aB=0 to mixed regular–chaotic dynamics for aB≠0. To probe the recoverability of the dynamics, we apply sparse regression techniques to numerical trajectories and assess their ability to capture the equations of motion across mixed dynamical regimes. Our results clarify how magnetic confinement competes with two-center repulsive interactions in governing the emergence of chaos and delineate fundamental limitations of data-driven model discovery in nonintegrable Hamiltonian systems. We further identify an organizing mechanism whereby the repulsive two-center system exhibits locally one-center-like dynamics in the absence of any static confining barrier. Full article
(This article belongs to the Section Atomic, Molecular and Nuclear Spectroscopy and Collisions)
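The data-driven model discovery step described above is commonly implemented as sparse regression over a library of candidate terms. A small sequentially thresholded least-squares sketch on a toy damped-oscillator trajectory is given below; the library, threshold, and example system are illustrative and not taken from the paper.

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares over a candidate-function
    library, the standard sparse-regression recipe for recovering
    equations of motion from trajectories."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k], rcond=None)[0]
    return xi

# Toy usage: recover a damped-oscillator law a ≈ -x - 0.1 v from samples.
t = np.linspace(0, 20, 2000)
x = np.exp(-0.05 * t) * np.cos(t)
v = np.gradient(x, t)
a = np.gradient(v, t)
library = np.column_stack([x, v, x * v, x**2, v**2])    # candidate terms
coeffs = stlsq(library, a[:, None], threshold=0.05)
print(coeffs.ravel())                                   # ≈ [-1, -0.1, 0, 0, 0]
```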
27 pages, 6439 KB  
Article
Contrastive–Transfer-Synergized Dual-Stream Transformer for Hyperspectral Anomaly Detection
by Lei Deng, Jiaju Ying, Qianghui Wang, Yue Cheng and Bing Zhou
Remote Sens. 2026, 18(3), 516; https://doi.org/10.3390/rs18030516 - 5 Feb 2026
Abstract
Hyperspectral anomaly detection (HAD) aims to identify pixels that significantly differ from the background without prior knowledge. While deep learning-based reconstruction methods have shown promise, they often suffer from limited feature representation, inefficient training cycles, and sensitivity to imbalanced data distributions. To address these challenges, this paper proposes a novel contrastive–transfer-synergized dual-stream transformer for hyperspectral anomaly detection (CTDST-HAD). The framework integrates contrastive learning and transfer learning within a dual-stream architecture, comprising a spatial stream and a spectral stream, which are pre-trained separately and synergistically fine-tuned. Specifically, the spatial stream leverages general visual and hyperspectral-view datasets with adaptive elastic weight consolidation (EWC) to mitigate catastrophic forgetting. The spectral stream employs a variational autoencoder (VAE) enhanced with the RossThick–LiSparseR (R-L) physical-kernel-driven model for spectrally realistic data augmentation. During fine-tuning, spatial and spectral features are fused for pixel-level anomaly detection, with focal loss addressing class imbalance. Extensive experiments on nine real hyperspectral datasets demonstrate that CTDST-HAD outperforms state-of-the-art methods in detection accuracy and efficiency, particularly in complex backgrounds, while maintaining competitive inference speed. Full article
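For reference, the elastic weight consolidation idea used in the spatial stream penalizes drift in parameters that were important for the pre-training task. Below is a minimal sketch of the standard quadratic EWC penalty; the adaptive weighting the paper adds on top of plain EWC is not reproduced.

```python
import numpy as np

def ewc_penalty(params, params_star, fisher, lam=1.0):
    """Elastic weight consolidation penalty: (lam/2) * sum_i F_i (θ_i - θ*_i)^2.

    θ* are the parameters after pre-training and F is a diagonal Fisher
    information estimate; large-F weights are anchored, mitigating
    catastrophic forgetting during fine-tuning.
    """
    penalty = 0.0
    for name in params:
        diff = params[name] - params_star[name]
        penalty += np.sum(fisher[name] * diff ** 2)
    return 0.5 * lam * penalty

# total_loss = task_loss + ewc_penalty(current, anchored, fisher_diag, lam=0.4)
```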
9 pages, 1037 KB  
Proceeding Paper
Hybrid Dictionary–Retrieval-Augmented Generation–Large Language Model for Low-Resource Translation
by Reen-Cheng Wang, Cheng-Kai Yang, Tun-Chieh Yang and Yi-Xuan Tseng
Eng. Proc. 2025, 120(1), 52; https://doi.org/10.3390/engproc2025120052 - 5 Feb 2026
Viewed by 13
Abstract
The rapid decline of linguistic diversity, driven by globalization and technological standardization, presents significant challenges for the preservation of endangered languages, many of which lack sufficient parallel corpora for effective machine translation. Conventional neural translation models perform poorly in such contexts, often failing to capture semantic precision, grammatical complexity, and culturally specific nuances. This study addresses these limitations by proposing a hybrid translation framework that combines dictionary-based pre-translation, retrieval-augmented generation, and large language model post-editing. The system is designed to improve translation quality for extremely low-resource languages, with a particular focus on the endangered Paiwan language in Taiwan. In the proposed approach, a handcrafted bilingual dictionary is first used to establish deterministic lexical alignments, generating a symbolically precise intermediate representation. When gaps occur due to missing vocabulary or sparse training data, a retrieval module enriches contextual understanding by dynamically sourcing semantically relevant examples from a vector database. The enriched output is then processed by an instruction-tuned large language model that reorders syntactic structures, inflects verbs appropriately, and resolves lexical ambiguities to produce fluent and culturally coherent translations. The evaluation is conducted on a 250-sentence Paiwan–Mandarin dataset, and the results demonstrate substantial performance gains across key metrics, with cosine similarity increasing from 0.210–0.236 to 0.810–0.846, BLEU scores rising from 1.7–4.4 to 40.8–51.9, and ROUGE-L F1 scores improving from 0.135–0.177 to 0.548–0.632. These results corroborate the effectiveness of the proposed hybrid pipeline in mitigating semantic drift, preserving core meaning, and enhancing linguistic alignment in low-resource settings. Beyond technical performance, the framework contributes to broader efforts in language revitalization and cultural preservation by supporting the transmission of Indigenous knowledge through accurate, contextually grounded, and accessible translations. This research demonstrates that integrating symbolic linguistic resources with retrieval-augmented large language models offers a scalable and efficient solution for endangered language translation and provides a foundation for sustainable digital heritage preservation in multilingual societies. Full article
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
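A skeleton of the three-stage pipeline described above (dictionary pre-translation, retrieval of similar examples, LLM post-editing) is sketched below. The `lexicon`, `retrieve`, and `complete` objects and the prompt wording are hypothetical stand-ins, not the authors' code or any particular library API.

```python
def translate(sentence, lexicon, retrieve, complete):
    """Skeleton of a dictionary -> retrieval -> LLM post-editing pipeline."""
    # Stage 1: deterministic word-by-word pre-translation from the lexicon.
    tokens = sentence.split()
    draft = [lexicon.get(tok, tok) for tok in tokens]   # unknown words pass through
    gaps = [tok for tok in tokens if tok not in lexicon]

    # Stage 2: retrieve semantically related example pairs for the gaps.
    examples = [ex for tok in gaps for ex in retrieve(tok, k=3)]

    # Stage 3: LLM post-editing: reorder syntax, inflect, disambiguate.
    prompt = ("Polish this draft into a fluent translation.\n"
              f"Draft: {' '.join(draft)}\n"
              "Examples:\n" + "\n".join(examples))
    return complete(prompt)

# Toy wiring with placeholder components (a real system would plug in a
# vector database and an instruction-tuned LLM here).
lexicon = {"src_hello": "hello"}
print(translate("src_hello src_friend", lexicon,
                retrieve=lambda tok, k: [f"example sentence containing '{tok}'"],
                complete=lambda prompt: "(LLM-polished) " + prompt.splitlines()[1]))
```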
20 pages, 8410 KB  
Article
PC-YOLO: Moving Target Detection in Video SAR via YOLO on Principal Components
by Yu Han, Xinrong Wang, Jiaqing Jiang, Chao Xue, Rui Qin and Ganggang Dong
Remote Sens. 2026, 18(3), 510; https://doi.org/10.3390/rs18030510 - 5 Feb 2026
Viewed by 39
Abstract
Video synthetic aperture radar could provide more valuable information than static images. However, it suffers from several difficulties, such as strong clutter, low signal-to-noise ratio, and variable target scale. The task of moving target detection is therefore difficult to achieve. To solve these problems, this paper proposes a model and data co-driven learning method called look once on principal components (PC-YOLO). Unlike preceding works, we regarded the imaging scenario as a combination of low-rank and sparse scenes in theory. The former models the global, slowly varying background information, while the latter expresses the localized anomalies. These were then separated using the principal component decomposition technique to reduce the clutter while simultaneously enhancing the moving targets. The resulting principal components were then handled by an improved version of the look once framework. Since the moving targets featured various scales and weak scattering coefficients, the hierarchical attention mechanism and the cross-scale feature fusion strategy were introduced to further improve the detection performance. Finally, multiple rounds of experiments were performed to verify the proposed method, with the results proving that it could achieve more than 30% improvement in mAP compared to classical methods. Full article
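The low-rank-plus-sparse scene model described above can be illustrated with a simple truncated-SVD split: the dominant components capture the slowly varying background, and the thresholded residual keeps localized (moving-target) anomalies. This is a generic stand-in with arbitrarily chosen rank and threshold, not the paper's principal component decomposition step.

```python
import numpy as np

def lowrank_sparse_split(frames, rank=1, tau=3.0):
    """Split a stack of frames into a slowly varying background (low-rank
    part) and localized anomalies (sparse residual)."""
    t, h, w = frames.shape
    x = frames.reshape(t, h * w)                         # one row per frame
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]      # background model
    residual = x - low_rank
    sparse = np.where(np.abs(residual) > tau * residual.std(), residual, 0.0)
    return low_rank.reshape(t, h, w), sparse.reshape(t, h, w)

# bg, anomalies = lowrank_sparse_split(video_frames)     # video_frames: (T, H, W)
```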
22 pages, 3999 KB  
Article
Eye Movement Classification Using Neuromorphic Vision Sensors
by Khadija Iddrisu, Waseem Shariff, Maciej Stec, Noel O’Connor and Suzanne Little
J. Eye Mov. Res. 2026, 19(1), 17; https://doi.org/10.3390/jemr19010017 - 4 Feb 2026
Viewed by 61
Abstract
Eye movement classification, particularly the identification of fixations and saccades, plays a vital role in advancing our understanding of neurological functions and cognitive processing. Conventional data modalities, such as RGB webcams, often suffer from motion blur, latency, and susceptibility to noise. Neuromorphic Vision Sensors, also known as event cameras (ECs), capture pixel-level changes asynchronously and at high temporal resolution, making them well suited for detecting the swift transitions inherent to eye movements. However, the resulting data are sparse, which makes them less well suited for use with conventional algorithms. Spiking Neural Networks (SNNs) are gaining attention because their discrete spatio-temporal spike mechanism is ideally suited to sparse data. These networks offer a biologically inspired computational paradigm capable of modeling the temporal dynamics captured by event cameras. This study validates the use of SNNs with event cameras for efficient eye movement classification. We manually annotated the EV-Eye dataset, the largest publicly available event-based eye-tracking benchmark, into sequences of saccades and fixations, and we propose a convolutional SNN architecture operating directly on spike streams. Our model achieves an accuracy of 94% and a precision of 0.92 across annotated data from 10 users. As the first work to apply SNNs to eye movement classification using event data, we benchmark our approach against spiking baselines such as SpikingVGG and SpikingDenseNet, and additionally provide a detailed computational complexity comparison between SNN and ANN counterparts. Our results highlight the efficiency and robustness of SNNs for event-based vision tasks, with over an order-of-magnitude improvement in computational efficiency and implications for fast and low-power neurocognitive diagnostic systems. Full article
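The spiking building block behind such models is the leaky integrate-and-fire (LIF) neuron, which integrates incoming events, leaks over time, and emits a binary spike on crossing a threshold. A minimal NumPy sketch of that dynamic is below; the convolutional SNN architecture evaluated in the paper is not reproduced.

```python
import numpy as np

def lif_layer(input_current, tau=0.9, v_th=1.0):
    """Discrete leaky integrate-and-fire dynamics over an event stream.

    input_current has shape (timesteps, neurons); the membrane potential
    leaks by factor `tau`, integrates input, emits a spike when it crosses
    `v_th`, then resets.
    """
    t_steps, n = input_current.shape
    v = np.zeros(n)
    spikes = np.zeros((t_steps, n))
    for t in range(t_steps):
        v = tau * v + input_current[t]        # leak + integrate
        fired = v >= v_th
        spikes[t, fired] = 1.0
        v[fired] = 0.0                        # hard reset after a spike
    return spikes

events = (np.random.rand(100, 8) < 0.1).astype(float)   # toy 0/1 event stream
print(lif_layer(events).sum(axis=0))                     # spike counts per neuron
```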
22 pages, 2476 KB  
Article
An Enhanced SegNeXt with Adaptive ROI for a Robust Navigation Line Extraction in Multi-Growth-Stage Maize Fields
by Yuting Zhai, Zongmei Gao, Jian Li, Yang Zhou and Yanlei Xu
Agriculture 2026, 16(3), 367; https://doi.org/10.3390/agriculture16030367 - 4 Feb 2026
Viewed by 77
Abstract
Navigation line extraction is essential for visual navigation in agricultural machinery, yet existing methods often perform poorly in complex environments due to challenges such as weed interference, broken crop rows, and leaf adhesion. To enhance the accuracy and robustness of crop row centerline identification, this study proposes an improved segmentation model based on SegNeXt with integrated adaptive region of interest (ROI) extraction for multi-growth-stage maize row perception. Improvements include constructing a Local module via pooling layers to refine contour features of seedling rows and enhance complementary information across feature maps. A multi-scale fusion attention (MFA) is also designed for adaptive weighted fusion during decoding, improving detail representation and generalization. Additionally, Focal Loss is introduced to mitigate background dominance and strengthen learning from sparse positive samples. An adaptive ROI extraction method was also developed to dynamically focus on navigable regions, thereby improving efficiency and localization accuracy. The outcomes revealed that the proposed model achieves a segmentation accuracy of 95.13% and an IoU of 93.86%. The experimental results show that the proposed algorithm achieves a processing speed of 27 frames per second (fps) on GPU and 16.8 fps on an embedded Jetson TX2 platform. This performance meets the real-time requirements for agricultural machinery operations. This study offers an efficient and reliable perception solution for vision-based navigation in maize fields. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
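Focal Loss, mentioned above for handling sparse positive samples, down-weights well-classified background pixels so the rare crop-row pixels drive learning. A short NumPy sketch of the standard binary form follows; the γ and α values are common defaults, not necessarily the paper's settings.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t))

# Example: confident background pixels contribute almost nothing.
print(focal_loss(np.array([0.95, 0.05, 0.6]), np.array([1, 0, 1])))
```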
23 pages, 12128 KB  
Article
DOA Estimation for Underwater Coprime Arrays with Sensor Failure Based on Segmented Array Validation and Multipath Matching Pursuit
by Xiao Chen and Ying Zhang
Algorithms 2026, 19(2), 125; https://doi.org/10.3390/a19020125 - 4 Feb 2026
Viewed by 68
Abstract
Coprime arrays enable enhanced degrees of freedom through the construction of virtual array equivalent signals. However, the presence of large “holes” leads to discontinuous co-arrays, which severely hampers direction-of-arrival (DOA) estimation techniques that rely on uniform array structures. This paper explores the practical application of co-array domain signal processing for underwater acoustic coprime arrays. We propose a novel array configuration based on coprime minimum disordered pairs, enabling the formation of continuously connected co-arrays without interpolating. To address the challenge of limited snapshots in underwater environments, DOA estimation can be achieved by utilizing traditional multipath matching pursuit (MMP) algorithms under the proposed continuous co-array implementation scheme. In practical applications, physical array element failures are inevitable, and faulty elements can create holes in the originally continuous co-array. While interpolation techniques can mitigate small gaps, their performance deteriorates significantly in the presence of large holes or uneven data distribution. To overcome these limitations, we introduce a sparse signal recovery (SSR) method using a fragment array data validation technique for sparse DOA estimation with an underwater acoustic coprime array. Based on the designed continuous array expansion scheme, the resulting continuous co-array is used to map the positions of element failures, revealing the gaps in the co-array. A validation model is established for partially continuous sub-arrays within the discontinuous co-array, enabling signal direction estimation based on the fragmented array validation. Both simulation and sea trial results confirm that the proposed approach maximizes the utilization of co-array elements without relying on interpolation or prediction, offering a robust solution for scenarios involving sensor failures. Full article
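To make the "holes" issue concrete, the sketch below builds a textbook coprime pair, forms its difference co-array, and prints the missing lags; the "minimum disordered pairs" configuration and the fragment-validation estimator proposed in the paper are not reproduced.

```python
import numpy as np

def coprime_positions(m, n):
    """Sensor positions (in half-wavelength units) of a standard coprime
    pair: one subarray with spacing m, one with spacing n (m, n coprime)."""
    sub1 = m * np.arange(n)          # n sensors spaced m apart
    sub2 = n * np.arange(2 * m)      # 2m sensors spaced n apart
    return np.unique(np.concatenate([sub1, sub2]))

def difference_coarray(pos):
    """All pairwise lags; gaps in this set are the holes that break
    DOA methods relying on a uniform (filled) co-array."""
    return np.unique((pos[:, None] - pos[None, :]).ravel())

pos = coprime_positions(3, 5)
lags = difference_coarray(pos)
print(lags[lags >= 0])               # note the missing lags ("holes")
```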
29 pages, 2849 KB  
Article
From Physical to Virtual Sensors: VSG-SGL for Reliable and Cost-Efficient Environmental Monitoring
by Murad Ali Khan, Qazi Waqas Khan, Ji-Eun Kim, SeungMyeong Jeong, Il-yeop Ahn and Do-Hyeun Kim
Automation 2026, 7(1), 27; https://doi.org/10.3390/automation7010027 - 3 Feb 2026
Viewed by 74
Abstract
Reliable environmental monitoring in remote or sparsely instrumented regions is hindered by the cost, maintenance demands, and inaccessibility of dense physical sensor deployments. To address these challenges, this study introduces VSG-SGL, a unified virtual sensor generation framework that integrates Sparse Gaussian Process Regression (SGPR) and Bayesian Ridge Regression (BRR) with deep generative learning via Variational Autoencoders (VAE) and Conditional Tabular GANs (CTGAN). Real meteorological datasets from multiple South Korean cities were preprocessed using thresholding and Isolation Forest anomaly detection and evaluated using distributional alignment (KDE) and sequence-learning validation with BiLSTM and BiGRU models. Experimental findings demonstrate that VAE-augmented virtual sensors provide the most stable and reliable performance. For temperature, VAE maintains predictive errors close to those of BRR and SGPR, reflecting the already well-modeled dynamics of this variable. In contrast, humidity and wind-related variables exhibit measurable gains with VAE; for example, SGPR-based wind speed MAE improves from 0.1848 to 0.1604, while BRR-based wind direction RMSE decreases from 0.1842 to 0.1726. CTGAN augmentation, however, frequently increases error, particularly for humidity and wind speed. Overall, the results establish VAE-enhanced VSG-SGL virtual sensors as a cost-effective and accurate alternative in scenarios where physical sensing is limited or impractical. Full article
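A minimal virtual-sensor sketch in the spirit of the BRR branch: predict an unmonitored site's variable from nearby physical stations with scikit-learn's BayesianRidge. The synthetic data and station setup are placeholders; the SGPR and VAE/CTGAN components of VSG-SGL are not shown.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)
neighbors = rng.normal(20, 5, size=(500, 3))               # 3 physical stations
target = neighbors @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.5, 500)

model = BayesianRidge()
model.fit(neighbors[:400], target[:400])                   # train on history
pred, std = model.predict(neighbors[400:], return_std=True)
mae = np.mean(np.abs(pred - target[400:]))
print(f"virtual-sensor MAE: {mae:.3f} (mean predictive std {std.mean():.3f})")
```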
24 pages, 7418 KB  
Article
Sex-Specific Role of NPVF Signalling in Homeostatic Control
by Herbert Herzog, Julia Koller and Lei Zhang
Biomolecules 2026, 16(2), 231; https://doi.org/10.3390/biom16020231 - 2 Feb 2026
Viewed by 217
Abstract
Neuropeptide VF (NPVF) is a member of the RFamide family of peptides and has been suggested to be involved in homeostatic regulation. However, direct evidence is sparse. Here, we generated an NPVF knockout mouse model to comprehensively investigate its role in the control of energy and glucose homeostasis. We show that while male Npvf−/− mice on chow were WT-like at both room temperature (RT, 22 °C) and thermoneutrality (TN, 28 °C) with regard to body weight, body composition, and the parameters involved in energy homeostasis, female Npvf−/− mice exhibit significantly reduced water intake at RT and TN regardless of food access, significantly increased femur bone mineral content at RT, and reduced adiposity at TN. Strikingly, sex differences are absent under high-fat diet (HFD) conditions, with Npvf deletion leading to hyperphagia and increased weight gain in both sexes. Furthermore, Npvf−/− mice on chow at RT exhibit normal glucose tolerance and insulin action for both sexes. On a HFD or at TN, Npvf−/− mice display improved and impaired insulin action in females and males, respectively, with female Npvf−/− mice at TN further showing improved glucose tolerance. Collectively, these findings establish NPVF as a sexually dimorphic regulator of energy and glucose metabolism whose effects depend critically on environmental and nutritional factors. Full article
23 pages, 871 KB  
Article
TLOA: A Power-Adaptive Algorithm Based on Air–Ground Cooperative Jamming
by Wenpeng Wu, Zhenhua Wei, Haiyang You, Zhaoguang Zhang, Chenxi Li, Jianwei Zhan and Shan Zhao
Future Internet 2026, 18(2), 81; https://doi.org/10.3390/fi18020081 - 2 Feb 2026
Viewed by 110
Abstract
Air–ground joint jamming enables three-dimensional, distributed jamming configurations, making it effective against air–ground communication networks with complex, dynamically adjustable links. Once the jamming layout is fixed, dynamic jamming power scheduling becomes essential to conserve energy and prolong jamming duration. However, existing methods suffer from poor applicability in such scenarios, primarily due to their sparse deployment and adversarial nature. To address this limitation, this paper develops a set of mathematical models and a dedicated algorithm for air–ground communication countermeasures. Specifically, we (1) randomly select communication nodes to determine the jammer operation sequence; (2) schedule the number of active jammers by sorting transmission path losses in ascending order; and (3) estimate jamming effects using electromagnetic wave propagation characteristics to adjust jamming power dynamically. This approach formally converts the original dynamic, stochastic jamming resource scheduling problem into a static, deterministic one via cognitive certainty of dynamic parameters and deterministic modeling of stochastic factors—enabling rapid adaptation to unknown, dynamic communication power strategies and resolving the coordination challenge in air–ground joint jamming. Experimental results demonstrate that the proposed Transmission Loss Ordering Algorithm (TLOA) extends the system operating duration by up to 41.6% compared to benchmark methods (e.g., genetic algorithm). Full article
(This article belongs to the Special Issue Adversarial Attacks and Cyber Security)
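The ascending-path-loss ordering at the heart of TLOA can be illustrated with a simple free-space loss model: compute each jammer's loss to the selected node and activate the lowest-loss ones first. The propagation model and scenario below are simplifications for illustration, not the paper's jamming model.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20log10(d_km) + 20log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def schedule_jammers(jammers, target, freq_mhz, n_active):
    """Activate the n_active jammers with the smallest transmission loss to
    the selected communication node (ascending path-loss order)."""
    def loss(j):
        d = math.dist(j["pos"], target) / 1000.0      # metres -> km
        return fspl_db(max(d, 1e-3), freq_mhz)
    return sorted(jammers, key=loss)[:n_active]

jammers = [{"id": i, "pos": (x, y, z)} for i, (x, y, z) in
           enumerate([(0, 0, 0), (3000, 1000, 0), (500, 800, 1200)])]
active = schedule_jammers(jammers, target=(1000, 1000, 0), freq_mhz=2400, n_active=2)
print([j["id"] for j in active])
```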
19 pages, 6175 KB  
Article
Dynamic Feature Fusion for Sparse Radar Detection: Motion-Centric BEV Learning with Adaptive Task Balancing
by Yixun Sang, Junjie Cui, Yaoguang Sun, Fan Zhang, Yanting Li and Guoqiang Shi
Sensors 2026, 26(3), 968; https://doi.org/10.3390/s26030968 - 2 Feb 2026
Viewed by 182
Abstract
This paper proposes a novel motion-aware framework to address key challenges in 4D millimeter-wave radar detection for autonomous driving. While existing methods struggle with sparse point clouds and dynamic object characterization, our approach introduces three key innovations: (1) A Bird’s Eye View (BEV) fusion network incorporating velocity vector decomposition and dynamic gating mechanisms, effectively encoding motion patterns through independent XY-component convolutions; (2) a gradient-aware multi-task balancing scheme using learnable uncertainty parameters and dynamic weight normalization, resolving optimization conflicts between classification and regression tasks; and (3) a two-phase progressive training strategy combining multi-frame pre-training with sparse single-frame refinement. Evaluated on the TJ4D benchmark, our method achieves a 3D mean Average Precision (mAP3D) of 33.25% with minimal parameter overhead (1.73 M), showing particular superiority in pedestrian detection (+4.16% AP) while maintaining real-time performance (24.4 FPS on embedded platforms). Comprehensive ablation studies validate each component’s contribution, with thermal map visualization demonstrating effective motion pattern learning. This work advances robust perception under challenging conditions through principled motion modeling and efficient architecture design. Full article
(This article belongs to the Section Radar Sensors)
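The learnable-uncertainty balancing mentioned in innovation (2) is usually built on homoscedastic-uncertainty weighting, where each task loss is scaled by a learnable log-variance. A minimal sketch of that standard combination follows; the paper's gradient-aware normalization is not included.

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with learnable log-variance weights:
        L = sum_i exp(-s_i) * L_i + s_i,   where s_i = log(sigma_i^2).

    Noisier tasks (larger s_i) are automatically down-weighted while the
    +s_i term keeps the weights from collapsing to zero.
    """
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# Example: classification and box-regression losses with different log-variances.
print(uncertainty_weighted_loss([1.2, 0.4], [0.5, -0.3]))
```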