
Search Results (281)

Search Parameters:
Keywords = tensor factorization

23 pages, 1844 KB  
Article
Short-Term Forecast of Tropospheric Zenith Wet Delay Based on TimesNet
by Xuan Zhao, Shouzhou Gu, Jinzhong Mi, Jianquan Dong, Long Xiao and Bin Chu
Sensors 2026, 26(3), 991; https://doi.org/10.3390/s26030991 - 3 Feb 2026
Viewed by 136
Abstract
The tropospheric zenith wet delay (ZWD) serves as a pivotal parameter for atmospheric water vapour inversion. By converting it into precipitable water vapour, high-temporal-resolution atmospheric humidity monitoring becomes feasible, providing crucial support for enhancing short-term rainfall forecast accuracy. However, ZWD exhibits significant non-stationarity due to complex influencing factors, and traditional models struggle to achieve precise predictions across all scenarios owing to limitations in local feature extraction. This article employs a ZWD prediction method based on the dynamic temporal decomposition module of TimesNet, reconstructing one-dimensional high-frequency ZWD time series into two-dimensional tensors to overcome the technical limitations of conventional models. Comprehensively considering topographical characteristics, climatic features, and seasonal factors, experiments were conducted using 30 s ZWD data from 20 IGS stations. This dataset comprised four consecutive days of PPP solutions for each season in 2023. Through comparative experiments with CNN-ATT and Informer models, the global prediction accuracy, seasonal adaptability, and topographical robustness of TimesNet were systematically evaluated. Results demonstrate that, with each model evaluated at its own optimal input–prediction window configuration, TimesNet achieves an average seasonal Root Mean Square Error (RMSE) of 5.73 mm across all seasonal station samples, outperforming Informer (7.89 mm) and CNN-ATT (10.02 mm) by 27.4% and 42.8%, respectively. It maintains robust performance under the most challenging conditions—including summer severe convection, high-altitude terrain, and climatically variable maritime zones—while achieving sub-5 mm precision in stable environments. This provides a reliable algorithmic foundation for short-term precipitation forecasting in Global Navigation Satellite System (GNSS) real-time meteorology.
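Below, a minimal sketch of the period-based 1-D-to-2-D reshaping that TimesNet-style models rely on, assuming a single dominant period picked by FFT; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def fold_by_dominant_period(series: np.ndarray) -> np.ndarray:
    """Reshape a 1-D series into a (n_periods, period) 2-D tensor,
    using the strongest FFT component to pick the period."""
    n = len(series)
    spectrum = np.abs(np.fft.rfft(series - series.mean()))
    spectrum[0] = 0.0                      # ignore the DC term
    k = int(np.argmax(spectrum))           # dominant frequency index
    period = max(1, n // max(k, 1))        # corresponding period length
    n_periods = n // period
    return series[: n_periods * period].reshape(n_periods, period)

# Toy usage: a noisy cycle sampled 48 times per "day".
t = np.arange(480)
zwd = 5.0 * np.sin(2 * np.pi * t / 48) + 0.3 * np.random.randn(480)
grid = fold_by_dominant_period(zwd)
print(grid.shape)  # (10, 48): rows = cycles, columns = phase within a cycle
```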

31 pages, 465 KB  
Article
Weyl-Type Symmetry and Subalgebra Rigidity in von Neumann Algebras
by Saeed Hashemi Sababe and Mostafa Hassanlou
Mathematics 2026, 14(3), 505; https://doi.org/10.3390/math14030505 - 30 Jan 2026
Viewed by 139
Abstract
We propose and develop a unified framework for Weyl-type symmetry in von Neumann algebras. Motivated by recent automorphism-rigidity phenomena that identify finite Weyl groups inside automorphism groups of crossed products arising from lattice actions on homogeneous spaces, we introduce the Weyl group of an inclusion, $W(M;B) := \operatorname{Aut}_B(M)/\operatorname{Inn}_B(M)$, for a unital inclusion $B \subseteq M$ of von Neumann algebras, and investigate its structure across several rigidity regimes. Our main results (1) prove finiteness or triviality of $W(M;B)$ for large classes of nonamenable crossed products, including hyperbolic and product-type actions with spectral gap and malleability; (2) establish a subgroup-normalizer rigidity principle for inclusions $L(\Lambda) \subseteq L(\Gamma)$ that identifies $\operatorname{Aut}_{L(\Lambda)}(L(\Gamma))$ with a discrete group controlled by $N_\Gamma(\Lambda)$; (3) show that permutation-type symmetry for product/tensor decompositions is the only possible nontrivial symmetry of the underlying group subalgebras; and (4) extend the analysis to type III factors via Maharam extensions and unique-Cartan phenomena, proving that $W(M;B)$ is discrete and often trivial, leaving only modular flows as outer symmetries. Consequences include new computations of outer automorphism groups, constraints on intermediate subalgebras, and classification consequences for crossed products and amalgamated free products. The methods combine Popa’s intertwining-by-bimodules, spectral-gap and s-malleable deformations, boundary/ucp-map rigidity, and groupoid/Cartan techniques.
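Spelled out as a display, the paper's central object is a quotient of automorphism groups; the gloss below is our reading of the notation, with the precise conventions deferred to the paper:

```latex
% Weyl group of a unital inclusion B \subseteq M of von Neumann algebras:
W(M;B) \;:=\; \operatorname{Aut}_B(M) \,/\, \operatorname{Inn}_B(M)
% Aut_B(M): automorphisms of M compatible with the subalgebra B;
% Inn_B(M): the inner automorphisms among them. W(M;B) thus measures
% the outer symmetry of M relative to B, by analogy with the finite
% Weyl groups arising in the crossed-product rigidity results cited.
```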
27 pages, 1494 KB  
Review
A Survey on Missing Data Generation in Networks
by Qi Shao, Ruizhe Shi, Xiaoyu Zhang and Duxin Chen
Mathematics 2026, 14(2), 341; https://doi.org/10.3390/math14020341 - 20 Jan 2026
Viewed by 178
Abstract
The prevalence of massive, multi-scale, high-dimensional, and dynamic data sets resulting from advances in information and network communication technologies is frequently hampered by data incompleteness, a consequence of complex network structures and constrained sensor capabilities. The necessity of complete data for effective data analysis and mining mandates robust preprocessing techniques. This comprehensive survey systematically reviews missing value interpolation methodologies specifically tailored for time series flow network data, organizing them into four principal categories: classical statistical algorithms, matrix/tensor-based interpolation methods, nearest-neighbor-weighted methods, and deep learning generative models. We detail the evolution and technical underpinnings of diverse approaches, including mean imputation, the ARMA family, matrix factorization, KNN variants, and the latest deep generative paradigms such as GANs, VAEs, normalizing flows, autoregressive models, diffusion probabilistic models, causal generative models, and reinforcement learning generative models. By delineating the strengths and weaknesses across these categories, this survey establishes a structured foundation and offers a forward-looking perspective on state-of-the-art techniques for missing data generation and imputation in complex networks.
(This article belongs to the Special Issue Advanced Machine Learning Research in Complex System)
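Among the four surveyed categories, matrix-factorization imputation is the simplest to sketch: fit a low-rank model on the observed entries only, then read the missing values off the reconstruction. A minimal masked gradient-descent version, illustrative rather than any specific method from the survey:

```python
import numpy as np

def mf_impute(X, mask, rank=5, lr=0.01, epochs=2000, seed=0):
    """Impute missing entries of X (mask == 1 where observed) with a
    low-rank factorization fitted by masked gradient descent."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(epochs):
        R = mask * (U @ V.T - np.nan_to_num(X))  # residual on observed cells
        U -= lr * (R @ V)
        V -= lr * (R.T @ U)
    return np.where(mask.astype(bool), np.nan_to_num(X), U @ V.T)

# Toy usage: a rank-2 matrix with ~30% of entries missing.
rng = np.random.default_rng(1)
truth = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
mask = (rng.random(truth.shape) > 0.3).astype(float)
X = np.where(mask.astype(bool), truth, np.nan)
print(np.abs(mf_impute(X, mask) - truth)[mask == 0].mean())  # small error
```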

34 pages, 3406 KB  
Article
Reconstructing Spatial Localization Error Maps via Physics-Informed Tensor Completion for Passive Sensor Systems
by Zhaohang Zhang, Zhen Huang, Chunzhe Wang and Qiaowen Jiang
Sensors 2026, 26(2), 597; https://doi.org/10.3390/s26020597 - 15 Jan 2026
Viewed by 226
Abstract
Accurate mapping of localization error distribution is essential for assessing passive sensor systems and guiding sensor placement. However, conventional analytical methods like the Geometrical Dilution of Precision (GDOP) rely on idealized error models, failing to capture the complex, heterogeneous error distributions typical of real-world environments. To overcome this challenge, we propose a novel data-driven framework that reconstructs high-fidelity localization error maps from sparse observations in TDOA-based systems. Specifically, we model the error distribution as a tensor and formulate the reconstruction as a tensor completion problem. A key innovation is our physics-informed regularization strategy, which incorporates prior knowledge from the analytical error covariance matrix into the tensor factorization process. This allows for robust recovery of the complete error map even from highly incomplete data. Experiments on a real-world dataset validate the superiority of our approach, showing an accuracy improvement of at least 27.96% over state-of-the-art methods.
(This article belongs to the Special Issue Multi-Agent Sensors Systems and Their Applications)
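A generic skeleton of the masked CP-style completion the abstract describes, with a plain ridge penalty standing in for the paper's physics-informed, covariance-derived regularizer; the penalty, names, and sizes are all assumptions for illustration:

```python
import numpy as np

def cp_complete(T, mask, rank=4, lam=0.1, lr=0.005, epochs=3000, seed=0):
    """Complete a 3-way tensor from observed entries (mask == 1) with a
    rank-`rank` CP model; `lam` weights a placeholder ridge prior that
    stands in for the physics-informed regularizer."""
    rng = np.random.default_rng(seed)
    A, B, C = (0.1 * rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(epochs):
        That = np.einsum('ir,jr,kr->ijk', A, B, C)
        R = mask * (That - np.nan_to_num(T))       # masked residual
        gA = np.einsum('ijk,jr,kr->ir', R, B, C) + lam * A
        gB = np.einsum('ijk,ir,kr->jr', R, A, C) + lam * B
        gC = np.einsum('ijk,ir,jr->kr', R, A, B) + lam * C
        A, B, C = A - lr * gA, B - lr * gB, C - lr * gC
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Toy usage: a low-rank 3-way "error map" with half the cells observed.
rng = np.random.default_rng(2)
U, V, W = (rng.standard_normal((s, 3)) for s in (15, 15, 6))
truth = np.einsum('ir,jr,kr->ijk', U, V, W)
mask = (rng.random(truth.shape) > 0.5).astype(float)
print(np.abs(cp_complete(truth, mask) - truth)[mask == 0].mean())
```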

23 pages, 5940 KB  
Article
Research on High-Precision DOA Estimation Method for UAV Platform in Strong Multipath Environment
by Yuxiao Yang, Junjie Li, Qirui Cai and Daisi Yang
Electronics 2026, 15(1), 134; https://doi.org/10.3390/electronics15010134 - 27 Dec 2025
Viewed by 203
Abstract
Utilizing unmanned aerial vehicles (UAVs) to achieve accurate direction finding of radiation sources in hazardous and complex regions is an important means of information reconnaissance. However, the significant multipath effects of UAVs in complex environments cause serious signal coherence problems. Conventional signal decoherence techniques such as spatial smoothing (SS) and matrix reconstruction suffer from array aperture loss, which makes it difficult to meet the requirements of UAVs for high-resolution direction finding in severe multipath environments. Therefore, resolving the signal coherence problem has become a key bottleneck for high-resolution direction-of-arrival (DOA) estimation techniques in severe multipath environments. This paper proposes a joint high-precision DOA estimation method based on conjugate cross-correlation Toeplitz reconstruction and the Parallel Factor Analysis (PARAFAC) tensor model. First, we introduce the conjugate cross-correlation values of array element data collected by the UAV to conduct Toeplitz reconstruction without dimensionality-reduced reconstruction, achieving signal decoherence. Furthermore, we conduct cross-snapshot cross-correlation between the reconstruction matrix and the data of each array element collected by the UAV, which effectively suppresses noise accumulation and improves the signal-to-noise ratio (SNR). Finally, we stack the set of matrices into a three-dimensional tensor, employing PARAFAC tensor decomposition to enhance the UAV DOA estimation performance. Simulation results show that at low SNR, the proposed method effectively improves estimation accuracy and resolves the signal coherence problem that limits traditional UAV direction-finding methods in strong multipath scenarios.
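The tensor-construction step the abstract outlines (Toeplitz matrices built from conjugate cross-correlations, stacked into a three-way array for PARAFAC) can be sketched generically; the array geometry, correlation convention, and sizes below are illustrative:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_tensor(snapshots: np.ndarray) -> np.ndarray:
    """Given array snapshots X (n_sensors, n_snap), build one Hermitian
    Toeplitz matrix per snapshot from correlations against a reference
    element, and stack them into an (n_snap, n, n) tensor for PARAFAC."""
    n, n_snap = snapshots.shape
    slabs = []
    for t in range(n_snap):
        x = snapshots[:, t]
        r = x * np.conj(x[0])        # correlation vs. the reference element
        slabs.append(toeplitz(r))    # Hermitian Toeplitz (r[0] is real)
    return np.stack(slabs)           # shape (n_snap, n, n)

# Toy usage: two fully coherent plane waves on an 8-element ULA.
n, n_snap = 8, 64
angles = np.deg2rad([10.0, 24.0])
steer = np.exp(1j * np.pi * np.outer(np.arange(n), np.sin(angles)))
X = steer @ np.ones((2, n_snap)) + 0.05 * np.random.randn(n, n_snap)
print(toeplitz_tensor(X).shape)  # (64, 8, 8)
```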

20 pages, 4458 KB  
Article
In Situ Calibration Method for an MGT Detection System Based on Helmholtz Coils
by Ziqiang Yuan, Chen Wang, Yanzhang Xie, Yingzi Zhang and Wenyi Liu
Sensors 2026, 26(1), 191; https://doi.org/10.3390/s26010191 - 27 Dec 2025
Viewed by 477
Abstract
Vector magnetometer arrays are essential for ferromagnetic target detection and magnetic gradient tensor (MGT) measurement, but their performance is limited by proportional factor errors, triaxial non-orthogonality, soft/hard iron interference, and inconsistent array orientations. Traditional rotation-based scalar calibration requires magnetic-free turntables or manual multi-orientation operations, introducing mechanical noise, orientation perturbations, and poor repeatability. This paper proposes an in situ rapid calibration method for MGT systems using triaxial Helmholtz coils. By generating three-dimensional magnetic field sequences of constant magnitude and random directions while keeping the sensors stationary, the method replaces conventional rotational excitation. A two-stage rapid calibration algorithm is developed to achieve individual sensor error modeling and array relative calibration. Experimental results show substantial improvements. The tensor invariant C_T decreased from 6287.84 nT/m to 7.57 nT/m, with variance reduced from 1.46 × 10⁶ nT²/m² to 13.47 nT²/m²; inter-sensor output differences were suppressed to 1–3 nT; and the magnetic field magnitude error dropped from ~940 nT to 3 × 10⁻⁴ nT, achieving a 5–6-order-of-magnitude enhancement. These results verify the method’s effectiveness in eliminating rotational errors, improving array consistency, and enabling high-precision in situ calibration with strong engineering value.
(This article belongs to the Special Issue Advances in Magnetic Field Sensing and Measurement)
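Because the sensors stay still while the coils supply the excitation, the 12-parameter linear error model (a 3 × 3 scale/non-orthogonality matrix plus an offset vector) reduces to ordinary least squares; a minimal sketch under the assumption that the applied field vectors are known, which simplifies the paper's two-stage algorithm considerably:

```python
import numpy as np

def fit_linear_error_model(applied, measured):
    """Fit measured = A @ applied + b by least squares.
    applied, measured: (3, N) arrays of field vectors in nT."""
    N = applied.shape[1]
    F = np.vstack([applied, np.ones((1, N))])   # (4, N) design matrix
    coef, *_ = np.linalg.lstsq(F.T, measured.T, rcond=None)
    return coef[:3].T, coef[3]                  # A: (3, 3), b: (3,)

# Toy usage: 200 random directions at a constant 50,000 nT magnitude.
rng = np.random.default_rng(3)
dirs = rng.standard_normal((3, 200))
applied = 50_000 * dirs / np.linalg.norm(dirs, axis=0)
A_true = np.array([[1.02, 0.01, 0.0], [0.0, 0.98, 0.02], [0.01, 0.0, 1.01]])
b_true = np.array([120.0, -80.0, 40.0])
measured = A_true @ applied + b_true[:, None] + rng.normal(0, 1.0, (3, 200))
A_est, b_est = fit_linear_error_model(applied, measured)
corrected = np.linalg.solve(A_est, measured - b_est[:, None])
print(np.abs(np.linalg.norm(corrected, axis=0) - 50_000).max())  # nT level
```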

54 pages, 4904 KB  
Review
Nonlocal Effective Field Theory and Its Applications
by Ping Wang, Zhengyang Gao, Fangcheng He, Chueng-Ryong Ji, Wally Melnitchouk and Yusupujiang Salamu
Symmetry 2026, 18(1), 31; https://doi.org/10.3390/sym18010031 - 23 Dec 2025
Viewed by 350
Abstract
We review recent applications of nonlocal effective field theory, particularly focusing on nonlocal chiral effective theory and nonlocal quantum electrodynamics (QED), as well as an extension of nonlocal effective theory to curved spacetime. For the chiral effective theory, we discuss the calculation of generalized parton distributions (GPDs) of the nucleon at nonzero skewness, along with the corresponding gravitational (or mechanical) form factors, within the convolution framework. In the QED application, we extend the nonlocal formulation to construct the most general nonlocal QED interaction, in which both the propagator and fundamental QED vertex are modified due to the nonlocal Lagrangian, while preserving the Ward–Green–Takahashi identities. For consistency with the modified propagator, a solid quantization is proposed, and the nonlocal QED is applied to explain the lepton g−2 anomalies without the introduction of new particles beyond the standard model. Finally, with an extension of the chiral effective action to curved spacetime, we investigate the nonlocal energy–momentum tensor and gravitational form factors of the nucleon with a nonlocal pion–nucleon interaction.
(This article belongs to the Special Issue Chiral Symmetry, and Restoration in Nuclear Dense Matter)

14 pages, 4136 KB  
Article
Tuning Surface-Enhanced Raman Scattering (SERS) via Filling Fraction and Period in Gold-Coated Bullseye Gratings
by Ziqi Li, Yaming Cheng, Carlos Fernandes, Xiaolu Wang and Harry E. Ruda
Nanomaterials 2025, 15(24), 1863; https://doi.org/10.3390/nano15241863 - 11 Dec 2025
Cited by 1 | Viewed by 584
Abstract
Surface-enhanced Raman scattering (SERS) is a highly sensitive analytical technique capable of single-molecule detection, yet its performance strongly depends on the underlying plasmonic architecture. In this study, we developed a robust SERS platform based on long-range-ordered bullseye plasmonic nano-gratings with tunable period and filling fraction, fabricated via electron beam lithography and reactive ion etching and uniformly coated with a thin gold film. These concentric nanostructures support efficient surface plasmon resonance and radial SPP focusing, enabling intense electromagnetic field enhancement across the substrate. Using this platform, we achieved quantitative detection of Rhodamine 6G with enhancement factors of 10⁵. Notably, our results reveal a previously unrecognized mechanistic insight: the geometric configuration producing the strongest local electric fields does not yield the highest SERS enhancement, due to misalignment between the dominant field orientation and the molecular polarizability tensor. This finding explains the non-monotonic dependence of SERS performance on grating geometry and introduces a new design principle in which both field strength and field–molecule alignment must be co-optimized. Overall, this work provides a mechanistic framework for rationally engineering plasmonic substrates for sensitive and quantitative molecular detection.
(This article belongs to the Section Nanophotonics Materials and Devices)
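The reported field-orientation effect sits naturally in the standard electromagnetic picture of SERS; the display below is our gloss, not the paper's formulas:

```latex
% Electromagnetic SERS enhancement (the usual |E|^4 approximation):
\mathrm{EF} \;\approx\;
  \frac{|E_{\mathrm{loc}}(\omega_{\mathrm{exc}})|^{2}\,
        |E_{\mathrm{loc}}(\omega_{\mathrm{Raman}})|^{2}}{|E_{0}|^{4}},
% but the Raman signal also depends on how the local field projects
% onto the molecular polarizability tensor \alpha:
I_{\mathrm{Raman}} \;\propto\;
  \big|\hat{e}_{\mathrm{loc}} \cdot \boldsymbol{\alpha} \cdot
       \hat{e}_{\mathrm{loc}}\big|^{2},
% so the geometry with the largest |E_loc| need not maximize the SERS
% signal when e_loc is misaligned with the dominant axis of \alpha.
```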

20 pages, 359 KB  
Article
The Spacetime Geodesy of Perfect Fluid Spheres
by Christopher Simmonds and Matt Visser
Symmetry 2025, 17(12), 2043; https://doi.org/10.3390/sym17122043 - 1 Dec 2025
Viewed by 381
Abstract
Herein we shall argue for the utility of “spacetime geodesy”, a point of view where one delays as long as possible worrying about dynamical equations, in favour of the maximal utilization of both symmetries and geometrical features. This closely parallels Weinberg’s distinction between “cosmography” and “cosmology”, wherein maximal utilization of both the symmetries and geometrical features of Friedmann–Lemaître–Robertson–Walker (FLRW) spacetimes is emphasized. This “spacetime geodesy” point of view is particularly useful in those situations where, for one reason or another, the dynamical equations of motion are either uncertain or completely unknown. Several specific examples are discussed—we shall illustrate what can be done by considering the physics implications of demanding spatially isotropic Ricci tensors as a way of automatically implementing the (isotropic) perfect fluid condition, without committing to a specific equation of state. We also consider the structure of the Weyl tensor in spherical symmetry, with and without the (isotropic) perfect fluid condition, and relate this to the notion of “complexity”. In closing, we indicate some ways in which these considerations might be further generalized to more physically complicated (and technically very much more complicated) situations such as axisymmetric spacetimes.
(This article belongs to the Section Physics)
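The step the abstract alludes to can be made explicit: through the Einstein equations, a spatially isotropic Ricci tensor in the fluid rest frame corresponds to a perfect fluid source with no equation of state imposed (our paraphrase of the standard relation):

```latex
% Perfect fluid stress-energy, signature (-,+,+,+), no equation of
% state assumed:
T_{ab} = (\rho + p)\, u_a u_b + p\, g_{ab}.
% In the orthonormal rest frame of u^a the Einstein equations then
% make the spatial block of the Ricci tensor isotropic,
R_{\hat{\imath}\hat{\jmath}} \;\propto\; \delta_{\hat{\imath}\hat{\jmath}},
% which is the condition imposed directly, deferring the dynamics.
```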
40 pages, 1231 KB  
Review
Quaternionic and Octonionic Frameworks for Quantum Computation: Mathematical Structures, Models, and Fundamental Limitations
by Johan Heriberto Rúa Muñoz, Jorge Eduardo Mahecha Gómez and Santiago Pineda Montoya
Quantum Rep. 2025, 7(4), 55; https://doi.org/10.3390/quantum7040055 - 26 Nov 2025
Viewed by 824
Abstract
We develop detailed quaternionic and octonionic frameworks for quantum computation grounded on normed division algebras. Our central result is to prove the polynomial computational equivalence of quaternionic and complex quantum models: computation over ℍ is polynomially equivalent to the standard complex quantum circuit model and hence captures the same complexity class BQP up to polynomial reductions. Over ℍ, we construct a complete model—quaternionic qubits on right ℍ-modules with quaternion-valued inner products, unitary dynamics, associative tensor products, and universal gate sets—and establish polynomial equivalence with the standard complex model; routes for implementation at fidelities exceeding 99% via pulse-level synthesis on current hardware are discussed. Over 𝕆, non-associativity yields path-dependent evolution, ambiguous adjoints/inner products, non-associative tensor products, and possible failure of energy conservation outside associative sectors. We formalize these obstructions and systematize four mitigation strategies: confinement to associative subalgebras, G₂-invariant codes, dynamical decoupling of associator terms, and a seven-factor algebraic decomposition for gate synthesis. The results delineate the feasible quaternionic regime from the constrained octonionic landscape and point to applications in symmetry-protected architectures, algebra-aware simulation, and hypercomplex learning.
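The quaternionic-complex equivalence ultimately rests on the textbook embedding of ℍ into 2 × 2 complex matrices, which converts quaternionic arithmetic into complex linear algebra; a quick numerical check (the encoding is standard, the code is ours):

```python
import numpy as np

def quat_to_c2(q):
    """Embed q = a + bi + cj + dk into M_2(C):
    q -> [[a+bi, c+di], [-c+di, a-bi]]. This map is an injective
    *-homomorphism, so quaternion arithmetic becomes matrix arithmetic."""
    a, b, c, d = q
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

def quat_mul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# Check multiplicativity: phi(p * q) == phi(p) @ phi(q).
rng = np.random.default_rng(4)
p, q = rng.standard_normal(4), rng.standard_normal(4)
assert np.allclose(quat_to_c2(quat_mul(p, q)), quat_to_c2(p) @ quat_to_c2(q))
print("embedding respects multiplication")
```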

20 pages, 3617 KB  
Review
Advancing Precision Medicine in Degenerative Cervical Myelopathy
by Abdul Al-Shawwa and David W. Cadotte
J. Clin. Med. 2025, 14(23), 8344; https://doi.org/10.3390/jcm14238344 - 24 Nov 2025
Viewed by 1025
Abstract
Degenerative cervical myelopathy (DCM) is the leading cause of nontraumatic spinal cord dysfunction and remains clinically heterogeneous in presentation and course. This review synthesizes current evidence on predictors of neurological outcomes across conventional prognostic factors (clinical and macrostructural metrics) and quantitative neuroimaging (microstructural metrics), as well as how machine learning (ML) models integrate these predictors into a precision medicine framework to aid in DCM management. We explore evidence on conventional clinical and radiographic factors. Although several signs and scales are associated with clinical outcomes, cross-study inconsistency and the limits of linear models blunt their standalone utility, underscoring the need for multifactorial modelling. We then assess quantitative MRI biomarkers, including diffusion tensor imaging, magnetization transfer, and myelin water imaging, which index axonal integrity and myelination, thereby enriching risk stratification and prediction. Building on these measurements, we examine ML models combining clinical, imaging, and demographic features to predict postoperative outcomes and, increasingly, the natural history of mild DCM. Finally, current gaps and necessary future directions are outlined, including protocol harmonization, prospective multicentre validation, and clinician–patient education to support equitable uptake. Collectively, this review charts advances in DCM diagnosis and prognosis, highlighting the role of precision medicine tools for personalized patient care.
(This article belongs to the Section Nuclear Medicine & Radiology)

17 pages, 4269 KB  
Article
Bearing Fault Diagnosis Based on Multi-Channel WOA-VMD and Tucker Decomposition
by Lingjiao Chen, Wenxin Pan, Yuezhong Wu, Danjing Xiao, Mingming Xu, Hualian Qin and Zhongmei Wang
Appl. Sci. 2025, 15(22), 12232; https://doi.org/10.3390/app152212232 - 18 Nov 2025
Viewed by 429
Abstract
To address the challenges that rolling bearing vibration signals are easily affected by noise and that traditional single-channel methods cannot fully exploit multi-channel information, this paper proposes a multi-channel fault diagnosis method combining Whale Optimization Algorithm-assisted Variational Mode Decomposition (WOA-VMD) with Tucker tensor decomposition. In this method, multi-channel vibration signals are first adaptively decomposed using WOA-VMD, with optimized decomposition parameters to effectively extract weak fault features. The resulting intrinsic mode functions (IMFs) are then structured into a third-order tensor to preserve inter-channel correlations. Tucker decomposition is subsequently applied to extract robust feature vectors from the tensor factor matrices, achieving dimensionality reduction, redundancy suppression, and enhanced noise mitigation. Finally, statistical features such as standard deviation, kurtosis, and waveform factor are computed from the denoised signals and fed into a Support Vector Machine (SVM) classifier for precise fault identification. Experimental results show that the proposed method outperforms traditional approaches in extracting weak fault features, effectively leveraging correlations among multi-channel signals to extract meaningful features from noise-corrupted signals, and achieving efficient and reliable fault diagnosis.
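The tensorization step, stacking per-channel IMFs into a third-order tensor and reading features off the Tucker decomposition, can be sketched with TensorLy; the sizes, ranks, and feature readout below are illustrative, not the paper's settings:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Illustrative stand-in for WOA-VMD output: 4 channels x 6 IMFs x 1024 samples.
rng = np.random.default_rng(5)
imf_tensor = tl.tensor(rng.standard_normal((4, 6, 1024)))

# Tucker decomposition keeps inter-channel structure in the factor matrices.
core, (chan_f, imf_f, time_f) = tucker(imf_tensor, rank=[3, 4, 16])

# One simple feature vector: flatten the (small) core tensor.
features = tl.to_numpy(core).ravel()
print(features.shape)  # (3 * 4 * 16,) = (192,)
```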

15 pages, 2155 KB  
Article
Consistent Regularized Non-Negative Tucker Decomposition for Three-Dimensional Tensor Data Representation
by Xiang Gao and Linzhang Lu
Symmetry 2025, 17(11), 1969; https://doi.org/10.3390/sym17111969 - 14 Nov 2025
Viewed by 366
Abstract
Non-negative Tucker decomposition (NTD) is one of the most general and prominent decomposition tools designed for high-order tensor data, with its advantages reflected in feature extraction and low-dimensional representation of data. Most NTD-based methods apply intrinsic and different constraints only to the last factor matrix, which serves as a low-dimensional representation of the original tensor information. This processing procedure may result in the loss of the relationship between the factor matrices in all dimensions. To enhance the representation ability of NTD, we propose a consistent regularized non-negative Tucker decomposition for three-dimensional tensor data representation. Consistent regularization is symmetrically presented and mathematically expressed by intrinsic cues in multiple dimensions, that is, manifold structure and orthogonality information. The paired constraint constructed by the double-parameter operator is utilized to unlock hidden semantics and maintain the consistent geometric structure of the three-dimensional tensor. Moreover, we develop an iterative updating method based on the multiplicative update rule to solve the proposed model and provide its convergence and computational complexity. Extensive numerical results from unsupervised image clustering experiments on eight real-world datasets demonstrate the feasibility and efficiency of the new method.
(This article belongs to the Section Mathematics)
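Mode by mode, multiplicative NTD updates generalize the classic Lee-Seung rule for non-negative matrix factorization; the matrix case, shown below as a generic sketch rather than the paper's constrained updates, is only a few lines:

```python
import numpy as np

def nmf_multiplicative(V, rank=8, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0.
    NTD applies the same pattern to each factor matrix and the core."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # elementwise; keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # elementwise; keeps W >= 0
    return W, H

# Toy usage: recover a non-negative rank-8 structure.
rng = np.random.default_rng(6)
V = rng.random((60, 8)) @ rng.random((8, 40))
W, H = nmf_multiplicative(V)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small residual
```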

28 pages, 1522 KB  
Review
Toward Precision Post-Stroke Rehabilitation Medicine: Integrating Molecular, Imaging, and Computational Biomarkers for Functional Outcome Prediction
by Roxana Nartea, Simona Savulescu, Claudia Gabriela Potcovaru and Daniela Poenaru
J. Clin. Med. 2025, 14(22), 8077; https://doi.org/10.3390/jcm14228077 - 14 Nov 2025
Viewed by 1287
Abstract
Ischemic stroke remains a leading cause of mortality and long-term disability worldwide, with prognosis influenced by heterogeneous biological and neuroanatomical factors. In the past decade, numerous possible biomarkers—molecular, imaging, and electrophysiological—have been investigated to improve outcome prediction and guide rehabilitation strategies and main objectives. Among them, neurofilament light chain (NFL), a cytoskeletal protein released during neuroaxonal injury, has become an effective marker of the severity of the neurological condition and the integrity of the neurons. Additional circulating biomarkers, including thioredoxin, netrin-1, omentin-1, bilirubin, and others, have been linked to oxidative stress, angiogenesis, neuroprotection, and regenerative processes. Meanwhile, innovations in electrophysiology (EEG and TMS-based predictions) and neuroimaging (diffusion tensor imaging, corticospinal tract lesion load, and functional connectivity) add some additional perspectives on the possibility for brain recovery. This work is a narrative review synthesizing evidence from PubMed, Scopus, and Web of Science between 2015 and 2025, including both clinical and experimental studies addressing stroke biomarkers and outcome prediction. The review outlines a framework for the integration of multimodal biomarkers to support precision medicine and individualized rehabilitation in stroke.

12 pages, 653 KB  
Article
The Glymphatic System and Obesity: A Diffusion Tensor Imaging ALPS Study
by Kang Min Park, Jin-Hong Wi, Bong Soo Park, Dong Ah Lee and Jinseung Kim
Biomedicines 2025, 13(11), 2585; https://doi.org/10.3390/biomedicines13112585 - 22 Oct 2025
Cited by 1 | Viewed by 1026
Abstract
Background: Obesity is a known risk factor for neurodegenerative diseases, potentially due to impaired clearance of brain waste through the glymphatic system. While the association between obesity and brain dysfunction has been widely studied in populations with neurological conditions, it remains unclear whether glymphatic system function is already reduced in neurologically healthy individuals with obesity. This study aimed to investigate whether glymphatic system function, measured via the diffusion tensor imaging (DTI) analysis along the perivascular space (DTI-ALPS) index, differs according to obesity status in neurologically healthy adults. Methods: We retrospectively analyzed brain DTI data from 62 neurologically healthy participants stratified into underweight (BMI < 18.5 kg/m²), normal weight (BMI ≥ 18.5 and < 23.0 kg/m²), overweight (BMI ≥ 23.0 and < 25.0 kg/m²), and obese (BMI ≥ 25.0 kg/m²) groups based on the World Health Organization Asia-Pacific body mass index (BMI) criteria. Group differences were examined using Mann–Whitney U tests and analysis of covariance, after adjusting for age. Results: Participants with obesity had significantly lower DTI-ALPS index values (1.262 ± 0.150) compared to those in the normal weight (1.405 ± 0.168, p = 0.048) and overweight (1.423 ± 0.195, p = 0.029) categories, even after adjusting for age. The DTI-ALPS index was also significantly reduced in participants with obesity compared to participants in the BMI < 25 kg/m² group (1.410 ± 0.176, p = 0.015). Conclusions: This study provides the first evidence that obesity is linked to reduced glymphatic system function, as reflected by a lower DTI-ALPS index, in neurologically healthy adults. These findings underscore the importance of maintaining a healthy body weight to preserve brain waste clearance mechanisms and may offer insights into early vulnerability to neurodegenerative changes.
(This article belongs to the Section Molecular and Translational Medicine)
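For context, the DTI-ALPS index of Taoka et al. is a ratio of diffusivities measured in projection- and association-fiber regions beside the lateral ventricle body; in the usual notation (our summary, not from this paper):

```latex
% DTI-ALPS index: x is perpendicular to both fiber systems and roughly
% parallel to the perivascular space, so a higher ratio suggests freer
% perivascular diffusion (better glymphatic function).
\mathrm{ALPS} \;=\;
  \frac{\operatorname{mean}\!\big(D_{xx,\mathrm{proj}},\,
        D_{xx,\mathrm{assoc}}\big)}
       {\operatorname{mean}\!\big(D_{yy,\mathrm{proj}},\,
        D_{zz,\mathrm{assoc}}\big)}
```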
