Search Results (184)

Search Parameters:
Keywords = tensor decomposition models

30 pages, 1921 KB  
Article
TinyML for Sustainable Edge Intelligence: Practical Optimization Under Extreme Resource Constraints
by Mohamed Echchidmi and Anas Bouayad
Technologies 2026, 14(4), 215; https://doi.org/10.3390/technologies14040215 - 7 Apr 2026
Abstract
Deep learning has emerged as an effective tool for automatic waste classification, supporting cleaner cities and more sustainable recycling systems. Because environmental protection is central to the United Nations Sustainable Development Goals (SDGs), improving the sorting and processing of everyday waste is a practical step toward this broader objective. In many real-world settings, however, waste is still sorted manually, which is slow, labor-intensive, and prone to human error. Although convolutional neural networks (CNNs) can automate this task with high accuracy, many state-of-the-art models remain too large and computationally demanding for low-cost edge devices intended for deployment in homes, schools, and small recycling facilities. In this work, we investigate lightweight waste-classification models suitable for TinyML deployment while preserving competitive accuracy. We first benchmark multiple CNN architectures to establish a strong baseline, then apply complementary compression strategies including quantization, pruning, singular value decomposition (SVD) low-rank approximation, and knowledge distillation. In addition, we evaluate an RL-guided multi-teacher selection benchmark that adaptively chooses one teacher per minibatch during distillation to improve student training stability, achieving up to 85% accuracy with only 0.496 M parameters (FP32 ≈ 1.89 MB; INT8 ≈ 0.47 MB). Across all experiments, the best accuracy–size trade-off is obtained by combining knowledge distillation with post-training quantization, reducing the model footprint from approximately 16 MB to 281 KB while maintaining 82% accuracy. The resulting model is feasible for deployment on mobile applications and resource-constrained embedded devices based on model size and TensorFlow Lite Micro compatibility. Full article
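The quoted footprint figures can be reproduced with simple arithmetic. The sketch below is illustrative (the helper name is ours, not the paper's); it assumes 4 bytes per FP32 weight and 1 byte per INT8 weight and ignores serialization overhead.

```python
# Back-of-envelope model footprint estimates for the 0.496 M-parameter
# student model quoted in the abstract. Hypothetical helper; only the
# parameter count and per-weight byte width are inputs.

def model_size_mb(n_params, bytes_per_weight):
    """Approximate serialized weight size in MB (1 MB = 2**20 bytes)."""
    return n_params * bytes_per_weight / 2**20

params = 496_000                   # 0.496 M parameters
fp32 = model_size_mb(params, 4)    # 32-bit floats
int8 = model_size_mb(params, 1)    # post-training INT8 quantization

print(f"FP32 = {fp32:.2f} MB, INT8 = {int8:.2f} MB")
```

The result matches the abstract's FP32 ≈ 1.89 MB and INT8 ≈ 0.47 MB, i.e. quantization alone gives the expected 4× reduction before any pruning or distillation.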
42 pages, 1385 KB  
Article
A Variational and Multiplicative Tensor Framework for Eddy Current Modeling in Anisotropic Composite Materials with Defects
by Mario Versaci, Giovanni Angiulli, Francesco Carlo Morabito and Annunziata Palumbo
Mathematics 2026, 14(7), 1141; https://doi.org/10.3390/math14071141 - 28 Mar 2026
Abstract
Eddy-current inspection of anisotropic composites, such as aeronautical CFRP, demands models that ensure mathematical rigor, tensorial consistency, and clear energetic interpretation. This work presents a novel unified variational framework with a multiplicative tensor perturbation for the time-harmonic eddy-current problem in anisotropic media with defective regions. The formulation is posed in the natural spaces H(curl; Ω) × H¹(Ω_c), and the well-posedness is established via the Lax–Milgram theorem under physically consistent assumptions on permeability and conductivity. The sesquilinear form admits a Hermitian decomposition that separates dissipative and reactive contributions, revealing the energetic structure of the weak formulation. Defects are modeled through multiplicative modifications of the baseline anisotropic conductivity tensor. This congruence-based approach preserves symmetry and positive definiteness, ensuring non-negative Joule losses and structural stability, allowing a modular representation of subsurface delamination, fiber breakage, conductive inclusions, and distributed porosity within a single tensorial framework. A central result of the present formulation is the reconstruction of the complex power functional from the evaluation of the weak form at the solution, showing that the active dissipated power and the magnetic reactive power arise directly from the same integral terms. Through the complex Poynting theorem, the quadratic form is linked to the internal complex power, establishing a direct connection between the variational formulation and measurable quantities such as probe impedance variations. Simulations of realistic layered CFRP configurations, including single- and multi-defect scenarios, confirm that, compared with additive perturbations, the multiplicative model provides enhanced energetic contrast, particularly in strongly anisotropic and interacting defect conditions.
Agreement with experimental measurements, supported by a quantitative comparison of dissipated power variations obtained from controlled EC experiments, corroborates the physical relevance and robustness of the proposed complex power functional. Full article
(This article belongs to the Special Issue Mathematical and Computational Methods for Mechanics and Engineering)
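The key structural claim, that a congruence transform of the conductivity tensor preserves symmetry and positive definiteness (hence non-negative Joule losses), is easy to check numerically. The sketch below is a minimal illustration with an assumed 3×3 anisotropic conductivity and a hypothetical perturbation matrix P, not the paper's actual defect parameterization.

```python
import numpy as np

# Congruence-based defect model: sigma_defect = P^T @ sigma @ P.
# For any invertible P, symmetry and positive definiteness of sigma
# carry over, which guarantees non-negative Joule dissipation.

rng = np.random.default_rng(0)

sigma = np.diag([5.0e4, 5.0e4, 1.0e2])            # baseline anisotropic conductivity (S/m), CFRP-like
P = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # hypothetical defect perturbation (near identity)

sigma_defect = P.T @ sigma @ P

assert np.allclose(sigma_defect, sigma_defect.T)      # symmetry preserved
assert np.all(np.linalg.eigvalsh(sigma_defect) > 0)   # still positive definite
print("congruence transform preserved SPD structure")
```

An additive perturbation sigma + D, by contrast, can lose positive definiteness for a sufficiently negative D, which is one motivation the abstract gives for the multiplicative form.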

16 pages, 5535 KB  
Article
ADS-B Flight Trajectory Tensor Data Recovery Method Based on Truncated Schatten p-Norm
by Weining Zhang, Hongwei Li, Ziyuan Deng, Qing Cheng and Jinghan Du
Appl. Sci. 2026, 16(7), 3217; https://doi.org/10.3390/app16073217 - 26 Mar 2026
Abstract
To address the issue of missing position in flight trajectory data collected by Automatic Dependent Surveillance-Broadcast (ADS-B) systems, a flight trajectory tensor completion model based on truncated Schatten p-norm minimization is proposed. First, the low-rank characteristics of the trajectory set are validated using Singular Value Decomposition (SVD); based on this, the data is transformed into a three-dimensional tensor structure. Next, a regularization strategy combining the Schatten p-norm with a singular value truncation mechanism is introduced to construct the trajectory tensor completion model, which suppresses noise and interference from minor components while preserving the main variation patterns of the trajectories. Finally, the model is optimized and solved using the Alternating Direction Method of Multipliers (ADMM) to obtain the completed trajectories. Taking historical ADS-B trajectory data from Orly Airport to Toulouse Airport as an example, the completion results of the proposed model under different missing patterns, missing rates, and flight phases are analyzed from both qualitative and quantitative perspectives. Experimental results show that compared with other representative models, the proposed model achieves the best completion performance under different missing patterns and missing rates; the completion performance during the cruise phase is better than during the ascent and descent phases. The proposed model can serve as a preprocessing technique for flight trajectory data in air traffic, providing more complete and reliable data support for various downstream applications. Full article
(This article belongs to the Section Transportation and Future Mobility)
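The truncation idea behind the model can be sketched as a singular-value operator that leaves the top-r values (the main trajectory patterns) untouched and shrinks only the tail (noise and minor components). The operator name, threshold, and test data below are illustrative assumptions; the paper wraps such a step inside an ADMM solver over tensor unfoldings.

```python
import numpy as np

# Truncated singular-value shrinkage: keep the leading r singular values
# intact, soft-threshold the rest. This mimics the truncated Schatten
# p-norm's selective penalization of minor components.

def truncated_sv_shrink(M, r, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_new = s.copy()
    s_new[r:] = np.maximum(s[r:] - tau, 0.0)   # shrink only the tail
    return U @ np.diag(s_new) @ Vt

# Low-rank "trajectory set" plus noise: tail shrinkage moves the
# observation closer to the clean low-rank component.
rng = np.random.default_rng(1)
L = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 30))  # rank-5 ground truth
M = L + 0.1 * rng.standard_normal((40, 30))                      # noisy observation
M_hat = truncated_sv_shrink(M, r=5, tau=1.0)

err_raw = np.linalg.norm(M - L) / np.linalg.norm(L)
err_hat = np.linalg.norm(M_hat - L) / np.linalg.norm(L)
print(f"relative error: raw {err_raw:.4f} -> shrunk {err_hat:.4f}")
```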

31 pages, 23527 KB  
Article
SLC-Domain SAR RFI Suppression via Sliding-Window Local Tensorization and Energy-Guided CUR Projection
by Qiang Guo, Yuhang Tian, Shuai Huang, Liangang Qi and Sergiy Shulga
Remote Sens. 2026, 18(4), 652; https://doi.org/10.3390/rs18040652 - 20 Feb 2026
Abstract
Synthetic aperture radar (SAR) imaging is highly vulnerable to radio-frequency interference (RFI) in complex electromagnetic environments, which can introduce structured artifacts and obscure targets in single-look complex (SLC) products. Most existing suppression methods rely on separability along a single dimension or require interference-specific parameter tuning, limiting robustness under multidimensional coupling and strong scatterers. We propose a range-domain sliding-window local tensorization that rearranges SLC data into localized range–azimuth–block-index tensors to better expose multi-mode correlations. On this representation, an energy-guided tensor CUR low-rank projector is embedded into an alternating-projection scheme that alternates complex-valued soft-thresholding for the sparse scene-plus-noise term and CUR-based projection for the structured RFI term. The cleaned SLC image is obtained by de-tensorizing the estimated RFI component and subtracting it from the input SLC. Experiments on semi-synthetic data, where controlled RFI is superimposed on real SLC scenes, and on real Sentinel-1 SLC data containing RFI demonstrate improved Pearson correlation coefficient (PCC) and perceptual image quality while preserving target signatures and scene textures, particularly under strong interference and strong coupling. The proposed approach provides a practical SLC-domain RFI mitigation tool for post-focusing SAR products without requiring explicit interference parameterization. Full article
(This article belongs to the Section Remote Sensing Image Processing)
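A CUR low-rank projector of the kind described can be sketched with an assumed energy rule: select the k columns and rows with the largest Euclidean norm and form the projection from their intersection. This is a simplified matrix illustration, not the paper's exact per-mode tensor scheme.

```python
import numpy as np

# Energy-guided CUR sketch: C and R are the highest-energy columns/rows,
# U is the pseudoinverse of their intersection. For data of exact rank k
# this reproduces the input, which is what makes CUR usable as a
# low-rank projector for the structured RFI term.

def energy_cur(M, k):
    cols = np.argsort(-np.linalg.norm(M, axis=0))[:k]   # highest-energy columns
    rows = np.argsort(-np.linalg.norm(M, axis=1))[:k]   # highest-energy rows
    C, R = M[:, cols], M[rows, :]
    U = np.linalg.pinv(C[rows, :])                      # pinv of the intersection block
    return C @ U @ R

rng = np.random.default_rng(2)
M = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 25))  # exact rank 4
M_cur = energy_cur(M, k=4)
print(np.allclose(M, M_cur))  # True for exactly low-rank input
```

In the alternating-projection scheme described above, such a projector would handle the structured interference term while complex soft-thresholding handles the sparse scene-plus-noise term.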

34 pages, 489 KB  
Article
Gauge-Invariant Gravitational Wave Polarization in Metric f(R) Gravity with Cosmological Implications
by Ramesh Radhakrishnan, David McNutt, Delaram Mirfendereski, Alejandro Pinero, Eric Davis, William Julius and Gerald Cleaver
Universe 2026, 12(2), 44; https://doi.org/10.3390/universe12020044 - 5 Feb 2026
Abstract
We develop a fully gauge-invariant analysis of gravitational-wave polarizations in metric f(R) gravity with a particular focus on the modified Starobinsky model f(R) = R + αR² − 2Λ, whose constant-curvature solution R_d = 4Λ provides a natural de Sitter background for both early- and late-time cosmology. Linearizing the field equations around this background, we derive the Klein–Gordon equation for the curvature perturbation δR and show that the scalar propagating mode acquires a mass m_ψ² = 1/(6α), highlighting how the same scalar degree of freedom governs inflationary dynamics at high curvature and the propagation of gravitational waves in the current accelerating Universe. Using the scalar–vector–tensor decomposition and a decomposition of the perturbed Ricci tensor, we obtain a set of fully gauge-invariant propagation equations that isolate the contributions of the scalar, vector, and tensor modes in the presence of matter. We find that the tensor sector retains the two transverse–traceless polarizations of General Relativity, while the scalar sector contains an additional massive scalar propagating degree of freedom, which manifests through breathing and longitudinal tidal responses depending on the wave regime and detector frame. Through the geodesic deviation equation—computed both in a local Minkowski patch and in fully covariant de Sitter form—we independently recover the same polarization content and identify its tidal signatures. The resulting framework connects the extra scalar polarization to cosmological observables: the massive scalar propagating mode sets the range of the fifth force, influences the time evolution of gravitational potentials, and affects the propagation and dispersion of gravitational waves on cosmological scales. This provides a unified, gauge-invariant link between gravitational-wave phenomenology and the cosmological implications of metric f(R) gravity. Full article
(This article belongs to the Section Gravitation)
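The quoted scalaron mass follows directly from the vacuum trace equation of metric f(R) gravity. The sketch below uses standard conventions and is consistent with the abstract's m_ψ² = 1/(6α); signs of the d'Alembertian depend on the metric signature chosen.

```latex
% Vacuum trace of the f(R) field equations: 3\Box f'(R) + f'(R)\,R - 2f(R) = 0.
% For the modified Starobinsky model:
f(R) = R + \alpha R^{2} - 2\Lambda, \qquad f'(R) = 1 + 2\alpha R
\quad\Rightarrow\quad 6\alpha\,\Box R - R + 4\Lambda = 0 .
% Constant curvature gives R_d = 4\Lambda; linearizing R = R_d + \delta R:
6\alpha\,\Box\,\delta R = \delta R
\quad\Longrightarrow\quad
\left(\Box - m_{\psi}^{2}\right)\delta R = 0 ,
\qquad m_{\psi}^{2} = \frac{1}{6\alpha}.
```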

23 pages, 1844 KB  
Article
Short-Term Forecast of Tropospheric Zenith Wet Delay Based on TimesNet
by Xuan Zhao, Shouzhou Gu, Jinzhong Mi, Jianquan Dong, Long Xiao and Bin Chu
Sensors 2026, 26(3), 991; https://doi.org/10.3390/s26030991 - 3 Feb 2026
Abstract
The tropospheric zenith wet delay (ZWD) serves as a pivotal parameter for atmospheric water vapour inversion. By converting it into precipitable water vapour, high-temporal-resolution atmospheric humidity monitoring becomes feasible, providing crucial support for enhancing short-term rainfall forecast accuracy. However, ZWD exhibits significant non-stationarity due to complex influencing factors, and traditional models struggle to achieve precise predictions across all scenarios owing to limitations in local feature extraction. This article employs a ZWD prediction method based on the dynamic temporal decomposition module of TimesNet, re-constructing one-dimensional high-frequency ZWD time series into two-dimensional tensors to overcome the technical limitations of conventional models. Comprehensively considering topographical characteristics, climatic features, and seasonal factors, experiments were conducted using 30 s ZWD data from 20 IGS stations. This dataset comprised four consecutive days of PPP solutions for each season in 2023. Through comparative experiments with CNN-ATT and Informer models, the global prediction accuracy, seasonal adaptability, and topographical robustness of TimesNet were systematically evaluated. Results demonstrate that under the input-prediction window configuration where each can achieve the optimal accuracy, TimesNet achieves an average seasonal Root Mean Square Error (RMSE) of 5.73 mm across all seasonal station samples, outperforming Informer (7.89 mm) and CNN-ATT (10.02 mm) by 27.4% and 42.8%, respectively. It maintains robust performance under the most challenging conditions—including summer severe convection, high-altitude terrain, and climatically variable maritime zones—while achieving sub-5 mm precision in stable environments. This provides a reliable algorithmic foundation for short-term precipitation forecasting in Global Navigation Satellite System (GNSS) real-time meteorology. Full article
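The 1D-to-2D reshaping that TimesNet relies on can be sketched with an FFT-based dominant-period estimate: fold the series so that intra-period and inter-period variation become the two axes of a 2D tensor. Function name and test signal below are illustrative, not the paper's configuration.

```python
import numpy as np

# TimesNet-style temporal folding: estimate the dominant period via the
# amplitude spectrum, then reshape the 1D series into an
# (n_periods, period) 2D tensor suitable for 2D convolutions.

def fold_by_dominant_period(x):
    spec = np.abs(np.fft.rfft(x))
    spec[0] = 0.0                               # ignore the DC component
    period = len(x) // int(np.argmax(spec))     # dominant period length
    n = (len(x) // period) * period             # whole number of periods
    return x[:n].reshape(-1, period)

# Synthetic "ZWD-like" series: a 24-sample cycle plus noise, 10 cycles long.
t = np.arange(240)
x = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(3).standard_normal(240)
X2d = fold_by_dominant_period(x)
print(X2d.shape)  # (10, 24)
```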

27 pages, 6074 KB  
Article
Automatic Generation of T-Splines with Extraordinary Points Based on Domain Decomposition of Quadrilateral Patches
by João Carlos L. Peixoto, Rafael L. Rangel and Luiz Fernando Martha
Mathematics 2026, 14(3), 392; https://doi.org/10.3390/math14030392 - 23 Jan 2026
Abstract
Isogeometric analysis (IGA) is a numerical methodology for solving differential equations by employing basis functions that preserve the exact geometry of the domain. This approach is based on a class of mathematical functions known as NURBS (Non-Uniform Rational B-Splines). Representing a domain with NURBS entities often requires multiple patches, especially for complex geometries. Bivariate NURBS, defined as tensor products, enforce global refinements within a patch and, in multi-patch models, these refinements are propagated to other model patches. The use of T-Splines with extraordinary points offers a solution to this issue by enabling local refinements through unstructured meshes. The analysis of T-Spline models is performed using a Bézier extraction technique that relies on extraction operators that map Bézier functions to T-Spline functions. When generating a T-Spline model, careful attention is required to ensure that all T-Spline functions are linearly independent—a necessary condition for IGA—in order to form T-Splines that are suitable for analysis. In this sense, this work proposes a methodology to automate the generation of bidimensional unstructured meshes for IGA through T-Splines with extraordinary points. An algorithm for generating unstructured finite element meshes, based on domain decomposition of quadrilateral patches, is adapted to construct T-Spline models. Validation models demonstrate the methodology’s flexibility in generating locally refined isogeometric models. Full article
(This article belongs to the Special Issue Numerical Modeling and Applications in Mechanical Engineering)

20 pages, 13461 KB  
Article
Multi-View 3D Reconstruction of Ship Hull via Multi-Scale Weighted Neural Radiation Field
by Han Chen, Xuanhe Chu, Ming Li, Yancheng Liu, Jingchun Zhou, Xianping Fu, Siyuan Liu and Fei Yu
J. Mar. Sci. Eng. 2026, 14(2), 229; https://doi.org/10.3390/jmse14020229 - 21 Jan 2026
Abstract
The 3D reconstruction of vessel hulls is crucial for enhancing safety, efficiency, and knowledge in the maritime industry. Neural Radiance Fields (NeRFs) are an alternative to 3D reconstruction and rendering from multi-view images; particularly, tensor-based methods have proven effective in improving efficiency. However, existing tensor-based methods typically suffer from a lack of spatial coherence, resulting in gaps in the reconstruction of fine-grained geometric structures. This paper proposes a spatial multi-scale weighted NeRF (MDW-NeRF) for accurate and efficient surface reconstruction of vessel hulls. The proposed method develops a novel multi-scale feature decomposition mechanism that models 3D space by leveraging multi-resolution features, facilitating the integration of high-resolution details with low-resolution regional information. We designed separate color and density weighting, using a coarse-to-fine strategy, for density and a weighted matrix for color to decouple feature vectors from appearance attributes. To boost the efficiency of 3D reconstruction and rendering, we implement a hybrid sampling point strategy for volume rendering, selecting sample points based on volumetric density. Extensive experiments on the SVH dataset confirm MDW-NeRF’s superiority: quantitatively, it outperforms TensoRF by 1.5 dB in PSNR and 6.1% in CD, and shrinks the model size by 9%, with comparable training times; qualitatively, it resolves tensor-based methods’ inherent spatial incoherence and fine-grained gaps, enabling accurate restoration of hull cavities and realistic surface texture rendering. These results validate our method’s effectiveness in achieving excellent rendering quality, high reconstruction accuracy, and timeliness. Full article

20 pages, 3939 KB  
Article
Multi-Rate PMU Data Fusion in Power Systems via Low Rank Tensor Train
by Yuan Li, Tao Zheng, Yonghua Chen, Shu Zheng, Jingtao Zhao and Bo Sun
Energies 2026, 19(2), 530; https://doi.org/10.3390/en19020530 - 20 Jan 2026
Abstract
With the continuous development of power systems, wide-area measurement systems (WAMS) have become increasingly important for real-time system monitoring. As the core devices of WAMS, phasor measurement units (PMUs) can provide synchronized, high-precision, and high-resolution measurements of power system states. However, in practical applications, PMUs deployed in different regions often operate at different sampling rates, resulting in multi-rate measurement data and posing challenges for data fusion. To address this issue, this paper proposes a multi-rate PMU data fusion method based on a low-rank tensor train (TT). Specifically, the proposed method first performs tensor-based modeling of the multi-rate measurement data, embedding multidimensional correlations into a high-order tensor representation. Then, a data completion model is constructed through low-rank TT decomposition to effectively capture cross-timescale dependencies. Finally, an efficient numerical solution is developed to expand low-resolution measurements into high-resolution data, thereby achieving unified data fusion. Case studies on both simulated and real-world PMU measurement data demonstrate that the proposed approach outperforms traditional interpolation and matrix completion methods, achieving superior reconstruction accuracy and robustness. Full article
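The tensor-train modeling step can be illustrated with a textbook TT-SVD on a small 3-way tensor. Shapes and the full-rank setting below are assumptions for the sketch; the paper's completion model additionally truncates the TT ranks and handles missing entries.

```python
import numpy as np

# Minimal TT-SVD: factor a 3-way tensor into cores G1 (1,n1,r1),
# G2 (r1,n2,r2), G3 (r2,n3,1) via two sequential SVDs. At full rank
# the reconstruction is exact; low-rank truncation of s/s2 gives the
# compressed representation used for completion.

def tt_svd(T):
    n1, n2, n3 = T.shape
    U, s, Vt = np.linalg.svd(T.reshape(n1, n2 * n3), full_matrices=False)
    r1 = len(s)
    G1 = U.reshape(1, n1, r1)
    M = (np.diag(s) @ Vt).reshape(r1 * n2, n3)
    U2, s2, Vt2 = np.linalg.svd(M, full_matrices=False)
    r2 = len(s2)
    G2 = U2.reshape(r1, n2, r2)
    G3 = (np.diag(s2) @ Vt2).reshape(r2, n3, 1)
    return G1, G2, G3

def tt_reconstruct(G1, G2, G3):
    n1, n2, n3 = G1.shape[1], G2.shape[1], G3.shape[1]
    M = np.tensordot(G1, G2, axes=([2], [0]))   # (1, n1, n2, r2)
    M = np.tensordot(M, G3, axes=([3], [0]))    # (1, n1, n2, n3, 1)
    return M.reshape(n1, n2, n3)

T = np.random.default_rng(4).standard_normal((6, 7, 8))
G1, G2, G3 = tt_svd(T)
print(np.allclose(T, tt_reconstruct(G1, G2, G3)))  # True at full rank
```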

22 pages, 5177 KB  
Article
Tensor-Train-Based Elastic Wavefield Decomposition in VTI Media
by Youngjae Shin
Appl. Sci. 2026, 16(2), 569; https://doi.org/10.3390/app16020569 - 6 Jan 2026
Abstract
Elastic wavefield decomposition into quasi-compressional (qP) and quasi-shear-vertical (qSV) modes is essential for elastic imaging and inversion in VTI media, but becomes computationally expensive when polarization vectors vary strongly in space. I propose a tensor-train (TT) representation of mixed-domain decomposition projectors, constructed via TT-cross with a single user-specified tolerance and applied efficiently using FFT-based operations. A residual-orthogonal strategy extracts qSV from the residual wavefield after qP removal to suppress mode leakage. The method is implemented in Python/PyTorch with GPU acceleration. Numerical experiments on three 2D VTI models (a two-layer benchmark, a BP 2007 benchmark subset, and an Overthrust-based structurally complex model) demonstrate reconstruction errors of 0.094–0.89% for TT, compared to 1.67–6.44% for a conventional CUR low-rank approach (a 4–46× improvement), with consistently lower cross-talk and near-unity energy ratios. Time-domain receiver traces further confirm that TT yields smaller reconstruction residual spikes and reduced cross-mode leakage than CUR. Runtime tests show that CUR can be faster on smaller grids, whereas TT with GPU acceleration becomes competitive and can outperform CUR for larger models. The TT representation's storage scales as O(d·Ns·r²) in the tensor order d, mode size Ns, and TT rank r, enabling practical extension to higher-dimensional projector tensors where conventional methods become impractical. Full article
(This article belongs to the Special Issue Exploration Geophysics and Seismic Surveying)

23 pages, 5940 KB  
Article
Research on High-Precision DOA Estimation Method for UAV Platform in Strong Multipath Environment
by Yuxiao Yang, Junjie Li, Qirui Cai and Daisi Yang
Electronics 2026, 15(1), 134; https://doi.org/10.3390/electronics15010134 - 27 Dec 2025
Abstract
Utilizing unmanned aerial vehicles (UAVs) to achieve accurate direction finding of radiation sources in hazardous and complex regions is an important means of information reconnaissance. However, the significant multipath effects of UAVs in complex environments cause serious signal coherence problems. Conventional signal decoherence techniques such as spatial smoothing (SS) and matrix reconstruction suffer from array aperture loss, which makes it difficult to meet the requirements of UAVs for high-resolution direction finding in severe multipath environments. Therefore, resolving the signal coherence problem has become a key bottleneck for high-resolution direction-of-arrival (DOA) estimation techniques in severe multipath environments. This paper proposes a joint high-precision DOA estimation method based on conjugate cross-correlation Toeplitz reconstruction and the Parallel Factor Analysis (PARAFAC) tensor model. First, we introduce the conjugate cross-correlation values of array element data collected by the UAV to conduct Toeplitz reconstruction without dimensionality-reduced reconstruction, achieving signal decoherence. Furthermore, we conduct cross-snapshot cross-correlation between the reconstruction matrix and the data of each array element collected by the UAV, which effectively suppresses noise accumulation and improves the signal-to-noise ratio (SNR). Finally, we stack the set of matrices into a three-dimensional tensor, employing PARAFAC tensor decomposition to enhance the UAV DOA estimation performance. Simulation results show that at low SNR, the proposed method can effectively improve estimation accuracy and solve the problem of signal correlation in strong multipath scenarios that limits traditional UAV direction-finding methods. Full article
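The rank-restoration effect of Toeplitz reconstruction, which underpins the decoherence step, can be sketched on an assumed uniform linear array with two fully coherent paths. This is a simplified illustration of the principle, not the paper's conjugate cross-correlation construction.

```python
import numpy as np

# Two fully coherent arrivals collapse the array covariance to rank 1,
# defeating subspace DOA methods. A Hermitian Toeplitz matrix rebuilt
# from one correlation row restores rank 2, one per path.

M = 8
m = np.arange(M)
a = lambda u: np.exp(1j * np.pi * m * u)   # ULA steering vector, u = sin(theta)
x = a(0.3) + 0.8 * a(-0.2)                 # direct path + coherent multipath

R = np.outer(x, x.conj())                  # coherent covariance: rank 1
c = x * x[0].conj()                        # cross-correlation with element 0

# Hermitian Toeplitz matrix T[i, j] = c[i - j] (conjugate for negative lags)
T = np.empty((M, M), complex)
for i in range(M):
    for j in range(M):
        T[i, j] = c[i - j] if i >= j else c[j - i].conj()

rank = lambda A: np.linalg.matrix_rank(A, tol=1e-8)
print(rank(R), rank(T))  # 1 2
```

The Toeplitz matrix decomposes as a sum of one rank-1 term per arrival (a Vandermonde structure), so both paths become visible to the subsequent PARAFAC step.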

18 pages, 971 KB  
Article
Tucker Decomposition-Based Feature Selection and SSA-Optimized Multi-Kernel SVM for Transformer Fault Diagnosis
by Luping Wang and Xiaolong Liu
Sensors 2025, 25(24), 7547; https://doi.org/10.3390/s25247547 - 12 Dec 2025
Abstract
Accurate fault diagnosis of power transformers is critical for maintaining grid reliability, yet conventional dissolved gas analysis (DGA) methods face challenges in feature representation and high-dimensional data processing. This paper presents an intelligent diagnostic framework that synergistically integrates systematic feature engineering, tensor decomposition-based feature selection, and a sparrow search algorithm (SSA)-optimized multi-kernel support vector machine (MKSVM) for transformer fault classification. The proposed approach first expands the original five-dimensional gas concentration measurements to a twelve-dimensional feature space by incorporating domain-driven IEC 60599 ratio indicators and statistical aggregation descriptors, effectively capturing nonlinear interactions among gas components. Subsequently, a novel Tucker decomposition framework is developed to construct a three-way tensor encoding sample–feature–class relationships, where feature importance is quantified through both discriminative power and structural significance in low-rank representations, successfully reducing dimensionality from twelve to seven critical features while retaining 95% of discriminative information. The multi-kernel SVM architecture combines radial basis function, polynomial, and sigmoid kernels with optimized weights and hyperparameters configured through SSA’s hierarchical producer–scrounger search mechanism. Experimental validation on DGA samples across seven fault categories demonstrates that the proposed method achieves 98.33% classification accuracy, significantly outperforming existing methods, including kernel PCA-based approaches, deep learning models, and ensemble techniques. The framework establishes a reliable and accurate solution for transformer condition monitoring in power systems. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
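The feature-scoring idea can be sketched with the feature-mode step of an HOSVD, a standard way of computing Tucker factors. The scoring rule and synthetic data below are illustrative assumptions, not the paper's exact importance measure combining discriminative power and structural significance.

```python
import numpy as np

# HOSVD sketch for a sample x feature x class tensor: unfold along the
# feature mode, take the leading left singular directions, and score each
# feature by its energy in that low-rank representation.

def feature_scores(T, n_components):
    Tf = np.moveaxis(T, 1, 0).reshape(T.shape[1], -1)   # feature-mode unfolding
    U, s, _ = np.linalg.svd(Tf, full_matrices=False)
    W = U[:, :n_components] * s[:n_components]          # weighted leading directions
    return np.linalg.norm(W, axis=1)                    # per-feature importance

# 100 samples, 12 features, 7 classes; feature 3 made artificially dominant.
rng = np.random.default_rng(5)
T = 0.05 * rng.standard_normal((100, 12, 7))
T[:, 3, :] += 1.0
scores = feature_scores(T, n_components=3)
print(int(np.argmax(scores)))  # 3
```

Ranking features by such scores and keeping the top seven is the kind of dimensionality reduction (12 to 7 features) the abstract reports.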

23 pages, 1181 KB  
Article
Robust Regularized Recursive Least-Squares Algorithm Based on Third-Order Tensor Decomposition
by Radu-Andrei Otopeleanu, Constantin Paleologu, Jacob Benesty, Cristian-Lucian Stanciu, Laura-Maria Dogariu and Ruxandra-Liana Costea
Algorithms 2025, 18(12), 768; https://doi.org/10.3390/a18120768 - 5 Dec 2025
Abstract
The decomposition-based adaptive filtering algorithms have recently gained increasing interest due to their capability to reduce the parameter space. In this context, the third-order tensor (TOT) decomposition technique reformulates the conventional approach that involves a single (usually long) adaptive filter by using a combination of three shorter filters via the Kronecker product. This leads to a twofold gain in terms of both performance and complexity. Thus, it can be applied efficiently when operating with more complex algorithms, like the recursive least-squares (RLS) approach. In this paper, we develop an RLS-TOT algorithm with improved robustness features due to a novel regularization method that considers the contribution of the external noise and the so-called model uncertainties (which are related to the system). Simulation results obtained in the framework of echo cancelation support the performance of the proposed algorithm, which outperforms the existing RLS-TOT counterparts, as well as the conventional RLS algorithm that uses the specific regularization technique. Full article
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)
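The parameter-space reduction the TOT decomposition exploits is easy to see numerically: a long, (near-)separable impulse response of length L1·L2·L3 is the Kronecker product of three short filters with only L1 + L2 + L3 coefficients to adapt. The lengths below are illustrative assumptions.

```python
import numpy as np

# Kronecker structure behind the TOT approach: one 512-tap filter is
# represented exactly by three short filters totalling 28 coefficients.

L1, L2, L3 = 4, 8, 16
rng = np.random.default_rng(6)
h1, h2, h3 = rng.standard_normal(L1), rng.standard_normal(L2), rng.standard_normal(L3)

h = np.kron(h1, np.kron(h2, h3))     # long filter: 4 * 8 * 16 = 512 taps
print(h.size, L1 + L2 + L3)          # 512 vs 28 adapted parameters
```

Adapting the three short filters instead of the long one is what yields the twofold gain in performance and complexity claimed above, especially for RLS whose cost grows quickly with filter length.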

40 pages, 1231 KB  
Review
Quaternionic and Octonionic Frameworks for Quantum Computation: Mathematical Structures, Models, and Fundamental Limitations
by Johan Heriberto Rúa Muñoz, Jorge Eduardo Mahecha Gómez and Santiago Pineda Montoya
Quantum Rep. 2025, 7(4), 55; https://doi.org/10.3390/quantum7040055 - 26 Nov 2025
Abstract
We develop detailed quaternionic and octonionic frameworks for quantum computation grounded in normed division algebras. Our central result is the polynomial computational equivalence of quaternionic and complex quantum models: computation over H is polynomially equivalent to the standard complex quantum circuit model and hence captures the same complexity class BQP up to polynomial reductions. Over H, we construct a complete model (quaternionic qubits on right H-modules with quaternion-valued inner products, unitary dynamics, associative tensor products, and universal gate sets) and establish polynomial equivalence with the standard complex model; routes to implementation at fidelities exceeding 99% via pulse-level synthesis on current hardware are discussed. Over O, non-associativity yields path-dependent evolution, ambiguous adjoints and inner products, non-associative tensor products, and a possible failure of energy conservation outside associative sectors. We formalize these obstructions and systematize four mitigation strategies: confinement to associative subalgebras, G2-invariant codes, dynamical decoupling of associator terms, and a seven-factor algebraic decomposition for gate synthesis. The results delineate the feasible quaternionic regime from the constrained octonionic landscape and point to applications in symmetry-protected architectures, algebra-aware simulation, and hypercomplex learning.
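The intuition behind the equivalence of quaternionic and complex models rests on the standard embedding of H into the 2x2 complex matrices, under which quaternion multiplication becomes ordinary (associative) matrix multiplication. The sketch below illustrates that embedding and checks that it preserves the Hamilton product; it is a generic algebraic demonstration, not the paper's construction, and the function names are illustrative.

```python
import numpy as np

def quat_to_mat(a, b, c, d):
    """Embed q = a + b*i + c*j + d*k as a 2x2 complex matrix:
    i -> [[1j,0],[0,-1j]], j -> [[0,1],[-1,0]], k -> [[0,1j],[1j,0]]."""
    return np.array([[a + b * 1j,  c + d * 1j],
                     [-c + d * 1j, a - b * 1j]])

def quat_mul(p, q):
    """Hamilton product of two quaternions given as (a, b, c, d) tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# The embedding is a homomorphism: M(p*q) == M(p) @ M(q).
p, q = (1.0, 2.0, 3.0, 4.0), (0.5, -1.0, 0.25, 2.0)
lhs = quat_to_mat(*quat_mul(p, q))
rhs = quat_to_mat(*p) @ quat_to_mat(*q)
print(np.allclose(lhs, rhs))  # prints True
```

No analogous matrix embedding exists for O, since matrix multiplication is associative while the octonions are not, which is exactly why the octonionic case in the paper requires separate treatment.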

31 pages, 3429 KB  
Article
Cross-Modal Attention Fusion: A Deep Learning and Affective Computing Model for Emotion Recognition
by Himanshu Kumar, Martin Aruldoss and Martin Wynn
Multimodal Technol. Interact. 2025, 9(12), 116; https://doi.org/10.3390/mti9120116 - 24 Nov 2025
Cited by 1 | Viewed by 2311
Abstract
Artificial emotional intelligence is a sub-domain of human–computer interaction research that aims to develop deep learning models capable of detecting and interpreting human emotional states through various modalities. A major challenge in this domain is identifying meaningful correlations between heterogeneous modalities (for example, between audio and visual data) owing to their distinct temporal and spatial properties. Traditional fusion techniques used in multimodal learning to combine data from different sources often fail to capture meaningful cross-modal interactions at reasonable computational cost, and struggle to adapt to varying modality reliability. Following a review of the relevant literature, this study adopts an experimental research method to develop and evaluate a mathematical cross-modal fusion model, thereby addressing a gap in the extant literature. The framework uses the Tucker tensor decomposition to factorize the multi-dimensional data array into a core tensor and a set of factor matrices, supporting the integration of temporal features from the audio modality and spatiotemporal features from the visual modality. A cross-attention mechanism is incorporated to enhance cross-modal interaction, enabling each modality to attend to relevant information from the other. The efficacy of the model is rigorously evaluated on three publicly available datasets, and the results demonstrate that the proposed fusion technique outperforms conventional fusion methods and several more recent approaches. The findings break new ground in this field and will be of interest to researchers and developers in artificial emotional intelligence.
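The Tucker factorization the abstract describes, a core tensor multiplied along each mode by a factor matrix, can be sketched with a truncated HOSVD in plain numpy. This is a minimal generic sketch, not the paper's fusion model; the function names, shapes, and ranks are illustrative assumptions (with full ranks, the reconstruction is exact).

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along axis `mode`."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def tucker_hosvd(T, ranks):
    """Truncated HOSVD: each factor matrix holds the leading left singular
    vectors of the corresponding unfolding; the core is T contracted with
    the transposed factors along every mode."""
    factors = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    core = T
    for n, U in enumerate(factors):
        core = mode_dot(core, U.T, n)
    return core, factors

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 7, 8))          # e.g. time x audio x visual features
core, factors = tucker_hosvd(X, ranks=(6, 7, 8))  # full ranks -> exact

# Reconstruct by multiplying the core back along each mode.
X_hat = core
for n, U in enumerate(factors):
    X_hat = mode_dot(X_hat, U, n)
print(np.allclose(X, X_hat))  # prints True
```

Choosing ranks smaller than the mode dimensions gives the compressed core that fusion models typically operate on, trading reconstruction error for a much smaller interaction tensor.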
