Search Results (4,263)

Search Parameters:
Keywords = matrix computation

28 pages, 7631 KB  
Article
Compressive Strength of Alkali-Activated Recycled Aggregate Concrete Incorporating Nano CNTs/GO After Exposure to Elevated Temperatures
by Chunyang Liu, Yunlong Wang, Yali Gu and Ya Ge
Buildings 2026, 16(7), 1459; https://doi.org/10.3390/buildings16071459 - 7 Apr 2026
Abstract
To investigate the effects of incorporating nanomaterials—carbon nanotubes (CNTs) and graphene oxide (GO)—on the axial compressive mechanical properties of alkali-activated recycled aggregate concrete (AARAC) after high-temperature exposure, this study designed 51 sets of specimens with recycled coarse aggregate replacement rate, nanomaterial content, and temperature as the main parameters. Compression tests were conducted to analyze the failure mode and strength variation in AARAC specimens after heating. In addition, microscopic tests, including X-ray diffraction, scanning electron microscopy, and computed tomography (CT scanning), were performed to analyze the microstructural characteristics of the post-heated AARAC specimens. The results indicate that as the replacement rate of recycled coarse aggregate increased from 0% to 100%, the residual compressive strength after exposure to 600 °C decreased from 33.6 MPa to 19 MPa. When 0.1 wt% of CNTs is added, the compressive strength of AARAC after exposure to a high temperature of 600 °C increases by approximately 30.4% compared to that of AARAC without nanomaterial addition. When 0.1 wt% of CNTs and 0.05 wt% of GO are added, the compressive strength after exposure to a high temperature of 600 °C increases by approximately 44.3%, while the size of scattered fragments upon failure increased, and the failure mode appeared more complete. Microscopic test results indicate that the high-temperature treatment did not cause significant changes in the main phase composition of AARAC. The synergistic effect of the nanomaterials CNTs and GO can fully utilize their functions as nucleation sites, pore fillers, and crack bridging agents. 
By strengthening the interfacial transition zone between the recycled coarse aggregate and the cement paste, refining the matrix pore structure, dispersing local thermal stress, and suppressing the propagation of high-temperature cracks, the mechanical properties of AARAC after high-temperature exposure can be effectively maintained. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
18 pages, 3374 KB  
Article
Continuous-Time Markov Chain Modelling for Service Life Prediction of Building Elements
by Artur Zbiciak, Dariusz Walasek, Vazgen Bagdasaryan and Eugeniusz Koda
Appl. Sci. 2026, 16(7), 3555; https://doi.org/10.3390/app16073555 - 5 Apr 2026
Abstract
A continuous-time Markov chain framework is developed for service life prediction of building assets, and three formulations are compared: a homogeneous generator, a time-varying generator, and a fractional model. The framework delivers survival, density of absorption time, hazard, and mean time to absorption. For the homogeneous case, state trajectories are computed using matrix exponentials. The time-varying case is solved both by local exponential propagation on a time grid and by direct integration of the Kolmogorov equation. The fractional case is implemented in two independent ways, via a truncated series expansion and via an in-house routine for the Mittag-Leffler function, which also allows the direct evaluation of survival and hazard from the standard fractional relations while avoiding singular behaviour at the origin. This study shows that non-homogeneous rates accelerate deterioration relative to the homogeneous benchmark, whereas fractional dynamics reproduce early-time acceleration followed by a slow decline of the hazard, which is consistent with heavy-tailed survival and longer effective service life. The two fractional solvers provide mutually consistent outputs, which supports the numerical robustness of the approach. The framework is readily applicable to sparse inspection data and short observation windows and provides a transparent basis for comparing modelling assumptions that affect life cycle forecasts used in asset management and maintenance planning. Full article
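The homogeneous case described in this abstract — state probabilities obtained from matrix exponentials of a generator with an absorbing state — can be sketched as follows. This is an illustrative toy model (the three states and the rates are invented, not the authors' calibration), using scaling-and-squaring with a truncated Taylor series in place of a library `expm`:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(Q, t, terms=25, squarings=10):
    """exp(Q*t) via scaling-and-squaring with a truncated Taylor series."""
    n = len(Q)
    M = [[Q[i][j] * t / 2**squarings for j in range(n)] for i in range(n)]
    E = [[float(i == j) for j in range(n)] for i in range(n)]      # identity
    term = [row[:] for row in E]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, M)]  # M^k / k!
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(squarings):                                     # undo the scaling
        E = mat_mul(E, E)
    return E

# Toy deterioration chain: new -> degraded -> failed (absorbing), rates invented
Q = [[-0.2, 0.2, 0.0],
     [0.0, -0.1, 0.1],
     [0.0,  0.0, 0.0]]
P = mat_exp(Q, 10.0)
survival_10 = P[0][0] + P[0][1]   # P(not yet absorbed at t = 10 | new at t = 0)
```

For this chain the survival function has the closed form 2e^(-0.1t) − e^(-0.2t), which the numerical transition matrix reproduces; hazard and mean time to absorption follow from the same P(t).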
21 pages, 2244 KB  
Article
Stability Test for Multiplicity of Solutions in Finite Element Analysis of Cracking Structures
by Alberto Franchi, Pietro Crespi, Manuela Scamardo, Helen Miranda and Rejnalda Golemaj
Mathematics 2026, 14(7), 1206; https://doi.org/10.3390/math14071206 - 3 Apr 2026
Abstract
Quasi-brittle structures modeled with softening constitutive laws may lose the uniqueness of equilibrium, producing bifurcation and multiple admissible crack evolutions even under symmetric loading. This paper develops a stability test and a constructive multiplicity procedure for finite element cracking analyses formulated as a Parametric Linear Complementarity Problem (PLCP) solved in tableau form. The approach exploits the pivot sequence of a complementary tableau to monitor stability by tracking the positive definiteness of the reduced active-mode Hessian Â through a complement condition, without eigenvalue computations. A direct relationship between loss of positive definiteness and the sign of the incremental load factor Δα̇ is established, providing an intrinsic indicator of transition to descending response. When degeneracy occurs, a “void pivot” mechanism is introduced to generate an alternative admissible tableau, enabling a systematic construction of multiple isolated solutions associated with competing crack patterns. The method is demonstrated on a two-notched direct tension specimen with cohesive softening, where symmetric and antisymmetric paths emerge at a critical step. The implementation is compatible with parallelized matrix operations and remains effective in the presence of non-holonomic constraints. Full article
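The idea of detecting loss of positive definiteness from elimination pivots rather than eigenvalues (Sylvester-style, as in the abstract's complement condition) can be illustrated with a minimal sketch — not the paper's tableau procedure, just the underlying pivot test:

```python
def is_positive_definite(A, tol=1e-12):
    """Pivot test: a symmetric matrix is positive definite iff every pivot
    stays positive during Gaussian elimination - no eigenvalue computation."""
    n = len(A)
    M = [row[:] for row in A]          # work on a copy
    for k in range(n):
        if M[k][k] <= tol:             # non-positive pivot => not PD
            return False
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return True
```

A stability monitor can call this on the reduced Hessian at each step; the first non-positive pivot flags the transition to a descending (unstable) branch.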
29 pages, 2463 KB  
Article
A Novel Simultaneous Fault Computation Algorithm for Any Asymmetric and Multiconductor Power System: SFPD
by Roberto Benato and Francesco Sanniti
Energies 2026, 19(7), 1770; https://doi.org/10.3390/en19071770 - 3 Apr 2026
Abstract
The paper presents SFPD, a new open algorithm developed by the University of Padova (PD in the acronym) for computing the steady-state regime due to any number of simultaneous faults (SF in the acronym), both short circuits and open conductors. The algorithm does not rely on simplifying hypotheses, since it benefits from the pre-fault regime based on PFPD_MCA (power flow by the University of Padova with multiconductor cell analysis), a multiconductor power flow (developed and published by the first author) which takes into account both the active conductors (i.e., the phases subjected to the impressed voltages) and the passive conductors (i.e., the interfered metallic conductors, namely earth wires of overhead lines, metallic screens and armors of land and submarine cables, enclosures of gas-insulated lines, return and earth wires of the 2 × 25 kV AC high-speed railway supply system, etc.). Different types of faults, at arbitrary locations (including along the lines), are modelled by means of a suitable admittance matrix in the phase frame of reference embedded inside the overall network bus admittance matrix. Comparisons with simplified approaches are presented in order to demonstrate the power of the method. Finally, an application to the real Italian network is comprehensively shown. Full article
(This article belongs to the Section F1: Electrical Power System)
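The core admittance-matrix step behind such fault computations can be sketched on a toy network. This is a textbook-style illustration (two buses, invented per-unit impedances), not the SFPD algorithm: the Thevenin impedance seen from the faulted bus is a diagonal entry of Zbus = Ybus⁻¹, obtained here by solving one complex linear system:

```python
def solve_complex(A, b):
    """Gaussian elimination with partial pivoting for a complex linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0j] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Toy 2-bus network: source behind j0.1 pu at bus 1, line of j0.2 pu to bus 2
ys, yl = 1 / 0.1j, 1 / 0.2j
Ybus = [[ys + yl, -yl],
        [-yl,      yl]]
# Column 2 of Zbus = Ybus^-1: Thevenin impedance seen from bus 2 is z_col[1]
z_col = solve_complex(Ybus, [0, 1])
Zf = 0.0                               # bolted fault
i_fault = 1.0 / (z_col[1] + Zf)        # 1.0 pu prefault voltage at the fault bus
```

Here the Thevenin impedance is j0.3 pu (the series path j0.1 + j0.2), so the bolted-fault current is −j10/3 pu.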
20 pages, 2304 KB  
Article
AGP-GEMM: Adaptive Grouping and Partitioning Framework for Accelerating Small and Irregular Matrices on CPUs
by Hongzhe Zhou, Lu Lu, Haibiao Yang and Yu Zhang
Computers 2026, 15(4), 223; https://doi.org/10.3390/computers15040223 - 3 Apr 2026
Abstract
General Matrix Multiplication (GEMM) is a fundamental computational kernel in scientific computing, serving as the foundation for numerous complex tasks. However, in practical applications, the performance of GEMM is often constrained by irregular matrix dimensions and the diversity of hardware architectures. In particular, when processing small and irregular matrices, GEMM typically exhibits reduced computational efficiency. To address these challenges, this paper proposes a GEMM acceleration method based on an adaptive core grouping strategy. The method consists of two key components: a core grouping mechanism that alleviates workload imbalance among multi-core CPUs, and an adaptive block partitioning algorithm that dynamically selects optimal tiling schemes according to the matrix dimensions, achieving both load balance and cache-friendly data access. Experimental results on the Kunpeng CPU platform demonstrate that the proposed method achieves significant performance improvements compared to the Kunpeng KML math library, reaching a peak acceleration of up to 2.1× and an average speedup of 1.64×. These results validate the effectiveness and efficiency of the proposed approach in handling small and irregular matrix computation scenarios. Full article
(This article belongs to the Special Issue High-Performance Computing (HPC) and Computer Architecture)
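The block-partitioning idea the abstract describes — tiling GEMM so that working sets fit in cache, with clamped tile edges handling irregular dimensions — can be sketched in a few lines. This is a generic illustration, not the AGP-GEMM code (tile size is a placeholder; a real implementation would select it adaptively per matrix shape):

```python
def gemm_naive(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

def gemm_blocked(A, B, tile=4):
    """Cache-blocked GEMM; the min() clamps handle irregular
    (non-multiple-of-tile) matrix dimensions."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for p0 in range(0, k, tile):
            for j0 in range(0, m, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for p in range(p0, min(p0 + tile, k)):
                        a = A[i][p]
                        for j in range(j0, min(j0 + tile, m)):
                            C[i][j] += a * B[p][j]
    return C
```

In a compiled implementation the inner loops map onto vector units and the tile loops onto cores; in pure Python the sketch only demonstrates that blocking preserves the result for irregular sizes such as 5×7 · 7×3.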
26 pages, 5457 KB  
Article
A Perception-Driven Layered Selection and Design Response Model for Traditional Decorative Pattern
by Xiaochen Wang, Ruhe Zhang, Guanyu Hou and Weiwei Wang
Buildings 2026, 16(7), 1416; https://doi.org/10.3390/buildings16071416 - 3 Apr 2026
Abstract
Traditional architectural decorative patterns are increasingly reused in contemporary design, yet the link between object selection and design generation often remains experience-driven: public perceptual differences are rarely formalized, and evaluation outcomes seldom constrain generative decisions. This study proposes a perceptual demand-driven layered filtering and design response model (PD–LFDR) that treats traditional architectural decorative patterns as comparable and traceable design resources. Perceptual inputs from multiple stakeholders are converged via Kansei-based semantic aggregation into four core dimensions—symbolism, heritage authenticity, recognition and regionality—and are organized as a perceptual evaluation matrix. Grey relational analysis (GRA) is then applied using an expected perceptual level as the reference sequence to identify representative pattern samples suitable for design intervention. An empirical study on decorative patterns from Shaanxi vernacular dwellings demonstrates a closed-loop workflow: (i) first-round GRA filters representative theme samples, (ii) a second-round GRA selects operable minimal gene units, and, under a unified parametric rule set and a traceable two-layer parameter basis (parameter domain definition and parameter selection), (iii) multiple alternatives are generated and re-evaluated through a third-round GRA to support scheme selection. Robustness checks indicate stable rankings under moderate parameter and weight variation, improving interpretability, reproducibility, and decision efficiency for the computational translation of regional cultural visual resources. Full article
(This article belongs to the Topic Revitalizing Buildings and Our Urban Heritage)
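The grey relational analysis (GRA) step used for filtering — scoring each candidate against an expected perceptual reference sequence — follows a standard formula: coefficient ξ(k) = (Δmin + ρΔmax)/(Δ(k) + ρΔmax) with distinguishing coefficient ρ = 0.5, averaged into a grade. A minimal sketch with invented, pre-normalized scores (not the paper's data):

```python
def grey_relational_grades(reference, samples, rho=0.5):
    """Grey relational grade of each sample sequence against the reference.
    Assumes data are normalized and not all sequences are identical."""
    deltas = [[abs(r - x) for r, x in zip(reference, s)] for s in samples]
    dmin = min(min(row) for row in deltas)
    dmax = max(max(row) for row in deltas)
    grades = []
    for row in deltas:
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

With the reference set to the expected perceptual level, ranking samples by grade selects the representative patterns; a sample matching the reference exactly scores 1.0.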
27 pages, 1956 KB  
Article
A Data-Driven Procedure for Cost and Risk Control in Construction Investments: Quantifying Budget Gaps via Expert Scoring and Probabilistic Simulation—Evidence from a Heritage Hotel Project
by Silvia Dotres-Zúñiga, Libys Martha Zúñiga-Igarza, Alexander Sánchez-Rodríguez, Gelmar García-Vidal, Rodobaldo Martínez-Vivar and Reyner Pérez-Campdesuñer
Buildings 2026, 16(7), 1410; https://doi.org/10.3390/buildings16071410 - 2 Apr 2026
Abstract
Risk management is critical to maintain consistency between estimated and actual costs in construction investment projects, especially those that incorporate tourism and heritage components. This study aims to quantify the impact of risk factors on construction investment costs and to estimate an updated maximum project budget at a defined confidence level using an integrated expert-based and probabilistic approach. The approach combines a Frequency–Impact matrix, weighted scaling, and PERT/Monte Carlo simulation, thereby transforming expert judgments into comparable numerical parameters suitable for predictive modeling. The methodology is applied to the rehabilitation of the Esmeralda Hotel project in Cuba, a heritage asset characterized by high cultural value and technical complexity. The results quantify the effects of prioritized risk factors, compute their impact coefficients, and re-estimate the project’s upper budget limit at a 95% confidence level. The findings show that risk drivers associated with higher-complexity construction processes concentrate the main vulnerabilities and explain most of the increase in total cost. In addition, the analysis indicates that contingency margins established by regulation are insufficient to absorb the project’s observed variability. The proposed model supports proactive budget control by anticipating cost deviations, improving resource allocation, and strengthening decision-making under high uncertainty. Its flexible structure enables adaptation to different project types and serves as a practical decision-support tool for investors, designers, and project managers seeking greater financial accuracy and reduced risk of cost overruns. Full article
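The PERT/Monte Carlo step — turning three-point expert estimates into a budget at a confidence level — can be sketched as follows. The cost items are invented placeholders, not the Esmeralda Hotel figures, and the PERT distribution is realized as the usual Beta reparameterization:

```python
import random

def pert_sample(a, m, b, rng):
    """Draw from a PERT(Beta) distribution: optimistic a, most likely m, pessimistic b."""
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + rng.betavariate(alpha, beta) * (b - a)

def budget_at_confidence(items, n=20000, q=0.95, seed=1):
    """Monte Carlo total-cost distribution; returns the q-quantile budget."""
    rng = random.Random(seed)
    totals = sorted(sum(pert_sample(a, m, b, rng) for a, m, b in items)
                    for _ in range(n))
    return totals[int(q * n)]
```

The 95th-percentile total necessarily exceeds the sum of most-likely estimates (here 150) and stays below the sum of pessimistic bounds (here 200) — the gap between that quantile and the deterministic estimate is the risk-adjusted contingency the study quantifies.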
30 pages, 4624 KB  
Article
Prediction of Thermal Degradation in Concrete Structural Elements Using Optimized Artificial Neural Networks and Metaheuristic Algorithms
by Hatice Elif Beytekin, Yahya Kaya, Ali Mardani, Hasan Tahsin Öztürk and Filiz Şenkal Sezer
Buildings 2026, 16(7), 1405; https://doi.org/10.3390/buildings16071405 - 2 Apr 2026
Abstract
Accurate prediction of temperature-induced degradation in concrete is essential for improving structural fire safety and supporting reliable post-fire engineering decisions. However, previous studies have generally focused on conventional machine learning applications or limited optimization strategies, while integrated frameworks combining systematic input screening, robust validation, large-scale metaheuristic optimization, and interpretable analysis remain limited. This study aims to develop a comprehensive predictive framework for estimating the temperature-induced weight loss and compressive strength of concrete using advanced machine learning techniques. First, a detailed collinearity analysis was performed to filter the input dataset, eliminate redundant correlations, and improve statistical reliability. For modeling consistency, all fiber-containing mixtures were treated as polymer-fiber systems, and fiber-related variables were interpreted as polymer-fiber descriptors. To reduce overfitting and ensure robust validation, 5-fold cross-validation was applied during training, while 23% of the dataset was reserved as a strictly independent test set. In addition, 25 metaheuristic algorithms were evaluated under a standardized computational budget of 5000 function evaluations to perform neural architecture search. The results showed that the Marine Predators Algorithm (MPA), Symbiotic Organisms Search (SOS), and Kepler Optimization Algorithm (KOA) achieved superior convergence behavior in optimizing hybrid Levenberg–Marquardt-trained networks. SHapley Additive exPlanations (SHAP)-based sensitivity analysis further revealed that matrix-related properties, particularly unit weight and water absorption capacity, were the dominant drivers of thermal degradation. 
Overall, the proposed framework provides not only a robust benchmarking platform for predictive modeling but also a practically relevant and interpretable tool for post-fire structural assessment and thermally resilient concrete design. Full article
(This article belongs to the Section Building Structures)
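The validation protocol described — a strictly held-out 23% test set plus 5-fold cross-validation on the remainder — amounts to index bookkeeping that can be sketched generically (this is not the authors' pipeline, just the split logic):

```python
import random

def train_test_cv_splits(n, test_frac=0.23, k=5, seed=0):
    """Shuffle indices, reserve a test fraction, and build k (train, val) folds
    from the remaining pool."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_test = round(test_frac * n)
    test, pool = idx[:n_test], idx[n_test:]
    folds = [pool[i::k] for i in range(k)]          # round-robin fold assignment
    splits = [([j for f in folds[:i] + folds[i + 1:] for j in f], folds[i])
              for i in range(k)]
    return test, splits
```

The key invariants are that the folds partition the training pool exactly and the test set never appears in any fold, so every validation score is computed on unseen data.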
38 pages, 8327 KB  
Review
Functional Peptides: Comparing Synthetic and Sequence-Engineered Antibiofilm Pharmaceutics
by Bilal Aslam, Muhammad Hassan Khalid and Sulaiman F. Aljasir
Pharmaceutics 2026, 18(4), 441; https://doi.org/10.3390/pharmaceutics18040441 - 2 Apr 2026
Abstract
Biofilm formation is a complex phenomenon employed by microbes to counteract antimicrobials. Biofilm-associated infections are a challenging threat to modern medicine. Antimicrobial peptides (AMPs) are recognized as some of the most promising therapeutics to tackle biofilm-producing and multidrug-resistant (MDR) pathogens. However, stability, toxicity, and potency are key issues in the case of naturally occurring AMPs. Next-generation antibiofilm tools, such as synthetic or engineered AMPs, have emerged as a potent therapeutic choice. Synthetic peptides offer structural simplicity, versatility for chemical modification, and increased stability, which makes them capable of effectively disrupting both the biofilm matrix and the bacterial membrane. For engineered peptides, rational sequence modification, hybridization, and computational design are used to overcome limitations related to selectivity, biofilm-specific targeting and regulatory pathway modulation. This review provides a critical evaluation of synthetic and engineered AMPs from various perspectives, such as design strategies, antibiofilm action mechanisms, therapeutic performance, and translational potential. This study sheds light on current advances and emerging technologies, including AI-guided peptide optimization and multifunctional peptide platforms, and thereby sets the stage for the rational development of peptide-based therapeutics aimed at overcoming biofilm-mediated antimicrobial resistance (AMR). Full article
(This article belongs to the Special Issue Antimicrobial Peptides as Promising Therapeutic Agents)
23 pages, 5349 KB  
Article
Target Tracking-Based Online Calibration of UAV Electro-Optical Pod Installation Errors
by Yong Xu, Jin Liu, Hongtao Yan, An Wang, Haihang Xu, Yue Ma and Tian Yao
Automation 2026, 7(2), 59; https://doi.org/10.3390/automation7020059 - 1 Apr 2026
Abstract
As the “visual perception hub” of unmanned aerial vehicles (UAVs), electro-optical (EO) pods play an increasingly critical role in tasks such as intelligence gathering, situational awareness, target tracking, and localization. With the expanding scope and depth of UAV applications, higher demands are placed on the precision and adaptability of installation error calibration techniques for EO pods. Current mainstream calibration methods typically require specialized procedures under constrained conditions, while few approaches integrate existing UAV system capabilities and mission requirements, which leads to cumbersome, time-consuming processes and suboptimal alignment between calibration outcomes and task objectives. This paper proposes an online calibration method for UAV EO pod installation errors based on target tracking, which can rapidly compute the optimal closed-form solution for installation errors by leveraging UAV tracking missions. First, an observation equation for pod installation errors is established using tracking results. Second, multi-temporal observations are combined to model the calibration problem as an optimal rotation matrix estimation task, and then the optimal closed-form solution for installation errors is derived. Concurrently, a statistics-based approximate calibration method is introduced specifically for tracking missions. Furthermore, an online calibration system compatible with diverse UAV platforms is designed, along with different rapid calibration schemes for emergency response scenarios, fully incorporating existing system capabilities and mission needs. Finally, a fixed-wing UAV experimental platform is developed, with calibration tests conducted under various flight regimes. Experimental results validate the feasibility and robustness of the proposed methodology. Full article
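The "optimal rotation matrix estimation" step at the heart of such calibration has a well-known closed form (orthogonal Procrustes). As a sketch of the idea — in 2-D rather than the paper's 3-D pod-to-airframe case, with invented points — the least-squares rotation aligning paired, centered observations is recovered directly from sums of dot and cross products:

```python
import math

def optimal_rotation_2d(P, Q):
    """Closed-form least-squares rotation aligning point set P onto Q
    (2-D Procrustes). Returns the angle of the optimal rotation matrix."""
    cp = [sum(c) / len(P) for c in zip(*P)]   # centroids: centering removes
    cq = [sum(c) / len(Q) for c in zip(*Q)]   # any translation offset
    dot = cross = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        ax, ay = px - cp[0], py - cp[1]
        bx, by = qx - cq[0], qy - cq[1]
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    return math.atan2(cross, dot)
```

In 3-D the analogous closed-form solution comes from the SVD of the cross-covariance matrix (Kabsch algorithm); the principle of accumulating multi-temporal observation pairs and solving once is the same.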
22 pages, 831 KB  
Article
Energy-Efficient Dual-Core RISC-V Architecture for Edge AI Acceleration with Dynamic MAC Unit Reuse
by Cristian Andy Tanase
Computers 2026, 15(4), 219; https://doi.org/10.3390/computers15040219 - 1 Apr 2026
Abstract
This paper presents a dual-core RISC-V architecture designed for energy-efficient AI acceleration at the edge, featuring dynamic MAC unit sharing, frequency scaling (DFS), and FIFO-based resource arbitration. The system comprises two RISC-V cores that compete for shared computational resources—a single Multiply–Accumulate (MAC) unit and a shared external memory subsystem—governed by a channel-based arbitration mechanism with CPU-priority semantics, while each core maintains private instruction and data caches. The architecture implements a tightly coupled Neural Processing Unit (NPU) with CONV, GEMM, and POOL operations that execute opportunistically in the background when the MAC unit is available. Dynamic frequency scaling (DFS) with three levels (100/200/400 MHz) is applied to the shared MAC unit, allowing the dynamic acceleration of CNN workloads. The arbitration mechanism uses SystemC sc_fifo channels with CPU-priority polling, ensuring that CPU execution is minimally impacted by background AI processing while the NPU makes progress during idle MAC slots. The NPU supports 3 × 3 convolutions, matrix multiplication (GEMM) with 10 × 10 tiles, and pooling operations. The implementation is cycle-accurate in SystemC, targeting FPGA deployment. Experimental evaluation demonstrates that the dual-core architecture achieves 1.87× speedup with 93.5% efficiency for parallel workloads, while DFS enables 70% power reduction at low frequency. The system successfully executes simultaneous CPU and AI workloads, with CPU-priority arbitration ensuring no CPU starvation under contention. The proposed design offers a practical solution for embedded AI applications requiring both general-purpose computation and neural network acceleration, validated through comprehensive SystemC simulation on modern FPGA platforms. Full article
(This article belongs to the Special Issue High-Performance Computing (HPC) and Computer Architecture)
27 pages, 2884 KB  
Review
Real-Time AI-Driven Prognostics and Health Management in Robotics
by Mohad Tanveer, Muhammad Haris Yazdani, Rana Talal Ahmad Khan and Heung Soo Kim
Appl. Sci. 2026, 16(7), 3441; https://doi.org/10.3390/app16073441 - 1 Apr 2026
Abstract
The increasing deployment of robotic systems in complex and high-stakes environments, such as advanced manufacturing, healthcare, space exploration, and service robotics, requires robust strategies to ensure operational reliability, safety, and predictive maintenance. Real-time prognostics and health management, supported by recent advances in artificial intelligence, has emerged as a powerful approach for monitoring system health, detecting faults, and predicting failures before they occur. Unlike earlier review studies that mainly summarize traditional machine learning applications, the novelty of this paper lies in presenting a comprehensive taxonomy and critical synthesis of state-of-the-art AI-driven PHM techniques designed specifically for robotic systems. We evaluate a wide range of approaches, beginning with conventional machine learning models and extending to recent deep learning advancements, including transformers, vision transformers, and self-supervised learning frameworks. Furthermore, a novel contribution of this study is the rigorous benchmarking of their real-time feasibility, computational complexity, scalability, and performance trade-offs in practical robotic applications. In addition, this review introduces widely used benchmark datasets and highlights representative industrial case studies that demonstrate the practical effectiveness of AI-enabled PHM systems. The study also discusses important research gaps, including challenges related to model interpretability addressed through eXplainable AI, data privacy supported by federated learning, and the integration of cloud and edge computing within cloud robotics frameworks. Through a comprehensive gap matrix and quantitative comparative evaluations, this review provides insights to support the development of resilient, interpretable, and intelligent PHM systems for next-generation robotic applications. Full article
(This article belongs to the Special Issue Deep Learning and Predictive Maintenance in Industrial Applications)
18 pages, 2493 KB  
Article
Deep Learning-Based Receiver for Low-Complexity 6G Partial LIS Architectures
by Mário Marques da Silva, Héctor Orrillo and Rui Dinis
Appl. Sci. 2026, 16(7), 3429; https://doi.org/10.3390/app16073429 - 1 Apr 2026
Abstract
The sixth generation (6G) of wireless networks demands extreme energy efficiency and massive connectivity, positioning large intelligent surfaces (LIS) as a pivotal technology. However, the practical deployment of LIS is constrained by the overwhelming computational complexity and power consumption required to process thousands of antenna elements. To address these challenges, this article proposes a deep learning-based receiver architecture that integrates the spatial efficiency of Partial LIS with advanced non-linear detection. By activating only a subset of antenna panels closest to the user terminal (Partial LIS), the system significantly reduces hardware overhead and Radio Frequency (RF) power consumption. To compensate for the performance loss, the multi-user interference (MUI) generated by the linear combining stage, and the increased MUI inherent in a reduced-aperture environment, a specialized Multilayer Perceptron (MLP) network is implemented. Unlike traditional Zero-Forcing (ZF) or Minimum Mean Squared Error (MMSE) receivers, which require energy-intensive matrix inversions for each frequency component, the proposed neural-network-enabled receiver achieves near-optimal performance using low-complexity combining followed by intelligent learning-based interference suppression. Simulation results demonstrate that the proposed hybrid architecture provides a scalable, “green” solution for 6G uplink scenarios. Notably, the deep learning approach is shown to effectively suppress the performance loss of reduced apertures, achieving a Bit Error Rate (BER) comparable to that of traditional linear benchmarks even with a reduced physical aperture, maintaining good BER performance while dramatically reducing the computational and hardware footprint. Full article
(This article belongs to the Special Issue Applications of Wireless and Mobile Communications, 2nd Edition)

26 pages, 1050 KB  
Article
New Relations on the Critical Line: Riemann Zeta Zeros, Divergent Series, and Infinite Numbers
by Emmanuel Thalassinakis
Mathematics 2026, 14(7), 1169; https://doi.org/10.3390/math14071169 - 1 Apr 2026
Viewed by 448
Abstract
In this work, a formal asymptotic framework based on infinite number expressions is employed to investigate structural relations associated with the Dirichlet representation of the Riemann zeta function. Within this framework, infinite number objects are interpreted through asymptotic representatives and serve as symbolic encodings of asymptotic behavior in the regime x → ∞. A divergent real series is constructed from the sum of entries of an n × n matrix in the asymptotic limit n → ∞ and analyzed in relation to the squared modulus of a Dirichlet-type series. When the common parameter coincides with the imaginary part of a nontrivial zero of the Riemann zeta function on the critical line, the framework yields a structured cancellation mechanism, leading to parameter-dependent decay or convergence toward the constant −γ/2. Additional formal asymptotic relations are derived linking nontrivial zeros, divergent expressions, and the Euler–Mascheroni constant. The theoretical analysis is accompanied by numerical computations in double-precision arithmetic, which serve as consistency checks of the predicted asymptotic behavior. The proposed approach provides a coherent representative asymptotic methodology for organizing and analyzing identities involving divergent expressions arising in analytic number theory. The resulting relations are interpreted within this representative framework and are intended as structural asymptotic identities rather than classical equalities of divergent series. Full article
(This article belongs to the Special Issue Analytic Methods in Number Theory and Allied Fields)

23 pages, 21803 KB  
Article
Efficient 3D Inversion of the Marine Electrical-Source Time Domain Electromagnetic Method Based on the Footprint Technique
by Xianxiang Wang, Shanmei Li, Zefan Hu and Qing Sun
Geosciences 2026, 16(4), 142; https://doi.org/10.3390/geosciences16040142 - 1 Apr 2026
Viewed by 227
Abstract
Marine electric-source time domain electromagnetic (TDEM) surveys typically involve the simultaneous movement of transmitters and receivers, which generates a large number of transmitter–receiver pairs. This acquisition geometry creates notable challenges for 3D inversion, mainly because of the large data volume and high computational cost. However, the electromagnetic “sensitive region” for each transmitter–receiver pair is much smaller than the full survey area. Based on this feature, we propose an efficient 3D inversion approach using the footprint technique. By clearly defining the sensitivity region, referred to as the footprint domain, for each pair, the method builds the sensitivity matrix only within localized subsurface regions that significantly affect the observed response. This approach greatly reduces both forward modeling cost and memory requirements. The forward modeling adopts an integral equation method combined with cosine transforms for fast 3D field computation, while the inversion framework uses a regularized conjugate-gradient algorithm, further accelerated by parallel computing under footprint domain constraints. Numerical simulations also examine the effects of offset, time channel, seawater thickness, and resistivity on the footprint domain, helping clarify the spatiotemporal diffusion behavior of TDEM fields in shallow marine environments. Tests on representative models show that the proposed method remains stable and accurate under complex geological conditions while significantly improving computational efficiency. In particular, the footprint domain technique improves inversion speed by about 55% compared with full domain inversion. These results indicate that the proposed approach provides a reliable and scalable option for large-scale 3D inversion of marine TDEM data. Full article
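The storage saving behind the footprint technique can be sketched in a toy example. This is an illustrative sketch under invented assumptions, not the paper's code: a flat 2D model grid, circular footprint domains around each transmitter–receiver midpoint, and arbitrary sizes. Each pair's sensitivity row stores only the cells inside its footprint, instead of one entry per model cell.

```python
# Toy footprint bookkeeping: restrict each transmitter-receiver pair's
# sensitivity row to model cells inside a local "footprint" window, rather
# than filling a dense row over the whole survey area. Grid size, footprint
# radius, and midpoints are invented for illustration.

def footprint_cells(midpoint, radius, grid):
    """Indices of model cells within `radius` of a Tx-Rx pair midpoint."""
    mx, my = midpoint
    return [i for i, (x, y) in enumerate(grid)
            if (x - mx) ** 2 + (y - my) ** 2 <= radius ** 2]

# toy 20 x 20 horizontal model grid with unit cell spacing (400 cells)
grid = [(x, y) for x in range(20) for y in range(20)]

# midpoints of three illustrative Tx-Rx pairs and a common footprint radius
pair_midpoints = [(3.0, 3.0), (10.0, 10.0), (16.0, 5.0)]
radius = 4.0

# sparse sensitivity structure: one short row of cell indices per pair
sparse_rows = [footprint_cells(m, radius, grid) for m in pair_midpoints]
stored = sum(len(row) for row in sparse_rows)   # footprint-domain storage
dense = len(pair_midpoints) * len(grid)         # full-domain storage
```

Forward modeling and sensitivity evaluation then loop only over each pair's short row, which is where the memory and runtime savings the abstract reports come from.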
(This article belongs to the Section Geophysics)
