Search Results (5,028)

Search Parameters:
Keywords = reconstruction algorithm

17 pages, 2867 KB  
Article
SDO-YOLO: A Lightweight and Efficient Road Object Detection Algorithm Based on Improved YOLOv11
by Peng Ji and Zonglin Jiang
Appl. Sci. 2025, 15(21), 11344; https://doi.org/10.3390/app152111344 (registering DOI) - 22 Oct 2025
Abstract
Background: In the field of autonomous driving, existing object detection algorithms still face challenges such as excessive parameter counts and insufficient detection accuracy, particularly when handling dense targets, occlusions, distant small targets, and variable backgrounds in complex road scenarios, where balancing real-time performance and accuracy remains difficult. Methods: This study introduces the SDO-YOLO algorithm, an enhancement of YOLOv11n. First, to significantly reduce the parameter count while preserving feature representation capabilities, spatial-channel reconstruction convolution is employed to enhance the HGNetv2 network, streamlining redundant computations in feature extraction. Then, a large-kernel separable attention mechanism is introduced, decoupling two-dimensional convolutions into cascaded one-dimensional dilated convolutions, which expands the receptive field while reducing computational complexity. Next, to substantially improve detection accuracy, a reparameterized generalized feature pyramid network is constructed, incorporating CSPStage structures and dynamic channel regulation strategies to optimize multi-scale feature fusion efficiency during inference. Results: Evaluations on the KITTI dataset show that SDO-YOLO achieves a 2.8% increase in mAP@0.5 compared to the baseline, alongside reductions of 7.9% in parameters and 6.3% in computation. Generalization tests on the BDD100K and UA-DETRAC datasets yield mAP@0.5 improvements of 1.9% and 3.7%, respectively, over the baseline. Conclusions: SDO-YOLO improves both accuracy and efficiency, demonstrating strong robustness across diverse scenarios and adaptability across datasets.
(This article belongs to the Special Issue AI in Object Detection)
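The abstract names the decoupling but gives no implementation; a minimal PyTorch sketch of a large-kernel separable attention block, with channel count, kernel sizes, and dilation chosen as illustrative assumptions rather than the paper's settings:

```python
import torch
import torch.nn as nn

class LargeKernelSeparableAttention(nn.Module):
    """Large-kernel separable attention sketch: a dense 2D large-kernel
    convolution is decoupled into cascaded 1D depthwise convolutions,
    with the second pair dilated to enlarge the receptive field."""

    def __init__(self, channels: int, k: int = 5, kd: int = 7, dilation: int = 3):
        super().__init__()
        self.h1 = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels)
        self.v1 = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels)
        pad = dilation * (kd - 1) // 2  # keeps spatial size for the dilated pair
        self.h2 = nn.Conv2d(channels, channels, (1, kd), padding=(0, pad),
                            dilation=(1, dilation), groups=channels)
        self.v2 = nn.Conv2d(channels, channels, (kd, 1), padding=(pad, 0),
                            dilation=(dilation, 1), groups=channels)
        self.mix = nn.Conv2d(channels, channels, 1)  # pointwise channel mixing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.mix(self.v2(self.h2(self.v1(self.h1(x)))))
        return x * attn  # the attention map gates the input features

x = torch.randn(1, 64, 80, 80)
print(LargeKernelSeparableAttention(64)(x).shape)  # torch.Size([1, 64, 80, 80])
```

Cascading the plain and dilated 1D pairs emulates an effective 23×23 kernel with far fewer parameters than a dense 2D convolution of that size.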
22 pages, 2535 KB  
Article
Identification of Railway Vertical Track Alignment via the Unknown Input Observer
by Stefano Alfi, Matteo Santelia, Ivano La Paglia, Egidio Di Gialleonardo and Alan Facchinetti
Appl. Sci. 2025, 15(21), 11332; https://doi.org/10.3390/app152111332 (registering DOI) - 22 Oct 2025
Abstract
In this paper, a model-based approach for the identification of the railway vertical track alignment from simulation data is presented. The proposed methodology is based on the application of the unknown input observer algorithm. A model of a conventional train is used to simulate the acceleration levels that vehicle-mounted sensors (e.g., on the bogies and carbody) would measure during operation. Simulations are carried out at a constant speed on both straight and curved tracks, including different types of track geometry components (namely longitudinal level, alignment, and cross-level), to assess the algorithm's capability to identify the input irregularity. The primary focus is on the identification of mean vertical track alignment, an irregularity component critical to safety. In the analysed cases, the comparison between the measured and reconstructed signal histories is quite satisfactory, with maximum errors on the order of 15% and 29% along straight and curved tracks, respectively. Comparing the frequency content of the signals, a significantly higher degree of accuracy is observed (with maximum errors of 5–10% depending on the track layout), which demonstrates that the proposed methodology is suitable for track irregularity identification and monitoring using an instrumented vehicle.
(This article belongs to the Special Issue Railway Vehicle Dynamics)
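The abstract does not spell out the observer construction; one standard way to estimate an unknown input is to augment the state vector with the input modelled as a random walk and run a Kalman filter. A toy sketch under that assumption (single-degree-of-freedom suspension model with illustrative parameters, not the paper's vehicle model):

```python
import numpy as np

# Toy 1-DOF suspension: m*z'' = -k*(z - w) - c*z', with unknown track input w.
# Augmenting the state with w (modelled as a random walk) lets a standard
# Kalman filter estimate the irregularity from acceleration measurements.
dt, m, c, k = 0.01, 500.0, 2.0e3, 1.0e5
N = 2000
rng = np.random.default_rng(0)
w = np.cumsum(rng.normal(0.0, 2e-4, N))        # synthetic vertical irregularity

# Simulate the "measured" accelerations driven by the true input.
z = np.zeros(2)                                # [displacement, velocity]
y = np.zeros(N)
for i in range(N):
    acc = (-k * (z[0] - w[i]) - c * z[1]) / m
    y[i] = acc + rng.normal(0.0, 1e-3)         # noisy accelerometer
    z = z + dt * np.array([z[1], acc])

# Augmented discrete model x = [z, z', w]; w_{k+1} = w_k + noise.
A = np.array([[1.0, dt, 0.0],
              [-k / m * dt, 1.0 - c / m * dt, k / m * dt],
              [0.0, 0.0, 1.0]])
H = np.array([[-k / m, -c / m, k / m]])        # y = z'' as a function of x
Q = np.diag([1e-12, 1e-12, (2e-4) ** 2])
R = np.array([[1e-6]])
x_hat, P, w_hat = np.zeros(3), np.eye(3) * 1e-4, np.zeros(N)
for i in range(N):
    x_hat = A @ x_hat                          # predict
    P = A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x_hat = x_hat + K @ (y[i] - H @ x_hat)     # update with acceleration
    P = (np.eye(3) - K @ H) @ P
    w_hat[i] = x_hat[2]

print("RMS irregularity estimation error:", np.sqrt(np.mean((w_hat - w) ** 2)))
```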
32 pages, 3356 KB  
Article
An Accurate Method for Designing Piezoelectric Energy Harvesters Based on Two-Dimensional Green Functions Under a Tangential Line Force
by Jie Tong, Yang Zhang and Peng-Fei Hou
Energies 2025, 18(21), 5564; https://doi.org/10.3390/en18215564 (registering DOI) - 22 Oct 2025
Abstract
The piezoelectric coating structure constitutes the main configuration of contemporary energy harvesting systems, and its development requires accurate modeling of electromechanical coupling behavior under mechanical loads. The present work develops a framework for analyzing orthotropic piezoelectric coating–substrate systems; based on fundamental solution theory, it derives two-dimensional Green functions in closed-form elementary functions. The formulation establishes a mesh-free solution paradigm by addressing a tangential line force applied to the coated surface. The method reconstructs full-field electromechanical responses under arbitrary mechanical loading by combining superposition principles with Gaussian quadrature. An important application is the optimization of coating thickness, where a parametric study shows that the piezoelectric layer geometry is non-linearly correlated with energy conversion efficiency. Notably, the analytical sensitivity coefficients of this framework feed directly into gradient-based optimization algorithms, improving efficiency compared with traditional empirical approaches.
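A sketch of the superposition-plus-quadrature step described above: the response to an arbitrary surface traction is assembled as a Gauss–Legendre weighted sum of line-force Green function evaluations. The kernel below is a hypothetical placeholder, not the paper's closed-form piezoelectric Green function:

```python
import numpy as np

def green_u(x, s):
    """Placeholder 2D Green function: field quantity at point x due to a unit
    tangential line force at surface coordinate s (illustrative only)."""
    r = np.hypot(x[0] - s, x[1])
    return -np.log(r + 1e-12) / (2.0 * np.pi)

def response(x, p, a=-1.0, b=1.0, n_gauss=32):
    """Response at x for a traction distribution p(s) applied on [a, b]."""
    nodes, weights = np.polynomial.legendre.leggauss(n_gauss)
    s = 0.5 * (b - a) * nodes + 0.5 * (b + a)      # map [-1, 1] -> [a, b]
    w = 0.5 * (b - a) * weights
    return np.sum(w * p(s) * np.array([green_u(x, si) for si in s]))

# Uniform tangential traction over [-1, 1], observed at depth 0.5 below x = 0.
print(response(np.array([0.0, 0.5]), lambda s: np.ones_like(s)))
```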
25 pages, 720 KB  
Article
Variational Bayesian Inference for a Q-Matrix-Free Hidden Markov Log-Linear Additive Cognitive Diagnostic Model
by Hao Duan, James Tang, Matthew J. Madison, Michael Cotterell and Minjeong Jeon
Algorithms 2025, 18(11), 675; https://doi.org/10.3390/a18110675 (registering DOI) - 22 Oct 2025
Abstract
Cognitive diagnostic models (CDMs) are commonly used in educational assessment to uncover the specific cognitive skills that contribute to student performance, allowing for precise identification of individual strengths and weaknesses and the design of targeted interventions. Traditional CDMs, however, depend heavily on a predefined Q-matrix that specifies the relationship between test items and underlying attributes. In this study, we introduce a hidden Markov log-linear additive cognitive diagnostic model (HM-LACDM) that does not require a Q-matrix, making it suitable for analyzing longitudinal assessment data without prior structural assumptions. To support scalable applications, we develop a variational Bayesian inference (VI) algorithm that enables efficient estimation on large datasets. Additionally, we propose a method to reconstruct the Q-matrix from estimated item-effect parameters. The effectiveness of the proposed approach is demonstrated through simulation studies.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
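The paper's Q-matrix reconstruction rule is not given in the abstract; a natural sketch is to threshold the estimated main-effect parameters, with the threshold value an assumption of this illustration:

```python
import numpy as np

def reconstruct_q(main_effects: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """main_effects[j, k]: estimated main effect of attribute k on item j.
    An item is taken to measure attribute k when the effect is practically
    non-zero; the threshold is illustrative, not the paper's rule."""
    return (np.abs(main_effects) > threshold).astype(int)

lam = np.array([[2.1, 0.1],     # item 1 loads on attribute 1
                [0.0, 1.8],     # item 2 loads on attribute 2
                [1.4, 1.2]])    # item 3 loads on both
print(reconstruct_q(lam))
# [[1 0]
#  [0 1]
#  [1 1]]
```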
25 pages, 1426 KB  
Article
A Novel DST-IV Efficient Parallel Implementation with Low Arithmetic Complexity
by Doru Florin Chiper and Dan Marius Dobrea
Electronics 2025, 14(21), 4137; https://doi.org/10.3390/electronics14214137 (registering DOI) - 22 Oct 2025
Abstract
The discrete sine transform (DST) has numerous applications across various fields, including signal processing, image compression and coding, adaptive digital filtering, mathematics (such as partial differential equations or numerical solutions of differential equations), image reconstruction, and classification, among others. The primary disadvantage of the DST family of algorithms (DST-I, DST-II, DST-III, and DST-IV) is their substantial computational complexity, O(N log N), during implementation. This paper proposes an innovative decomposition and real-time implementation for the DST-IV. This decomposition allows the algorithm to execute in four or eight sections operating concurrently. These algorithms, comprising four- and eight-section variants, are developed primarily using a matrix factorization technique to decompose the DST-IV matrices. Consequently, the computational complexity and execution time of the developed algorithms are markedly reduced compared to the traditional implementation of DST-IV. A performance analysis conducted on three distinct Graphics Processing Unit (GPU) architectures indicates that a substantial speedup can be achieved: an average speedup ranging from 22.42 to 65.25 was observed, depending on the GPU architecture employed and the DST-IV implementation (with 4 or 8 sections).
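For orientation, the plain O(N²) DST-IV that any factorized variant must reproduce is a one-line matrix product (a reference baseline, not the paper's parallel implementation):

```python
import numpy as np

def dst_iv(x: np.ndarray) -> np.ndarray:
    """Reference DST-IV: X[k] = sum_n x[n] * sin(pi/N * (n + 1/2) * (k + 1/2))."""
    N = len(x)
    n = np.arange(N)
    S = np.sin(np.pi / N * np.outer(n + 0.5, n + 0.5))  # DST-IV kernel matrix
    return S @ x

x = np.random.default_rng(1).standard_normal(8)
# The DST-IV matrix satisfies S @ S = (N/2) * I, so applying it twice and
# rescaling recovers the input.
x_rec = dst_iv(dst_iv(x)) * 2.0 / len(x)
print(np.allclose(x, x_rec))  # True
```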
15 pages, 2607 KB  
Article
Structural Health Monitoring of a Lamina in Unsteady Water Flow Using Modal Reconstruction Algorithms
by Gabriele Liuzzo, Stefano Meloni and Pierluigi Fanelli
Fluids 2025, 10(11), 276; https://doi.org/10.3390/fluids10110276 (registering DOI) - 22 Oct 2025
Abstract
Ensuring the structural integrity of mechanical components operating in fluid environments requires precise and reliable monitoring techniques. This study presents a methodology for reconstructing the full-field deformation of a flexible aluminium plate subjected to unsteady water flow in a water tunnel, using a structural modal reconstruction approach informed by experimental data. The experimental setup involves an aluminium lamina (200 mm × 400 mm × 2.5 mm) mounted in a closed-loop water tunnel and exposed to a controlled flow with velocities up to 0.5 m/s, corresponding to Reynolds numbers on the order of 10⁴, inducing transient deformations captured through an image-based optical tracking technique. The core of the methodology lies in reconstructing the complete deformation field of the structure by combining a reduced number of vibration modes derived from the geometry and boundary conditions of the system. The novelty of the present work consists in the integration of the Internal Strain Potential Energy Criterion (ISPEC) for mode selection with a data-driven machine learning framework, enabling real-time identification of active modal contributions from sparse experimental measurements. This approach allows for an accurate estimation of the dynamic response while significantly reducing the required sensor data and computational effort. The experimental validation demonstrates strong agreement between reconstructed and measured deflections, with normalised errors below 15% and correlation coefficients exceeding 0.94, confirming the reliability of the reconstruction. The results confirm that, even under complex, time-varying fluid–structure interactions, it is possible to achieve accurate and robust deformation reconstruction with minimal computational cost. This integrated methodology provides a reliable and efficient basis for structural health monitoring of flexible components in hydraulic and marine environments, bridging the gap between sparse measurement data and full-field dynamic characterisation.
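A minimal sketch of the modal-superposition core, with toy sinusoidal mode shapes standing in for the lamina's actual modes and plain least squares standing in for the ISPEC-plus-machine-learning selection:

```python
import numpy as np

# Full-field reconstruction from sparse measurements: with mode shapes Phi
# (n_points x n_modes) known from geometry and boundary conditions, sparse
# deflection measurements determine the modal coordinates q by least squares,
# and Phi @ q gives the full deformation field.
rng = np.random.default_rng(0)
n_points, n_modes, n_sensors = 500, 4, 12

x = np.linspace(0.0, 1.0, n_points)
Phi = np.sin(np.outer(x, np.arange(1, n_modes + 1) * np.pi))  # toy beam modes

q_true = np.array([1.0, -0.4, 0.15, 0.05])                    # active modes
sensors = rng.choice(n_points, n_sensors, replace=False)
y = Phi[sensors] @ q_true + rng.normal(0.0, 1e-3, n_sensors)  # sparse, noisy

q_hat, *_ = np.linalg.lstsq(Phi[sensors], y, rcond=None)
field = Phi @ q_hat                                           # full-field estimate
err = np.linalg.norm(field - Phi @ q_true) / np.linalg.norm(Phi @ q_true)
print(f"normalised reconstruction error: {err:.3%}")
```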
24 pages, 9133 KB  
Article
Compound Fault Diagnosis of Hydraulic Pump Based on Underdetermined Blind Source Separation
by Xiang Wu, Pengfei Xu, Shanshan Song, Shuqing Zhang and Jianyu Wang
Machines 2025, 13(10), 971; https://doi.org/10.3390/machines13100971 - 21 Oct 2025
Abstract
The difficulty in precisely extracting single-fault signatures from hydraulic pump composite faults, which stems from structural complexity and coupled multi-source vibrations, is tackled herein via a new diagnostic technique based on underdetermined blind source separation (UBSS). Utilizing sparse component analysis (SCA), the proposed method achieves blind source separation without relying on prior knowledge or multiple sensors. However, conventional SCA-based approaches are limited by their reliance on a predefined number of sources and their high sensitivity to noise. To overcome these limitations, an adaptive source number estimation strategy is proposed by integrating information-theoretic criteria into density peak clustering (DPC), enabling automatic source number determination with negligible additional computation. To facilitate this process, the short-time Fourier transform (STFT) is first employed to convert the vibration signals into the frequency domain. The resulting time–frequency points are then clustered using the integrated DPC–Bayesian Information Criterion (BIC) scheme, which jointly estimates both the number of sources and the mixing matrix. Finally, the original source signals are reconstructed through minimum L1-norm optimization. Simulation and experimental studies, including hydraulic pump composite fault experiments, verify that the proposed method can accurately separate mixed vibration signals and identify distinct fault components even under low signal-to-noise ratio (SNR) conditions. The results demonstrate the method's superior separation accuracy, noise robustness, and adaptability compared with existing algorithms.
(This article belongs to the Section Machines Testing and Maintenance)
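The final reconstruction step lends itself to a compact sketch: once the mixing matrix is estimated, each underdetermined observation is inverted by minimum L1-norm optimization, which becomes a linear program after splitting the sources into positive and negative parts (the matrix below is a toy example, not pump data):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Minimise ||s||_1 subject to A @ s = x by writing s = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v) = ||s||_1
    A_eq = np.hstack([A, -A])                # equality constraint A @ (u - v) = x
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

A = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.8]])              # 2 sensors, 3 sources
s_true = np.array([0.0, 2.0, 0.0])           # sparse source vector
print(l1_recover(A, A @ s_true))             # ~[0, 2, 0]
```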
21 pages, 9245 KB  
Article
Reconstruction of Building LIDAR Point Cloud Based on Geometric Primitive Constrained Optimization
by Haoyu Li, Tao Liu, Ruiqi Shen and Zhengling Lei
Appl. Sci. 2025, 15(20), 11286; https://doi.org/10.3390/app152011286 - 21 Oct 2025
Abstract
This study proposes a 3D reconstruction method for LIDAR building point clouds using geometric primitive constrained optimization, addressing challenges such as low accuracy, high complexity, and slow modeling. The new algorithm treats the reconstruction of point clouds at the level of geometric primitives and is an incremental joint optimization method built on the GPU rendering pipeline. Firstly, the building point cloud collected by the LIDAR scanner was preprocessed, and an initial building mesh model was constructed by the fast triangulation method. Secondly, based on the geometric characteristics of the building, geometric primitive constrained optimization rules were generated to optimize the initial mesh model (regular surface optimization, basis spline surface optimization, junction area optimization, etc.), and a view-dependent parallel algorithm was designed to accelerate the computation. Finally, the effectiveness of the approach was validated by comparing and analyzing experimental results on point cloud data from different buildings. The algorithm requires no training data and is suitable for outdoor surveying and mapping operations. It offers good controllability and adaptability, and the entire pipeline is interpretable. The resulting models can be used in downstream applications such as Building Information Modeling (BIM) and Computer-Aided Design (CAD).
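One of the named rules, regular-surface optimization, can be sketched as fitting a plane primitive by SVD and projecting the vertices onto it; this is an illustrative reduction of the idea, not the paper's incremental joint optimization:

```python
import numpy as np

def snap_to_plane(points: np.ndarray) -> np.ndarray:
    """Fit a plane to a vertex cluster and project the vertices onto it,
    flattening scanner noise on facade-like regions."""
    centroid = points.mean(axis=0)
    # The plane normal is the direction of least variance (last right
    # singular vector of the centred cluster).
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    dist = (points - centroid) @ normal          # signed distances to the plane
    return points - np.outer(dist, normal)       # project onto the plane

rng = np.random.default_rng(0)
wall = np.column_stack([rng.uniform(0, 5, 200),     # x along the facade
                        rng.uniform(0, 3, 200),     # y (height)
                        rng.normal(0, 0.02, 200)])  # z: noisy "flat" wall
flat = snap_to_plane(wall)
print(np.linalg.matrix_rank(flat - flat.mean(axis=0)))  # 2: now exactly planar
```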
21 pages, 14072 KB  
Article
Workflow Analysis for CGH Generation with Speckle Reduction and Occlusion Culling Using GPU Acceleration
by Francisco J. Serón, Alfonso Blesa and Diego Sanz
Sensors 2025, 25(20), 6492; https://doi.org/10.3390/s25206492 - 21 Oct 2025
Abstract
Although GPUs are widely used in Computer-Generated Holography (CGH), their specific application to concrete problems such as occlusion or speckle filtering through temporal multiplexing is not yet standardized and has not been fully explored. This work optimizes the software architecture by taking the GPU architecture into account in a novel way for these particular tasks. We present an optimized algorithm for CGH computation that jointly addresses speckle noise and occlusion. The workflow includes the generation and illumination of a 3D scene; the calculation of the CGH including color, occlusion, and temporal speckle-noise filtering; and scene reconstruction through both simulation and experiment. The research focuses on implementing a temporal multiplexing technique that simultaneously performs speckle denoising and occlusion culling for point clouds, evaluating two types of occlusion that differ in whether the occlusion effect dominates the depth effect in a scene stored in a CGH, while leveraging the parallel processing capabilities of GPUs to achieve a more immersive and higher-quality visual experience. To this end, the total computational cost of generating color and occlusion CGHs is evaluated, quantifying the relative contribution of each factor. The results indicate that, under strict occlusion conditions, temporal multiplexing filtering does not significantly impact the overall computational cost of CGH calculation.
(This article belongs to the Special Issue Digital Holography Imaging Techniques and Applications Using Sensors)
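Point-cloud CGH with temporal multiplexing can be sketched as follows: each frame sums spherical wavefronts from the scene points with fresh random initial phases, and intensities are averaged across frames to suppress speckle. Wavelength, pitch, and scene are illustrative, and occlusion culling is omitted from this sketch:

```python
import numpy as np

wavelength = 532e-9                 # green laser (illustrative)
k = 2.0 * np.pi / wavelength
pitch, n_pix = 8e-6, 256            # SLM pixel pitch and hologram size

u = (np.arange(n_pix) - n_pix / 2) * pitch
X, Y = np.meshgrid(u, u)
points = np.array([[0.0, 0.0, 0.10],          # (x, y, z) scene points
                   [2e-4, 1e-4, 0.12]])
amplitudes = np.array([1.0, 0.7])

def cgh_frame(rng):
    """One hologram frame: spherical wave per point, random initial phase."""
    H = np.zeros((n_pix, n_pix), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        phi0 = rng.uniform(0.0, 2.0 * np.pi)  # fresh random phase per frame
        H += a * np.exp(1j * (k * r + phi0)) / r
    return H

rng = np.random.default_rng(0)
frames = [cgh_frame(rng) for _ in range(8)]   # temporal multiplexing
intensity = np.mean([np.abs(H) ** 2 for H in frames], axis=0)
print(intensity.shape)  # (256, 256); averaging decorrelates the speckle pattern
```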
17 pages, 1718 KB  
Article
A Hybrid Model Combining Signal Decomposition and Inverted Transformer for Accurate Power Transformer Load Prediction
by Shuguo Gao, Chenmeng Xiang, Yanhao Zhou, Haoyu Liu, Lujian Dai, Tianyue Zhang and Yi Yin
Appl. Sci. 2025, 15(20), 11241; https://doi.org/10.3390/app152011241 - 20 Oct 2025
Abstract
Transformer load is a key factor influencing transformer aging and service life, and accurately predicting load trends is crucial for assisting load redistribution. This study proposes a hybrid model, RIME-VMD-TCN-iTransformer, to forecast the trend of transformer load. In this model, RIME (Randomized Improved Marine Predators Algorithm) is employed to enhance decomposition stability, VMD (Variational Mode Decomposition) addresses the non-stationary characteristics of the load sequence, TCN (Temporal Convolutional Network) extracts local temporal dependencies, and iTransformer (Inverted Transformer) captures global inter-variable correlations. First, variational mode decomposition is applied to mitigate the non-stationary characteristics of the signal, with RIME further enhancing the orderliness of the intrinsic mode functions. Subsequently, the TCN-iTransformer model predicts each intrinsic mode function individually, and the predictions of all intrinsic mode functions are recombined to obtain the final forecast. The findings indicate that the intrinsic mode functions obtained through RIME-VMD exhibit no spectral aliasing and can decompose abrupt time-series signals into stable and regular frequency components. Compared to other hybrid models, the proposed model demonstrates superior responsiveness to changes in time-series trends and achieves the lowest prediction error across various transformer capacity scenarios. These results highlight the model's accuracy and generalization capability in handling abrupt signals, underscoring its potential for preventing unexpected transformer events.
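The decompose-predict-reconstruct data flow can be sketched with simple stand-ins: a moving-average split in place of RIME-VMD and a least-squares AR(2) in place of TCN-iTransformer, to show how per-mode forecasts are recombined into the final prediction:

```python
import numpy as np

def decompose(x: np.ndarray, window: int = 24):
    """Stand-in for VMD: split the series into a trend mode and a residual."""
    trend = np.convolve(x, np.ones(window) / window, mode="same")
    return [trend, x - trend]

def ar2_forecast(mode: np.ndarray, horizon: int) -> np.ndarray:
    """Stand-in for TCN-iTransformer: least-squares AR(2) per mode."""
    X = np.column_stack([mode[1:-1], mode[:-2]])   # lags 1 and 2
    a, b = np.linalg.lstsq(X, mode[2:], rcond=None)[0]
    out, hist = [], list(mode[-2:])
    for _ in range(horizon):
        nxt = a * hist[-1] + b * hist[-2]
        out.append(nxt)
        hist.append(nxt)
    return np.array(out)

t = np.arange(500)
load = 50 + 10 * np.sin(2 * np.pi * t / 24) \
       + np.random.default_rng(0).normal(0, 1, 500)
# Predict each mode individually, then reconstruct the final forecast by summing.
forecast = sum(ar2_forecast(m, horizon=24) for m in decompose(load))
print(forecast[:4])
```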
17 pages, 2622 KB  
Article
EvoFuzzy: Evolutionary Fuzzy Approach for Ensembling Reconstructed Genetic Networks
by Hasini Nakulugamuwa Gamage, Jaskaran Gill, Madhu Chetty, Suryani Lim and Jennifer Hallinan
BioMedInformatics 2025, 5(4), 59; https://doi.org/10.3390/biomedinformatics5040059 - 20 Oct 2025
Abstract
Background: Reconstructing gene regulatory networks (GRNs) from gene expression data remains a major challenge in systems biology due to the inherent complexity of biological systems and the limitations of existing reconstruction methods, which often yield high false-positive rates. This study aims to develop a robust and adaptive approach to enhance the accuracy of inferred GRNs by integrating multiple modelling paradigms. Methods: We introduce EvoFuzzy, a novel algorithm that integrates evolutionary computation and fuzzy logic to aggregate GRNs reconstructed using Boolean, regression, and fuzzy modelling techniques. The algorithm initializes an equal number of individuals from each modelling method to form a diverse population, which evolves through fuzzy trigonometric differential evolution. Gene expression values are predicted using a fuzzy logic-based predictor with confidence levels, and a fitness function is applied to identify the optimal consensus network. Results: The proposed method was evaluated using simulated benchmark datasets and a real-world SOS gene repair dataset. Experimental results demonstrated that EvoFuzzy consistently outperformed existing state-of-the-art GRN reconstruction methods in terms of accuracy and robustness. Conclusions: The fuzzy trigonometric differential evolution approach plays a pivotal role in refining and aggregating multiple network outputs into a single, optimal consensus network, making EvoFuzzy a powerful and reliable framework for reconstructing biologically meaningful gene regulatory networks.
(This article belongs to the Topic Computational Intelligence and Bioinformatics (CIB))
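The evolutionary core can be sketched as a standard differential evolution loop over edge-weight vectors; the fitness below (distance to a hidden target network) is a toy stand-in for the paper's fuzzy prediction-based fitness, and plain DE/rand/1 mutation stands in for the trigonometric variant:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_pop, F, CR = 16, 30, 0.6, 0.9          # 4-gene network -> 16 edge weights
target = rng.uniform(-1, 1, dim)              # hidden "true" network (toy)

def fitness(w):
    return np.sum((w - target) ** 2)          # lower is better

pop = rng.uniform(-1.0, 1.0, (n_pop, dim))
fit = np.array([fitness(p) for p in pop])
for gen in range(200):
    for i in range(n_pop):
        # DE/rand/1: mutate from three distinct individuals other than i.
        a, b, c = pop[rng.choice([j for j in range(n_pop) if j != i], 3,
                                 replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(dim) < CR          # binomial crossover mask
        cross[rng.integers(dim)] = True       # guarantee one mutated gene
        trial = np.where(cross, mutant, pop[i])
        f = fitness(trial)
        if f < fit[i]:                        # greedy selection
            pop[i], fit[i] = trial, f
print("best fitness:", fit.min())
```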
26 pages, 4994 KB  
Article
Effect of Selected Parameters on Imaging Quality in Doppler Tomography
by Tomasz Świetlik and Krzysztof J. Opieliński
Appl. Sci. 2025, 15(20), 11214; https://doi.org/10.3390/app152011214 - 20 Oct 2025
Abstract
Doppler tomography (DT) is a relatively new method that allows the imaging of cross-sections of an object. The method uses a two-transducer ultrasound probe that moves around or along the object in a specific way. Image reconstruction is performed on the basis of the detection of the so-called Doppler signal, which contains Doppler frequencies that identify the stationary heterogeneous structures inside the imaged cross-section of the object. Doppler tomography differs significantly from the popular blood-flow velocity detection method and should not be confused with it. It can potentially be used to reconstruct 2D and 3D cross-sectional images of structures that reflect the ultrasound wave well, either in medicine for diagnostics or in industry for non-destructive testing. This paper presents simulations of imaging using Doppler tomography. Methods and algorithms were proposed that enable Doppler tomography imaging without the need for complicated measurement systems and calculations. The influence of selected parameters on DT imaging quality was investigated, and optimal compromise values for specific conditions were determined. Ways to improve image quality were also discussed.
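The Doppler frequencies that encode structure position follow the familiar relation f_d = 2 (v/c) f0 cos θ for a probe moving at speed v relative to a stationary scatterer at angle θ; a quick numerical illustration with assumed (not the paper's) parameters:

```python
import numpy as np

f0 = 5.0e6          # transmit frequency, Hz
c = 1540.0          # speed of sound in tissue, m/s
v = 0.05            # probe speed relative to the object, m/s

# Each scatterer's angle, and hence its Doppler shift, encodes its position
# in the imaged cross-section.
theta = np.deg2rad(np.array([0.0, 30.0, 60.0, 85.0]))
f_d = 2.0 * v / c * f0 * np.cos(theta)
for t, f in zip(np.rad2deg(theta), f_d):
    print(f"theta = {t:4.0f} deg  ->  Doppler shift = {f:6.1f} Hz")
```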
23 pages, 1684 KB  
Article
Method of Accelerated Low-Frequency Oscillation Analysis in Low-Inertia Power Systems Based on Orthogonal Decomposition
by Mihail Senyuk, Svetlana Beryozkina, Ismoil Odinaev, Inga Zicmane and Murodbek Safaraliev
Electronics 2025, 14(20), 4105; https://doi.org/10.3390/electronics14204105 - 20 Oct 2025
Abstract
The operation of modern electric power systems, shaped by the presence of renewable energy sources, flexible control devices based on power electronics, and reduced reserves of transmission capacity in the electric network, increases the relevance of identifying and damping low-frequency oscillations (LFOs) of the electrical mode. This paper presents a comparative analysis of methods for estimating the parameters of low-frequency oscillations. Their applicability limits are shown, and their low adaptability and the time cost of assessing the parameters of an electrical mode with low-frequency oscillations are revealed. A method for the accelerated evaluation of low-frequency oscillation parameters is proposed, with a delay of one quarter of the oscillation cycle. The method was tested on both synthetic and physical signals. In the first case, the data source was a four-machine mathematical model of a power system; in the second, signals of transient processes occurring in a real power system were used as physical data. The accuracy of the proposed method was obtained by calculating the difference between the original and reconstructed signals. As a result, calculated error values were obtained, describing the accuracy and efficiency of the proposed method. The proposed algorithm for estimating LFO parameters displayed an error not exceeding 0.8% for both synthetic and physical data.
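The quarter-cycle estimation idea can be sketched by projecting a short sample window onto an orthogonal sine/cosine basis; the known-frequency assumption below is a simplification of the paper's method:

```python
import numpy as np

# Over a short window the LFO is modelled as s(t) = A*cos(w t) + B*sin(w t)
# + offset; least-squares projection onto this orthogonal basis recovers the
# amplitude and phase from roughly a quarter of one oscillation cycle.
f, fs = 0.8, 100.0                      # LFO frequency (Hz) and sampling rate
w = 2.0 * np.pi * f
n = int(fs / f / 4)                     # quarter-cycle window (~31 samples)
t = np.arange(n) / fs

rng = np.random.default_rng(0)
s = 1.5 * np.cos(w * t) + 0.7 * np.sin(w * t) + 0.2 + rng.normal(0.0, 0.01, n)

basis = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones(n)])
(A, B, offset), *_ = np.linalg.lstsq(basis, s, rcond=None)
print(f"estimated amplitude: {np.hypot(A, B):.3f} "
      f"(true {np.hypot(1.5, 0.7):.3f})")
```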
23 pages, 4804 KB  
Article
Particle Image Velocimetry Algorithm Based on Spike Camera Adaptive Integration
by Xiaoqiang Li, Changxu Wu, Yichao Wang, Hongyuan Li, Yuan Li, Tiejun Huang, Yuhao Huang and Pengyu Lv
Sensors 2025, 25(20), 6468; https://doi.org/10.3390/s25206468 - 19 Oct 2025
Abstract
In particle image velocimetry (PIV), overexposure is particularly common in regions with high illumination. In particular, strong scattering or background reflection at the liquid–gas interface makes overexposure more pronounced, causing local pixel saturation that significantly reduces particle image quality and, in turn, the particle recognition rate and the accuracy of velocity field estimation. This study addresses overexposure in particle image velocimetry, specifically the inability to measure the velocity field when particles cannot be effectively detected in overexposed regions. Rather than using a traditional frame-based high-speed camera, this paper proposes a particle image velocimetry algorithm based on adaptively integrated spike camera data from a neuromorphic vision sensor (NVS). Specifically, by performing target-background segmentation on the high-frequency digital spike signals, the method suppresses highly illuminated background regions and thus effectively mitigates overexposure. The spike data are then adaptively integrated based on both the regional background illumination characteristics and the spike frequency features of particles with varying velocities, yielding reconstructed particle images with a high signal-to-noise ratio (SNR). Flow field computation is subsequently conducted on the reconstructed particle images, with validation through both simulation and experiment. In simulation, the average flow velocity estimation error of frame-based cameras in the overexposed area is 8.594 times that of spike-based cameras. In the experiments, the spike camera successfully captured continuous high-density particle trajectories, yielding measurable and continuous velocity fields. The results demonstrate that the proposed algorithm based on adaptive integration of spike camera data effectively addresses overexposure caused by high illumination of the liquid–gas interface in flow field measurements.
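A toy sketch of the adaptive integration step: a spike camera's per-pixel spike rate is proportional to brightness, so intensity is recovered as spikes per window, with the window length chosen per region, short where the background fires fast and long where dim particles need SNR. Rates, sizes, and the segmentation rule are illustrative assumptions:

```python
import numpy as np

# A spike pixel fires a binary spike each time accumulated brightness crosses
# a threshold, so spike count / window length approximates intensity.
rng = np.random.default_rng(0)
T, H, W = 1000, 64, 64                            # spike frames, image size

rate = np.full((H, W), 0.02)                      # dim background
rate[:16, :] = 0.9                                # glare band at the interface
rate[40, 10] = rate[41, 30] = 0.25                # seeded particles
spikes = rng.random((T, H, W)) < rate             # Bernoulli spike trains

def integrate(spikes, window):
    """Intensity estimate from the most recent `window` spike frames."""
    return spikes[-window:].mean(axis=0)

bright_mask = integrate(spikes, 100) > 0.5        # segment fast-firing regions
image = np.where(bright_mask,
                 integrate(spikes, 16),           # short window: glare region
                 integrate(spikes, 400))          # long window: dim particles
print(image[0, 0], image[40, 10])                 # ~0.9 glare, ~0.25 particle
```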
29 pages, 10676 KB  
Article
Deep-DSO: Improving Mapping of Direct Sparse Odometry Using CNN-Based Single-Image Depth Estimation
by Erick P. Herrera-Granda, Juan C. Torres-Cantero, Israel D. Herrera-Granda, José F. Lucio-Naranjo, Andrés Rosales, Javier Revelo-Fuelagán and Diego H. Peluffo-Ordóñez
Mathematics 2025, 13(20), 3330; https://doi.org/10.3390/math13203330 - 19 Oct 2025
Abstract
In recent years, SLAM, visual odometry, and structure-from-motion approaches have widely addressed the problems of 3D reconstruction and ego-motion estimation. Of the many input modalities that can be used to solve these ill-posed problems, the purely visual alternative using a single monocular RGB camera has attracted the attention of multiple researchers due to its low cost and widespread availability in handheld devices. One of the best proposals currently available is the Direct Sparse Odometry (DSO) system, which has demonstrated the ability to accurately recover trajectories and depth maps using monocular sequences as the only source of information. Given the impressive advances in single-image depth estimation using neural networks, this work proposes an extension of the DSO system, named DeepDSO. DeepDSO integrates the state-of-the-art NeW CRF neural network as a depth estimation module, providing depth prior information for each candidate point and thereby narrowing the point search interval along the epipolar line. This integration improves the DSO algorithm's depth point initialization and allows each proposed point to converge faster to its true depth. Experiments on the TUM-Mono dataset demonstrate that adding the neural network depth estimation module to the DSO pipeline significantly reduces rotation, translation, scale, start-segment alignment, end-segment alignment, and RMSE errors.
(This article belongs to the Section E1: Mathematics and Computer Science)
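The effect of a depth prior on epipolar search can be sketched directly: instead of sweeping the whole inverse-depth range, matching is restricted to the prior's 2-sigma band. Intrinsics, pose, and the prior are toy values, not DeepDSO's actual parameterization:

```python
import numpy as np

K = np.array([[500.0, 0.0, 320.0],               # toy pinhole intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])                    # pure lateral baseline

def project(p_cam):
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def epipolar_segment(u_ref, d_prior, sigma):
    """Pixel segment in the target frame covering the prior's 2-sigma depth band."""
    ray = np.linalg.inv(K) @ np.array([u_ref[0], u_ref[1], 1.0])
    ends = []
    for d in (max(d_prior - 2 * sigma, 0.1), d_prior + 2 * sigma):
        ends.append(project(R @ (ray * d) + t))  # back-project, transform, project
    return np.array(ends)

seg = epipolar_segment([400.0, 250.0], d_prior=3.0, sigma=0.3)
full = epipolar_segment([400.0, 250.0], d_prior=25.0, sigma=12.3)  # ~whole range
print("prior-constrained segment length:", np.linalg.norm(seg[1] - seg[0]))
print("unconstrained sweep length:", np.linalg.norm(full[1] - full[0]))
```

With these toy numbers the prior shrinks the search from roughly 124 pixels to about 7, which is the mechanism behind the faster depth convergence the abstract reports.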