Search Results (2,281)

Search Parameters:
Keywords = fusion estimation

20 pages, 6984 KiB  
Article
Winter Wheat Canopy Height Estimation Based on the Fusion of LiDAR and Multispectral Data
by Hao Ma, Yarui Liu, Shijie Jiang, Yan Zhao, Ce Yang, Xiaofei An, Kai Zhang and Hongwei Cui
Agronomy 2025, 15(5), 1094; https://doi.org/10.3390/agronomy15051094 - 29 Apr 2025
Viewed by 87
Abstract
Wheat canopy height is an important parameter for monitoring growth status. Accurately predicting wheat canopy height can improve field management efficiency and optimize fertilization and irrigation. Changes in the growth characteristics of wheat at different growth stages affect the canopy structure, leading to changes in the quality of the LiDAR point cloud (e.g., lower density, more noise points). Multispectral data can capture these changes in the crop canopy and provide more information about the growth status of wheat. Therefore, a method is proposed that fuses LiDAR point cloud features and multispectral feature parameters to estimate the canopy height of winter wheat. Low-altitude unmanned aerial systems (UASs) equipped with LiDAR and multispectral cameras were used to collect point cloud and multispectral data from experimental winter wheat fields during three key growth stages: green-up (GUS), jointing (JS), and booting (BS). Analysis of variance, variance inflation factor, and Pearson correlation analysis were employed to extract point cloud features and multispectral feature parameters significantly correlated with the canopy height. Four wheat canopy height estimation models were constructed based on the Optuna-optimized RF (OP-RF), Elastic Net regression, Extreme Gradient Boosting, and Support Vector Regression models. The model training results showed that the OP-RF model provided the best performance across all three growth stages of wheat. The coefficient of determination values were 0.921, 0.936, and 0.842 at GUS, JS, and BS, respectively. The root mean square error values were 0.009 m, 0.016 m, and 0.015 m. The mean absolute error values were 0.006 m, 0.011 m, and 0.011 m, respectively. Fusing point cloud features and multispectral feature parameters also yielded better estimates than either type of feature alone, and the results meet the requirements for canopy height prediction. These results demonstrate that the fusion of point cloud features and multispectral parameters can improve the accuracy of crop canopy height monitoring. The method offers a valuable approach for remote sensing monitoring of the phenotypic information of low, densely planted crops and provides important data support for crop growth assessment and field management. Full article
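A minimal sketch of the Optuna-optimized Random Forest (OP-RF) idea, not the authors' code: a RandomForestRegressor tuned by Optuna on cross-validated R², with random placeholder arrays standing in for the fused point-cloud and multispectral features and the measured canopy heights.

```python
import numpy as np
import optuna
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Placeholder fused feature matrix (point-cloud + multispectral parameters) and
# measured canopy heights (m); real inputs would come from the UAS campaign.
rng = np.random.default_rng(0)
X = rng.random((200, 12))
y = rng.random(200) * 0.5

def objective(trial):
    # Search a small Random Forest hyperparameter space with Optuna.
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 50, 400),
        "max_depth": trial.suggest_int("max_depth", 3, 20),
        "min_samples_leaf": trial.suggest_int("min_samples_leaf", 1, 10),
    }
    model = RandomForestRegressor(random_state=0, **params)
    # Maximize cross-validated R^2, the metric reported in the abstract.
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
best_rf = RandomForestRegressor(random_state=0, **study.best_params).fit(X, y)
```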
(This article belongs to the Collection Machine Learning in Digital Agriculture)

26 pages, 9869 KiB  
Article
CAGFNet: A Cross-Attention Image-Guided Fusion Network for Disparity Estimation of High-Resolution Satellite Stereo Images
by Qian Zhang, Jia Ge, Shufang Tian and Laidian Xi
Remote Sens. 2025, 17(9), 1572; https://doi.org/10.3390/rs17091572 - 28 Apr 2025
Viewed by 216
Abstract
Disparity estimation in high-resolution satellite stereo images is a critical task in remote sensing and photogrammetry. However, significant challenges arise due to the complexity of satellite stereo image scenes and the dynamic variations in disparities. Stereo matching becomes particularly difficult in areas with textureless regions, repetitive patterns, disparity discontinuities, and occlusions. Recent advancements in deep learning have opened new research avenues for disparity estimation. This paper presents a novel end-to-end disparity estimation network designed to address these challenges through three key innovations: (1) a cross-attention mechanism for robust feature extraction, (2) an image-guided module that preserves geometric details, and (3) a 3D feature fusion module for context-aware disparity refinement. Experiments on the US3D dataset demonstrate state-of-the-art performance, achieving an endpoint error (EPE) of 1.466 pixels (14.71% D1-error) on the Jacksonville subset and 0.996 pixels (10.53% D1-error) on the Omaha subset. The experimental results confirm that the proposed network excels in disparity estimation, exhibiting strong learning capability and robust generalization performance. Full article
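The cross-attention feature extraction named in innovation (1) can be illustrated generically in PyTorch; the tensor shapes and module below are assumptions for illustration, not the CAGFNet implementation.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Generic cross-attention: left-image features attend to right-image features."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_left, feat_right):
        # feat_*: (batch, tokens, dim) -- flattened feature maps of a stereo pair.
        attended, _ = self.attn(query=feat_left, key=feat_right, value=feat_right)
        return self.norm(feat_left + attended)   # residual connection

# Toy usage with hypothetical 32x32 feature maps of channel width 64.
left = torch.randn(2, 32 * 32, 64)
right = torch.randn(2, 32 * 32, 64)
out = CrossAttention(64)(left, right)            # -> (2, 1024, 64)
```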

14 pages, 488 KiB  
Article
A Theoretical Study of the Ionization States and Electrical Conductivity of Tantalum Plasma
by Shi Chen, Qishuo Zhang, Qianyi Feng, Ziyue Yu, Jingyi Mai, Hongping Zhang, Lili Huang, Chengjin Huang and Mu Li
Plasma 2025, 8(2), 16; https://doi.org/10.3390/plasma8020016 - 28 Apr 2025
Viewed by 140
Abstract
Tantalum is extensively used in inertial confinement fusion research for targets in radiation transport experiments, hohlraums in magnetized fusion experiments, and lining foams for hohlraums to suppress wall motions. To comprehend the physical processes associated with these applications, detailed information regarding the ionization composition and electrical conductivity of tantalum plasma across a wide range of densities and temperatures is essential. In this study, we calculate the densities of ionization species and the electrical conductivity of partially ionized, nonideal tantalum plasma based on a simplified theoretical model that accounts for high ionization states up to the atomic number of the element and the lowering of ionization energies. A comparison of the ionization compositions between tantalum and copper plasmas highlights the significant role of ionization energies in determining species populations. Additionally, the average electron–neutral momentum transfer cross-section significantly influences the electrical conductivity calculations, and calibration with experimental measurements offers a method for estimating this atomic parameter. The impact of electrical conductivity in the intermediate-density range on the laser absorption coefficient is discussed using the Drude model. Full article
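The Drude model invoked for the laser absorption discussion relates conductivity to electron density n_e and collision time tau via sigma(omega) = n_e e^2 tau / (m_e (1 - i omega tau)); a small numerical illustration with placeholder plasma parameters, not values from the paper.

```python
import numpy as np

e = 1.602176634e-19        # elementary charge, C
m_e = 9.1093837015e-31     # electron mass, kg

def drude_conductivity(n_e, tau, omega=0.0):
    """Complex Drude conductivity sigma(omega) = n_e e^2 tau / (m_e (1 - i omega tau))."""
    return n_e * e**2 * tau / (m_e * (1.0 - 1j * omega * tau))

# Placeholder values: electron density 1e27 m^-3, collision time 1e-15 s,
# evaluated at a 1053 nm laser angular frequency.
omega_laser = 2 * np.pi * 2.998e8 / 1053e-9
sigma = drude_conductivity(1e27, 1e-15, omega_laser)
print(f"DC conductivity      : {drude_conductivity(1e27, 1e-15).real:.3e} S/m")
print(f"|sigma(omega_laser)| : {abs(sigma):.3e} S/m")
```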
(This article belongs to the Special Issue Feature Papers in Plasma Sciences 2025)

17 pages, 10247 KiB  
Article
Pose Measurement of Non-Cooperative Space Targets Based on Point Line Feature Fusion in Low-Light Environments
by Haifeng Zhang, Jiaxin Wu, Han Ai, Delian Liu, Chao Mei and Maosen Xiao
Electronics 2025, 14(9), 1795; https://doi.org/10.3390/electronics14091795 - 28 Apr 2025
Viewed by 146
Abstract
Pose measurement of non-cooperative targets in space is one of the key technologies in space missions. However, most existing methods simulate well-lit environments and do not consider the degradation of algorithms in low-light conditions. Additionally, due to the limited computing capabilities of space platforms, there is a higher demand for real-time processing of algorithms. This paper proposes a real-time pose measurement method based on binocular vision that is suitable for low-light environments. Firstly, the traditional point feature extraction algorithm is adaptively improved based on lighting conditions, greatly reducing the impact of lighting on the effectiveness of feature point extraction. By combining point feature matching with epipolar constraints, the matching range of feature points is narrowed down to the epipolar line, significantly improving the matching speed and accuracy. Secondly, utilizing the structural information of the spacecraft, line features are introduced and processed in parallel with point features, greatly enhancing the accuracy of pose measurement results. Finally, an adaptive weighted multi-feature pose fusion method based on lighting conditions is introduced to obtain the optimal pose estimation results. Simulation and physical experiment results demonstrate that this method can obtain high-precision target pose information in a real-time and stable manner, both in well-lit and low-light environments. Full article
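A toy version of the final adaptive weighted fusion step, blending point-based and line-based pose estimates with a light-dependent weight; the linear weighting rule and the numbers are placeholders, not the paper's formula.

```python
import numpy as np

def fuse_poses(t_point, t_line, brightness, lo=30.0, hi=120.0):
    """Blend two translation estimates (m) with a light-dependent weight.

    Under dim illumination the point-feature pose is assumed less reliable, so its
    weight is reduced; the linear ramp between `lo` and `hi` (mean image brightness)
    is purely illustrative.
    """
    w_point = np.clip((brightness - lo) / (hi - lo), 0.1, 0.9)
    w_line = 1.0 - w_point
    return w_point * np.asarray(t_point) + w_line * np.asarray(t_line)

# Hypothetical estimates of the target position in the camera frame.
print(fuse_poses([1.02, 0.11, 4.95], [1.05, 0.09, 5.02], brightness=45.0))
```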

17 pages, 7946 KiB  
Article
Optical Camera Characterization for Feature-Based Navigation in Lunar Orbit
by Pierluigi Federici, Antonio Genova, Simone Andolfo, Martina Ciambellini, Riccardo Teodori and Tommaso Torrini
Aerospace 2025, 12(5), 374; https://doi.org/10.3390/aerospace12050374 - 26 Apr 2025
Viewed by 208
Abstract
Accurate localization is a key requirement for deep-space exploration, enabling spacecraft operations with limited ground support. Upcoming commercial and scientific missions to the Moon are designed to extensively use optical measurements during low-altitude orbital phases, descent and landing, and high-risk operations, due to the versatility and suitability of these data for onboard processing. Navigation frameworks based on optical data analysis have been developed to support semi- or fully-autonomous onboard systems, enabling precise relative localization. To achieve high-accuracy navigation, optical data have been combined with complementary measurements using sensor fusion techniques. Absolute localization is further supported by integrating onboard maps of cataloged surface features, enabling position estimation in an inertial reference frame. This study presents a navigation framework for optical image processing aimed at supporting the autonomous operations of lunar orbiters. The primary objective is a comprehensive characterization of the navigation camera’s properties and performance to ensure orbit determination uncertainties remain below 1% of the spacecraft altitude. In addition to an analysis of measurement noise, which accounts for both hardware and software contributions and is evaluated across multiple levels consistent with prior literature, this study emphasizes the impact of process noise on orbit determination accuracy. The mismodeling of orbital dynamics significantly degrades orbit estimation performance, even in scenarios involving high-performing navigation cameras. To evaluate the trade-off between measurement and process noise, representing the relative accuracy of the navigation camera and the onboard orbit propagator, numerical simulations were carried out in a synthetic lunar environment using a near-polar, low-altitude orbital configuration. Under nominal conditions, the optical measurement noise was set to 2.5 px, corresponding to a ground resolution of approximately 160 m based on the focal length, pixel pitch, and altitude of the modeled camera. With a conservative process noise model, position errors of about 200 m are observed in both transverse and normal directions. The results demonstrate the estimation framework’s robustness to modeling uncertainties, adaptability to varying measurement conditions, and potential to support increased onboard autonomy for small spacecraft in deep-space missions. Full article
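The quoted correspondence between 2.5 px of measurement noise and roughly 160 m on the ground follows from the pinhole ground-sampling relation GSD = altitude × pixel pitch / focal length; a quick check with assumed camera parameters chosen only to reproduce that order of magnitude, not the camera characterized in the paper.

```python
# Ground-sampling distance for a pinhole camera: GSD = altitude * pixel_pitch / focal_length.
altitude_m = 100e3        # assumed low-lunar-orbit altitude
pixel_pitch_m = 5.5e-6    # assumed detector pixel pitch
focal_length_m = 8.6e-3   # assumed lens focal length

gsd = altitude_m * pixel_pitch_m / focal_length_m   # metres per pixel
noise_px = 2.5                                       # measurement noise from the abstract
print(f"GSD ~ {gsd:.0f} m/px, 2.5 px noise ~ {noise_px * gsd:.0f} m on the ground")
```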
(This article belongs to the Special Issue Planetary Exploration)

16 pages, 6406 KiB  
Article
A Shooting Distance Adaptive Crop Yield Estimation Method Based on Multi-Modal Fusion
by Dan Xu, Ba Li, Guanyun Xi, Shusheng Wang, Lei Xu and Juncheng Ma
Agronomy 2025, 15(5), 1036; https://doi.org/10.3390/agronomy15051036 - 25 Apr 2025
Viewed by 212
Abstract
To address the low estimation accuracy of deep learning-based crop yield image recognition methods under untrained shooting distances, this study proposes a shooting distance adaptive crop yield estimation method by fusing RGB and depth image information through multi-modal data fusion. Taking strawberry fruit fresh weight as an example, RGB and depth image data of 348 strawberries were collected at nine heights ranging from 70 to 115 cm. First, based on RGB images and shooting height information, a single-modal crop yield estimation model was developed by training a convolutional neural network (CNN) after cropping strawberry fruit images using the relative area conversion method. Second, the height information was expanded into a data matrix matching the RGB image dimensions, and multi-modal fusion models were investigated through input-layer and output-layer fusion strategies. Finally, two additional approaches were explored: direct fusion of RGB and depth images, and extraction of average shooting height from depth images for estimation. The models were tested at two untrained heights (80 cm and 100 cm). Results showed that when using only RGB images and height information, the relative area conversion method achieved the highest accuracy, with R2 values of 0.9212 and 0.9304, normalized root mean square error (NRMSE) of 0.0866 and 0.0814, and mean absolute percentage error (MAPE) of 0.0696 and 0.0660 at the two untrained heights. By further incorporating depth data, the highest accuracy was achieved through input-layer fusion of RGB images with extracted average height from depth images, improving R2 to 0.9475 and 0.9384, reducing NRMSE to 0.0707 and 0.0766, and lowering MAPE to 0.0591 and 0.0610. Validation using a developed shooting distance adaptive crop yield estimation platform at two random heights yielded MAPE values of 0.0813 and 0.0593. This model enables adaptive crop yield estimation across varying shooting distances, significantly enhancing accuracy under untrained conditions. Full article
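Input-layer fusion as described, expanding the scalar shooting height into a plane matching the RGB image and stacking it as a fourth channel, can be sketched as follows; the toy CNN and normalization are illustrative assumptions, not the paper's network.

```python
import numpy as np
import torch
import torch.nn as nn

def fuse_rgb_height(rgb: np.ndarray, height_cm: float) -> torch.Tensor:
    """Stack a constant height plane onto an RGB image as a fourth input channel."""
    h, w, _ = rgb.shape
    height_plane = np.full((h, w, 1), height_cm / 115.0, dtype=np.float32)  # normalize by max height
    fused = np.concatenate([rgb.astype(np.float32) / 255.0, height_plane], axis=-1)
    return torch.from_numpy(fused).permute(2, 0, 1).unsqueeze(0)            # (1, 4, H, W)

# Toy 4-channel CNN regressor for fruit fresh weight (illustrative only).
model = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)

rgb = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)  # placeholder fruit crop
weight_pred = model(fuse_rgb_height(rgb, height_cm=85.0))
```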
(This article belongs to the Special Issue Smart Farming Technologies for Sustainable Agriculture—2nd Edition)

18 pages, 15380 KiB  
Article
A High-Precision Method for Warehouse Material Level Monitoring Using Millimeter-Wave Radar and 3D Surface Reconstruction
by Wenxin Zhang and Yi Gu
Sensors 2025, 25(9), 2716; https://doi.org/10.3390/s25092716 - 25 Apr 2025
Viewed by 139
Abstract
This study presents a high-precision warehouse material level monitoring method that integrates millimeter-wave radar with 3D surface reconstruction to address the limitations of LiDAR, which is highly susceptible to dust and haze interference in complex storage environments. The proposed method employs Chirp-Z Transform (CZT) super-resolution processing to enhance spectral resolution and measurement accuracy. To improve grain surface identification, an anomalous signal correction method based on angle–range feature fusion is introduced, mitigating errors caused by weak reflections and multipath effects. The point cloud data acquired by the radar undergo denoising, smoothing, and enhancement using statistical filtering, Moving Least Squares (MLS) smoothing, and bicubic spline interpolation to ensure data continuity and accuracy. A Poisson Surface Reconstruction algorithm is then applied to generate a continuous 3D model of the grain heap. The vector triple product method is used to estimate grain volume. Experimental results show a reconstruction volume error within 3%, demonstrating the method’s accuracy, robustness, and adaptability. The reconstructed surface accurately represents grain heap geometry, making this approach well suited for real-time warehouse monitoring and providing reliable support for material balance and intelligent storage management. Full article
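The vector triple product volume estimate sums signed tetrahedron volumes over the triangles of the reconstructed surface; a minimal sketch in which a unit cube stands in for the Poisson-reconstructed grain surface as a sanity check.

```python
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Volume of a closed triangle mesh via the vector triple product.

    Each triangle (v0, v1, v2) and the origin form a tetrahedron whose signed
    volume is v0 . (v1 x v2) / 6; summing over all faces gives the enclosed volume.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    signed = np.einsum("ij,ij->i", v0, np.cross(v1, v2)) / 6.0
    return abs(signed.sum())

# Unit cube with outward-oriented triangles (12 faces); expected volume 1.0.
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
faces = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5], [0, 4, 5], [0, 5, 1],
                  [2, 3, 7], [2, 7, 6], [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
print(mesh_volume(verts, faces))   # -> 1.0
```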
(This article belongs to the Section Industrial Sensors)

23 pages, 10943 KiB  
Article
An Enhanced Algorithm Based on Dual-Input Feature Fusion ShuffleNet for Synthetic Aperture Radar Operating Mode Recognition
by Haiying Wang, Wei Lu, Yingying Wu, Qunying Zhang, Xiaojun Liu and Guangyou Fang
Remote Sens. 2025, 17(9), 1523; https://doi.org/10.3390/rs17091523 - 25 Apr 2025
Viewed by 192
Abstract
Synthetic aperture radar (SAR) operating mode recognition plays a crucial role in SAR countermeasures and serves as the foundation for effective SAR interference. To address the limitations of current SAR operating mode recognition algorithms, such as low recognition rates, poor generalization, and limited engineering applicability under low signal-to-noise ratio (SNR) conditions, an enhanced algorithm named dual-input feature fusion ShuffleNet (DIFF-ShuffleNet) based on intercepted SAR signal data is proposed. First, the SAR signal is processed by combining pulse compression and time–frequency analysis technology to enhance anti-noise robustness. Then, an improved lightweight ShuffleNet architecture is designed to fuse range pulse compression (RPC) maps and azimuth time–frequency features, significantly improving recognition accuracy in low-SNR environments while maintaining practical deployability. Moreover, an improved coarse-to-fine search fractional Fourier transform (CFS-FRFT) algorithm is proposed to address the chirp rate estimation required for RPC. Simulations demonstrate that the proposed SAR operating mode recognition algorithm achieves over 95.00% recognition accuracy for SAR operating modes (stripmap, spotlight, sliding spotlight, and scan) at an SNR greater than −8 dB. Finally, four sets of measured SAR data are used to validate the algorithm’s effectiveness, with all recognition results being correct, demonstrating the algorithm’s practical applicability. Full article
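Range pulse compression of a linear-FM pulse, the first step in the processing chain above, is conventionally a matched-filter correlation; a numpy sketch with made-up chirp parameters, not values from any intercepted SAR signal.

```python
import numpy as np

# Synthetic LFM chirp (illustrative parameters).
fs, T, K = 100e6, 10e-6, 5e12            # sample rate (Hz), pulse width (s), chirp rate (Hz/s)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * K * t**2)

# Received echo: the chirp delayed by 2 us plus complex noise.
delay = int(2e-6 * fs)
rx = np.zeros(4096, complex)
rx[delay:delay + chirp.size] = chirp
rx += 0.1 * (np.random.randn(rx.size) + 1j * np.random.randn(rx.size))

# Matched filter = cross-correlation with the conjugated replica, computed via FFT.
n = rx.size + chirp.size - 1
compressed = np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(chirp, n)))
peak = int(np.argmax(np.abs(compressed)))
print(f"Compressed peak at sample {peak} (expected near {delay})")
```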

14 pages, 2220 KiB  
Article
An Axial Compression Transformer for Efficient Human Pose Estimation
by Wen Tan, Haixiang Zhang and Xinyi Song
Appl. Sci. 2025, 15(9), 4746; https://doi.org/10.3390/app15094746 - 24 Apr 2025
Viewed by 140
Abstract
The Transformer has a wide range of applications in human pose estimation. It can model global dependencies in images through the self-attention mechanism to obtain key human body information. However, the Transformer is computationally expensive. An axial compression pose transformer (ACPose) method is proposed that reduces part of the Transformer's computational cost by axially compressing the input matrix while maintaining a global receptive field through feature fusion. A Local Enhancement Module is constructed to avoid losing too much feature information during compression. Experiments on the COCO dataset show a significant reduction in computational cost compared to state-of-the-art transformer-based algorithms. Full article
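A rough sketch of the axial compression idea: compressing the token grid along each axis before attention drops the cost from O((HW)^2) toward O(H^2 + W^2). This is a generic illustration under assumed shapes, not the ACPose architecture.

```python
import torch
import torch.nn as nn

class AxialCompressionAttention(nn.Module):
    """Toy module: pool the feature grid along each axis, attend over the short
    axial sequences, then broadcast the two axial contexts back onto the grid."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W)
        rows = x.mean(dim=3).transpose(1, 2)    # (B, H, C) -- width axis compressed
        cols = x.mean(dim=2).transpose(1, 2)    # (B, W, C) -- height axis compressed
        rows, _ = self.row_attn(rows, rows, rows)
        cols, _ = self.col_attn(cols, cols, cols)
        # Fuse the axial contexts with the input to retain a global receptive field.
        return x + rows.transpose(1, 2).unsqueeze(-1) + cols.transpose(1, 2).unsqueeze(-2)

out = AxialCompressionAttention(64)(torch.randn(2, 64, 16, 12))   # -> (2, 64, 16, 12)
```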

27 pages, 5151 KiB  
Review
Advancing Sparse Vegetation Monitoring in the Arctic and Antarctic: A Review of Satellite and UAV Remote Sensing, Machine Learning, and Sensor Fusion
by Arthur Platel, Juan Sandino, Justine Shaw, Barbara Bollard and Felipe Gonzalez
Remote Sens. 2025, 17(9), 1513; https://doi.org/10.3390/rs17091513 - 24 Apr 2025
Viewed by 220
Abstract
Polar vegetation is a critical component of global biodiversity and ecosystem health but is vulnerable to climate change and environmental disturbances. Analysing the spatial distribution, regional variations, and temporal dynamics of this vegetation is essential for implementing conservation efforts in these unique environments. However, polar regions pose distinct challenges for remote sensing, including sparse vegetation, extreme weather, and frequent cloud cover. Advances in remote sensing technologies, including satellite platforms, uncrewed aerial vehicles (UAVs), and sensor fusion techniques, have improved vegetation monitoring capabilities. This review explores applications—including land cover mapping, vegetation health assessment, biomass estimation, and temporal monitoring—and the methods developed to address these needs. We also examine the role of spatial, spectral, and temporal resolution in improving monitoring accuracy and addressing polar-specific challenges. Sensors such as Red, Green, and Blue (RGB), multispectral, hyperspectral, Synthetic Aperture Radar (SAR), light detection and ranging (LiDAR), and thermal, as well as UAV and satellite platforms, are analysed for their roles in low-stature polar vegetation monitoring. We highlight the potential of sensor fusion and advanced machine learning techniques in overcoming traditional barriers, offering a path forward for enhanced monitoring. This paper highlights how advances in remote sensing enhance polar vegetation research and inform adaptive management strategies. Full article

16 pages, 8058 KiB  
Article
YOLO-BCD: A Lightweight Multi-Module Fusion Network for Real-Time Sheep Pose Estimation
by Chaojie Sun, Junguo Hu, Qingyue Wang, Chao Zhu, Lei Chen and Chunmei Shi
Sensors 2025, 25(9), 2687; https://doi.org/10.3390/s25092687 - 24 Apr 2025
Viewed by 192
Abstract
The real-time monitoring of animal postures through computer vision techniques has become essential for modern precision livestock management. To overcome the limitations of current behavioral analysis systems in balancing computational efficiency and detection accuracy, this study develops an optimized deep learning framework named YOLOv8-BCD specifically designed for ovine posture recognition. The proposed architecture employs a multi-level lightweight design incorporating enhanced feature fusion mechanisms and spatial-channel attention modules, effectively improving detection performance in complex farm environments with occlusions and variable lighting. Our methodology introduces three technical innovations: (1) Adaptive multi-scale feature aggregation through bidirectional cross-layer connections. (2) Context-aware attention weighting for critical region emphasis. (3) Streamlined detection head optimization for resource-constrained devices. The experimental dataset comprises 1476 annotated images capturing three characteristic postures (standing, lying, and side lying) under practical farming conditions. Comparative evaluations demonstrate significant improvements over baseline models, achieving 91.7% recognition accuracy with 389 FPS processing speed while maintaining 19.2% parameter reduction and 32.1% lower computational load compared to standard YOLOv8. This efficient solution provides technical support for automated health monitoring in intensive livestock production systems, showing practical potential for large-scale agricultural applications requiring real-time behavioral analysis. Full article
(This article belongs to the Section Smart Agriculture)

19 pages, 4008 KiB  
Article
Relative Localization and Dynamic Tracking of Underwater Robots Based on 3D-AprilTag
by Guoqiang Tang, Tengfei Yang, Yan Yang, Qiang Zhao, Minyi Xu and Guangming Xie
J. Mar. Sci. Eng. 2025, 13(5), 833; https://doi.org/10.3390/jmse13050833 - 23 Apr 2025
Viewed by 224
Abstract
This paper presents a visual localization system for underwater robots, aimed at achieving high-precision relative positioning and dynamic target tracking. A 3D AprilTag reference structure is constructed using a cubic configuration, and a high-resolution camera module is integrated into the AUV for real-time tag detection and pose decoding. By combining multi-face marker geometry with a fused state estimation strategy, the proposed method improves pose continuity and robustness during multi-tag transitions. To address pose estimation discontinuities caused by viewpoint changes and tag switching, we introduce a fusion-based observation-switching Kalman filter, which performs weighted integration of multiple tag observations based on relative distance, viewing angle, and detection confidence, ensuring smooth pose updates during tag transitions. The experimental results demonstrate that the system maintains stable pose estimation and trajectory continuity even under rapid viewpoint changes and frequent tag switches. These results validate the feasibility and applicability of the proposed method for underwater relative localization and tracking tasks. Full article
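The observation weighting behind the fusion-based switching can be illustrated with a simple rule that favors nearby, face-on, confidently detected tags; the weighting function and numbers below are placeholders, not the paper's formulation.

```python
import numpy as np

def fuse_tag_observations(obs):
    """Weighted fusion of relative-position observations from several AprilTags.

    `obs` holds, per tag, the tag-derived position (m), relative distance (m),
    viewing angle (rad, 0 = face-on), and detection confidence in [0, 1].
    The weight formula is illustrative only.
    """
    positions = np.array([o["position"] for o in obs])
    w = np.array([o["confidence"] * np.cos(o["angle"]) / (1.0 + o["distance"])
                  for o in obs])
    w = np.clip(w, 1e-6, None)
    return (w[:, None] * positions).sum(axis=0) / w.sum()

obs = [
    {"position": [0.52, -0.10, 1.98], "distance": 2.0, "angle": 0.2, "confidence": 0.9},
    {"position": [0.49, -0.12, 2.05], "distance": 2.1, "angle": 0.9, "confidence": 0.6},
]
print(fuse_tag_observations(obs))
```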
(This article belongs to the Section Ocean Engineering)

23 pages, 12327 KiB  
Article
Dynamic Deformation Analysis of Super High-Rise Buildings Based on GNSS and Accelerometer Fusion
by Xingxing Xiao, Houzeng Han, Jian Wang, Dong Li, Cai Chen and Lei Wang
Sensors 2025, 25(9), 2659; https://doi.org/10.3390/s25092659 - 23 Apr 2025
Viewed by 214
Abstract
To accurately capture the dynamic displacement of super-tall buildings under complex conditions, this study proposes a data fusion algorithm that integrates NRBO-FMD optimization with Adaptive Robust Kalman Filtering (ARKF). The NRBO-FMD method preprocesses GNSS and accelerometer data to mitigate GNSS multipath effects, unmodeled errors, and high-frequency noise in accelerometer signals. Subsequently, ARKF fuses the preprocessed data to achieve high-precision displacement reconstruction. Numerical simulations under varying noise conditions validated the algorithm’s accuracy. Field experiments conducted on the Hairong Square Building in Changchun further demonstrated its effectiveness in estimating three-dimensional dynamic displacement. Key findings are as follows: (1) The NRBO-FMD algorithm significantly reduced noise while preserving essential signal characteristics. For GNSS data, the root mean square error (RMSE) was reduced to 0.7 mm for the 100 s dataset and 1.0 mm for the 200 s dataset, with corresponding signal-to-noise ratio (SNR) improvements of 3.0 dB and 6.0 dB. For accelerometer data, the RMSE was reduced to 3.0 mm (100 s) and 6.2 mm (200 s), with a 4.1 dB SNR gain. (2) The NRBO-FMD–ARKF fusion algorithm achieved high accuracy, with RMSE values of 0.7 mm (100 s) and 1.9 mm (200 s). Consistent PESD and POSD values demonstrated the algorithm’s long-term stability and effective suppression of irregular errors. (3) The algorithm successfully fused 1 Hz GNSS data with 100 Hz accelerometer data, overcoming the limitations of single-sensor approaches. The fusion yielded an RMSE of 3.6 mm, PESD of 2.6 mm, and POSD of 4.8 mm, demonstrating both precision and robustness. Spectral analysis revealed key dynamic response frequencies ranging from 0.003 to 0.314 Hz, facilitating natural frequency identification, structural stiffness tracking, and early-stage performance assessment. This method shows potential for improving the integration of GNSS and accelerometer data in structural health monitoring. Future work will focus on real-time and predictive displacement estimation to enhance monitoring responsiveness and early-warning capabilities. Full article
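The core fusion step, combining 1 Hz GNSS displacement with 100 Hz acceleration, can be sketched as a textbook Kalman filter whose prediction is driven by the measured acceleration and whose update uses GNSS displacement when available; the noise settings are placeholders, and this is not the NRBO-FMD–ARKF algorithm itself.

```python
import numpy as np

dt = 0.01                         # 100 Hz accelerometer step
F = np.array([[1, dt], [0, 1]])   # state transition for [displacement, velocity]
B = np.array([[0.5 * dt**2], [dt]])
H = np.array([[1.0, 0.0]])        # GNSS observes displacement only
Q = 1e-6 * np.eye(2)              # process noise (placeholder)
R = np.array([[1e-6]])            # GNSS noise, ~1 mm std (placeholder)

x = np.zeros((2, 1))
P = np.eye(2)

def step(acc, gnss_disp=None):
    """One predict step driven by acceleration; update only when a GNSS sample arrives."""
    global x, P
    x = F @ x + B * acc
    P = F @ P @ F.T + Q
    if gnss_disp is not None:                     # GNSS arrives every 100th step (1 Hz)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[gnss_disp]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x[0, 0]

# Simulate a slow 0.2 Hz sway and fuse the two sensors.
t = np.arange(0, 20, dt)
true_disp = 0.01 * np.sin(2 * np.pi * 0.2 * t)
true_acc = -0.01 * (2 * np.pi * 0.2) ** 2 * np.sin(2 * np.pi * 0.2 * t)
est = [step(a + np.random.randn() * 1e-4,
            d + np.random.randn() * 1e-3 if i % 100 == 0 else None)
       for i, (a, d) in enumerate(zip(true_acc, true_disp))]
```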
(This article belongs to the Section Navigation and Positioning)

24 pages, 7284 KiB  
Article
Soybean Lodging Classification and Yield Prediction Using Multimodal UAV Data Fusion and Deep Learning
by Xingmei Xu, Yushi Fang, Guangyao Sun, Yong Zhang, Lei Wang, Chen Chen, Lisuo Ren, Lei Meng, Yinghui Li, Lijuan Qiu, Yan Guo, Helong Yu and Yuntao Ma
Remote Sens. 2025, 17(9), 1490; https://doi.org/10.3390/rs17091490 - 23 Apr 2025
Viewed by 340
Abstract
UAV remote sensing is widely used in the agricultural sector due to its non-destructive, rapid, and cost-effective advantages. This study utilized two years of field data with multisource fused imagery of soybeans to evaluate lodging conditions and investigate the impact of lodging grade information on yield prediction. Unlike traditional approaches that build empirical lodging models using band reflectance, vegetation indices, and texture features, this research introduces a transfer learning framework. This framework employs a ResNet18 encoder to directly extract features from raw images, bypassing the complexity of manual feature extraction processes. To address the imbalance in the lodging dataset, the Synthetic Minority Over-sampling Technique (SMOTE) strategy was employed in the feature space to balance the training set. The findings reveal that deep learning effectively extracts meaningful features from UAV imagery, outperforming traditional methods in lodging grade classification across all growth stages. At 65 days after emergence (DAE), lodging grade classification using ResNet18 features achieved the highest accuracy (accuracy = 0.76, recall = 0.76, F1 score = 0.73), significantly exceeding the performance of traditional methods. However, classification accuracy was relatively low in plots with higher lodging grades (lodging grades = 3, 5, 7), with an accuracy of 0.42 and an F1 score of 0.56. After applying the SMOTE module to balance the samples, the classification accuracy in plots with higher lodging grades improved to 0.65, marking an increase of 54.76%. To improve accuracy in yield prediction, this study integrates lodging information with other features, such as canopy spectral reflectance, vegetation indices, and texture features, using two multimodal data fusion strategies: input-level fusion (ResNet-EF) and intermediate-level fusion (ResNet-MF). The findings reveal that the intermediate-level fusion strategy consistently outperforms input-level fusion in yield prediction accuracy across all growth stages. Specifically, the intermediate-level fusion model incorporating measured lodging grade information achieved the highest prediction accuracy at 85 DAE (R2 = 0.65, RMSE = 529.56 kg/ha). Furthermore, when predicted lodging information was used, the model’s performance remained comparable to that of the measured lodging grades, underscoring the critical role of lodging factors in enhancing yield estimation accuracy. Full article
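The SMOTE-in-feature-space step can be illustrated with a frozen torchvision ResNet18 encoder and imbalanced-learn; the image tensor and lodging-grade labels below are placeholders for the UAV plot data.

```python
import torch
import torchvision.models as models
from imblearn.over_sampling import SMOTE

# Pretrained ResNet18 as a frozen feature extractor (classification head removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

# Placeholder batch of UAV plot images and (imbalanced) lodging-grade labels.
images = torch.randn(40, 3, 224, 224)
labels = [0] * 30 + [3] * 6 + [5] * 4            # few high-grade lodging samples

with torch.no_grad():
    feats = backbone(images).numpy()              # (40, 512) feature vectors

# Balance the training set in feature space before fitting the classifier.
feats_bal, labels_bal = SMOTE(k_neighbors=3, random_state=0).fit_resample(feats, labels)
print(feats_bal.shape, sorted(set(labels_bal)))
```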

25 pages, 10128 KiB  
Article
Jitter Error Correction for the HaiYang-3A Satellite Based on Multi-Source Attitude Fusion
by Yanli Wang, Ronghao Zhang, Yizhang Xu, Xiangyu Zhang, Rongfan Dai and Shuying Jin
Remote Sens. 2025, 17(9), 1489; https://doi.org/10.3390/rs17091489 - 23 Apr 2025
Viewed by 184
Abstract
The periodic rotation of the Ocean Color and Temperature Scanner (OCTS) introduces jitter errors in the HaiYang-3A (HY-3A) satellite, leading to internal geometric distortion in optical imagery and significant registration errors in multispectral images. These issues severely degrade the application value of the optical data. To achieve near real-time compensation, a novel jitter error estimation and correction method based on multi-source attitude data fusion is proposed in this paper. By fusing the measurement data from star sensors and gyroscopes, satellite attitude parameters containing jitter errors are precisely resolved. The jitter component of the attitude parameters is extracted using a fitting method with an optimal sliding window. The jitter error model is then established using a least-squares solution and spectral characteristics. Subsequently, using the imaging geometric model and stable resampling, the optical remote sensing image with jitter distortion is corrected. Experimental results reveal a jitter frequency of 0.187 Hz, matching the OCTS rotation period, with yaw, roll, and pitch amplitudes quantified as 0.905”, 0.468”, and 1.668”, respectively. The registration accuracy of the multispectral images from the Coastal Zone Imager improved from 0.568 to 0.350 pixels. The time complexity is low owing to the single-layer linear traversal structure. The proposed method can achieve on-orbit near real-time processing and provide accurate attitude parameters for on-orbit geometric processing of optical satellite image data. Full article
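Extracting the amplitude and phase of a known-frequency jitter component from an attitude residual by least squares, the kind of fit the abstract describes, can be written compactly; the synthetic series below assumes the reported 0.187 Hz frequency and 0.468" roll amplitude purely for illustration.

```python
import numpy as np

def fit_sinusoid(t, y, freq):
    """Least-squares amplitude and phase of a known-frequency sinusoid plus offset."""
    A = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t),
                         np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)     # amplitude, phase

# Synthetic roll-angle residual: 0.468 arcsec jitter at 0.187 Hz plus noise.
t = np.arange(0, 60, 0.1)
y = 0.468 * np.sin(2 * np.pi * 0.187 * t + 0.3) + 0.05 * np.random.randn(t.size)
amp, phase = fit_sinusoid(t, y, 0.187)
print(f"estimated jitter amplitude: {amp:.3f} arcsec")
```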
(This article belongs to the Special Issue Near Real-Time Remote Sensing Data and Its Geoscience Applications)
