Search Results (56)

Search Parameters:
Keywords = millimeter-wave radar point cloud

22 pages, 4598 KB  
Article
A ST-ConvLSTM Network for 3D Human Keypoint Localization Using MmWave Radar
by Siyuan Wei, Huadong Wang, Yi Mo and Dongping Du
Sensors 2025, 25(18), 5857; https://doi.org/10.3390/s25185857 - 19 Sep 2025
Viewed by 239
Abstract
Accurate human keypoint localization in complex environments demands robust sensing and advanced modeling. In this article, we construct an ST-ConvLSTM network for 3D human keypoint estimation via millimeter-wave radar point clouds. The ST-ConvLSTM network processes multi-channel radar image inputs generated from multi-frame fused point clouds through parallel pathways. These pathways are engineered to extract rich spatiotemporal features from the sequential radar data. The extracted features are then fused and fed into fully connected layers for direct regression of 3D human keypoint coordinates. To achieve better network performance, an mmWave radar 3D human keypoint dataset (MRHKD) is built with a hybrid human motion annotation system (HMAS), in which a binocular camera measures the human keypoint coordinates and a 60 GHz 4T4R radar generates the radar point clouds. Experimental results demonstrate that the proposed ST-ConvLSTM, leveraging its ability to model temporal dependencies and spatial patterns in radar imagery, achieves MAEs of 0.1075 m, 0.0633 m, and 0.1180 m in the horizontal, vertical, and depth directions, respectively. This improvement underscores the model's enhanced posture recognition accuracy and keypoint localization capability in challenging conditions.
(This article belongs to the Special Issue Advances in Multichannel Radar Systems)
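As an illustration of the preprocessing the abstract describes, the sketch below accumulates several radar point-cloud frames into a multi-channel 2D image of the kind such a network could consume. It is not taken from the paper; the function name, grid resolution, and axis ranges are assumptions for the example.

```python
import numpy as np

def frames_to_image(frames, grid=(64, 64), x_range=(-2.0, 2.0), z_range=(0.0, 2.0)):
    """Turn a list of (N, 3) point-cloud frames into a (T, H, W) multi-channel
    occupancy image: one channel per frame, built on the horizontal/height plane."""
    channels = []
    for pts in frames:
        img, _, _ = np.histogram2d(pts[:, 0], pts[:, 2],
                                   bins=grid, range=[x_range, z_range])
        channels.append(img)
    return np.stack(channels, axis=0)
```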

21 pages, 7062 KB  
Article
Target Recognition Based on Millimeter-Wave-Sensed Point Cloud Using PointNet++ Model
by Xianxian He, Haiyu Ding, Rongyan Xi, Jing Dong, Jing Jin, Qixing Wang, Chunju Shao, Xiao Dong and Yunhua Zhang
Sensors 2025, 25(18), 5694; https://doi.org/10.3390/s25185694 - 12 Sep 2025
Viewed by 288
Abstract
During walking, the human lower limbs primarily support the body and drive forward motion, while the arms exhibit greater variability and flexibility without bearing such loads. In gait-based target recognition, collecting exhaustive arm-motion data for training is challenging, and unseen arm movements during testing may degrade performance. This paper investigates the impact of arm movements on radar-based gait recognition and proposes a gait recognition method that uses extracted lower limb motion data to mitigate interference from different arm motions. Gait data encompassing four kinds of common arm movements, including natural arm swings, object-holding states, and irregular arm motions, are collected from 11 volunteers with a millimeter-wave radar sensor. Using the extracted lower limb motion data, millimeter-wave point-cloud gait datasets covering diverse arm motions are generated. Three gait recognition experiments are conducted to compare our proposed method, which uses only lower limb data, with the existing method that uses all limb data, both based on the PointNet++ model. The experimental results show that our method consistently outperforms the existing method, with a 22.9% improvement in accuracy. Results also show that the proposed method can enhance feature extraction, accelerate convergence, and achieve higher accuracy, especially with limited samples, and the highest recognition accuracy reaches 96.9%. In addition, in unseen arm movement cases, our method significantly outperforms the existing method, demonstrating superior robustness and recognition accuracy.
(This article belongs to the Section Electronic Sensors)
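A minimal sketch of the lower-limb extraction idea follows; the height threshold, ground reference, and z-up axis convention are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def extract_lower_limbs(points, ground_z=0.0, hip_height=0.9):
    """Keep only radar points between the ground and an assumed hip height,
    discarding torso and arm returns before gait classification."""
    z = points[:, 2]
    keep = (z >= ground_z) & (z <= ground_z + hip_height)
    return points[keep]
```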

15 pages, 5996 KB  
Article
A High-Fidelity mmWave Radar Dataset for Privacy-Sensitive Human Pose Estimation
by Yuanzhi Su, Huiying (Cynthia) Hou, Haifeng Lan and Christina Zong-Hao Ma
Bioengineering 2025, 12(8), 891; https://doi.org/10.3390/bioengineering12080891 - 21 Aug 2025
Viewed by 849
Abstract
Human pose estimation (HPE) in privacy-sensitive environments such as healthcare facilities and smart homes demands non-visual sensing solutions. Millimeter-wave (mmWave) radar emerges as a promising alternative, yet its development is hindered by the scarcity of high-fidelity datasets with accurate annotations. This paper introduces mmFree-Pose, the first dedicated mmWave radar dataset specifically designed for privacy-preserving HPE. Collected through a novel visual-free framework that synchronizes mmWave radar with VDSuit-Full motion-capture sensors, our dataset covers 10+ actions, from basic gestures to complex falls. Each sample provides (i) raw 3D point clouds with Doppler velocity and intensity, (ii) precise 23-joint skeletal annotations, and (iii) full-body motion sequences in privacy-critical scenarios. Crucially, all data are captured without the use of visual sensors, ensuring fundamental privacy protection by design. Unlike conventional approaches that rely on RGB or depth cameras, our framework eliminates the risk of visual data leakage while maintaining high annotation fidelity. The dataset also incorporates scenarios involving occlusions, different viewing angles, and multiple subject variations to enhance generalization in real-world applications. By providing a high-quality and privacy-compliant dataset, mmFree-Pose bridges the gap between RF sensing and home monitoring applications, where safeguarding personal identity and behavior remains a critical concern.
(This article belongs to the Special Issue Biomechanics and Motion Analysis)
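To make the described sample structure concrete, here is a hypothetical container for one record of such a dataset; the field names and the root-joint-at-index-0 convention are assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MmWavePoseSample:
    points: np.ndarray    # (N, 5): x, y, z, Doppler velocity, intensity
    skeleton: np.ndarray  # (23, 3): annotated joint positions in metres
    action: str           # e.g. "fall", "wave"

    def centred_points(self) -> np.ndarray:
        """Radar points translated so the root joint (assumed index 0) is the origin."""
        return self.points[:, :3] - self.skeleton[0]
```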

15 pages, 2944 KB  
Article
Fruit Orchard Canopy Recognition and Extraction of Characteristics Based on Millimeter-Wave Radar
by Yinlong Jiang, Jieli Duan, Yang Li, Jiaxiang Yu, Zhou Yang and Xing Xu
Agriculture 2025, 15(13), 1342; https://doi.org/10.3390/agriculture15131342 - 22 Jun 2025
Viewed by 655
Abstract
Fruit orchard canopy recognition and characteristic extraction are key problems in orchard precision production. To this end, we built a fruit tree canopy detection platform based on millimeter-wave radar, verified the feasibility of millimeter-wave radar from the two perspectives of canopy recognition and canopy characteristic extraction, and explored the detection accuracy of millimeter-wave radar under spray conditions. For canopy recognition, an adaptive ellipsoid-model clustering algorithm with variable axes (E-DBSCAN), built on the classical DBSCAN algorithm, was proposed. The feasibility of the proposed algorithm was verified in a real orchard operation scene. The results show that the F1 score of the proposed algorithm was 96.7%, the precision rate was 93.5%, and the recall rate was 95.1%, effectively improving the recognition accuracy of the classical DBSCAN algorithm in multi-density point cloud clustering. For the extraction of canopy characteristics, the RANSAC algorithm and a coordinate method were used to extract crown width and plant height, respectively, and a point-cloud-density-adaptive Alpha_shape algorithm was proposed to extract volume. The number of point clouds, crown width, plant height, and volume under spray conditions and normal conditions were compared and analyzed. The average relative errors of crown width, plant height, and volume were 2.1%, 2.3%, and 4.2%, respectively, indicating that spraying had little effect on the extraction of canopy characteristics by millimeter-wave radar, which could inform spray-related decisions for precise applications.
(This article belongs to the Special Issue Agricultural Machinery and Technology for Fruit Orchard Management)
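One cheap way to approximate an ellipsoidal DBSCAN neighbourhood, shown purely as a sketch, is to rescale each axis by the ellipsoid semi-axis and run standard DBSCAN with a unit radius. The paper's E-DBSCAN adapts the axes; the fixed semi-axes and parameters below are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def ellipsoidal_dbscan(points, semi_axes=(0.3, 0.3, 0.8), min_samples=8):
    """Rescale each axis by the ellipsoid semi-axes so a unit sphere in the
    scaled space corresponds to the ellipsoid, then run plain DBSCAN."""
    scaled = points / np.asarray(semi_axes)
    return DBSCAN(eps=1.0, min_samples=min_samples).fit_predict(scaled)  # -1 = noise
```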

19 pages, 13655 KB  
Article
Indoor mmWave Radar Ghost Suppression: Trajectory-Guided Spatiotemporal Point Cloud Learning
by Ruizhi Liu, Zhenhang Qin, Xinghui Song, Lei Yang, Yue Lin and Hongtao Xu
Sensors 2025, 25(11), 3377; https://doi.org/10.3390/s25113377 - 27 May 2025
Viewed by 1332
Abstract
Millimeter-wave (mmWave) radar is increasingly used in smart environments for human detection due to its rich sensing capabilities and sensitivity to subtle movements. However, indoor multipath propagation causes severe ghost target issues, reducing radar reliability. To address this, we propose a trajectory-based ghost suppression method that integrates multi-target tracking with point cloud deep learning. Our approach consists of four key steps: (1) point cloud pre-segmentation, (2) inter-frame trajectory tracking, (3) trajectory feature aggregation, and (4) feature broadcasting, effectively combining spatiotemporal information with point-level features. Experiments on an indoor dataset demonstrate its superior performance compared to existing methods, achieving 93.5% accuracy and 98.2% AUROC. Ablation studies confirm the importance of each component, particularly the complementary benefits of pre-segmentation and trajectory processing.
(This article belongs to the Special Issue Radar Target Detection, Imaging and Recognition)
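Steps (3) and (4) can be pictured with a small sketch: per-track features are summarized and broadcast back onto every point of that track. The mean aggregator and array shapes below are assumptions, not the paper's design.

```python
import numpy as np

def broadcast_track_features(point_feats, track_ids):
    """Aggregate (mean) the features of all points sharing a track id and
    concatenate the per-track summary back onto each point."""
    agg = np.empty_like(point_feats)
    for tid in np.unique(track_ids):
        members = track_ids == tid
        agg[members] = point_feats[members].mean(axis=0)
    return np.concatenate([point_feats, agg], axis=1)
```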

18 pages, 15380 KB  
Article
A High-Precision Method for Warehouse Material Level Monitoring Using Millimeter-Wave Radar and 3D Surface Reconstruction
by Wenxin Zhang and Yi Gu
Sensors 2025, 25(9), 2716; https://doi.org/10.3390/s25092716 - 25 Apr 2025
Viewed by 563
Abstract
This study presents a high-precision warehouse material level monitoring method that integrates millimeter-wave radar with 3D surface reconstruction to address the limitations of LiDAR, which is highly susceptible to dust and haze interference in complex storage environments. The proposed method employs Chirp-Z Transform (CZT) super-resolution processing to enhance spectral resolution and measurement accuracy. To improve grain surface identification, an anomalous signal correction method based on angle–range feature fusion is introduced, mitigating errors caused by weak reflections and multipath effects. The point cloud data acquired by the radar undergo denoising, smoothing, and enhancement using statistical filtering, Moving Least Squares (MLS) smoothing, and bicubic spline interpolation to ensure data continuity and accuracy. A Poisson Surface Reconstruction algorithm is then applied to generate a continuous 3D model of the grain heap. The vector triple product method is used to estimate grain volume. Experimental results show a reconstruction volume error within 3%, demonstrating the method's accuracy, robustness, and adaptability. The reconstructed surface accurately represents grain heap geometry, making this approach well suited for real-time warehouse monitoring and providing reliable support for material balance and intelligent storage management.
(This article belongs to the Section Industrial Sensors)
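The spectral-zooming idea behind CZT processing can be sketched with a direct zoomed DFT over a narrow band: finer frequency (hence range) bins are computed only where they matter. The signal model, band edges, and bin count below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def zoom_dft(beat_signal, fs, f_lo, f_hi, n_bins=1024):
    """Evaluate the spectrum of an FMCW beat signal on a dense grid restricted
    to [f_lo, f_hi], giving finer range bins over the band of interest."""
    beat_signal = np.asarray(beat_signal)
    n = np.arange(len(beat_signal))
    freqs = np.linspace(f_lo, f_hi, n_bins)
    kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, kernel @ beat_signal
```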

22 pages, 849 KB  
Article
Moving-Least-Squares-Enhanced 3D Object Detection for 4D Millimeter-Wave Radar
by Weigang Shi, Panpan Tong and Xin Bi
Remote Sens. 2025, 17(8), 1465; https://doi.org/10.3390/rs17081465 - 20 Apr 2025
Viewed by 1626
Abstract
Object detection is a critical task in autonomous driving. Currently, 3D object detection methods for autonomous driving primarily rely on stereo cameras and LiDAR, which are susceptible to adverse weather conditions and low lighting, resulting in limited robustness. In contrast, automotive mmWave radar offers advantages such as resilience to complex weather, independence from lighting conditions, and low cost, making it a widely studied sensor type. Modern 4D millimeter-wave (mmWave) radar can provide spatial dimensions (x, y, z) as well as Doppler information, meeting the requirements for 3D object detection. However, the point cloud density of 4D mmWave radar is significantly lower than that of LiDAR at short distances, and existing point cloud object detection methods struggle to adapt to such sparse data. To address this challenge, we propose a novel 4D mmWave radar point cloud object detection framework. First, we employ moving least squares (MLS) to densify multi-frame fused point clouds, effectively increasing the point cloud density. Next, we construct a 3D object detection network based on point pillar encoding and utilize an SSD detection head for detection on feature maps. Finally, we validate our method on the VoD dataset. Experimental results demonstrate that our proposed framework outperforms comparative methods, and the MLS-based point cloud densification significantly enhances object detection performance.
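A crude MLS-flavoured densification can be sketched as follows; the neighbourhood size and the plane-projection rule are assumptions for illustration and do not reproduce the paper's MLS formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def mls_densify(points, k=8):
    """For each point, fit a least-squares plane to its k nearest neighbours and
    add one new point: the midpoint toward the neighbourhood centroid, projected
    onto that plane."""
    tree = cKDTree(points)
    new_pts = []
    for p in points:
        _, idx = tree.query(p, k=k)
        nbrs = points[idx]
        centroid = nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs - centroid)   # smallest singular vector = plane normal
        normal = vt[-1]
        mid = 0.5 * (p + centroid)
        new_pts.append(mid - np.dot(mid - centroid, normal) * normal)
    return np.vstack([points, np.asarray(new_pts)])
```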

24 pages, 12224 KB  
Article
Roadside Perception Applications Based on DCAM Fusion and Lightweight Millimeter-Wave Radar–Vision Integration
by Xiaoyu Yu, Tao Hu and Haozhen Zhu
Electronics 2025, 14(8), 1576; https://doi.org/10.3390/electronics14081576 - 13 Apr 2025
Cited by 1 | Viewed by 814
Abstract
With the advancement in intelligent transportation systems, single-sensor perception solutions face inherent limitations. To address the constraints of monocular vision detection, this study presents a vehicle road detection system that integrates millimeter-wave radar and visual information. By generating mask maps from millimeter-wave radar point clouds, radar data transition from a global assistance role to localized guidance, identifying vehicle target positions within RGB images. These mask maps, along with RGB images, are processed by a Dual Cross-Attention Module (DCAM), where the fused features are fed into an enhanced YOLOv5 network, improving target localization accuracy. The proposed dual-input DCAM enables dynamic feature fusion, allowing the model to adjust its reliance on visual and radar data according to environmental conditions. To optimize the network architecture, ShuffleNetv2 replaces the YOLOv5 Backbone, while the Ghost Module is incorporated into the Neck, creating a lightweight design. Pruning techniques are applied to reduce model complexity, making it suitable for embedded applications and real-time detection scenarios. The experimental results demonstrate that this fusion scheme effectively improves vehicle detection accuracy and robustness compared to YOLOv5, with accuracy increasing from 59.4% to 67.2%. The number of parameters is reduced from 7.05 M to 2.52 M, providing a precise and reliable solution for intelligent transportation and roadside perception.
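The mask-map idea can be illustrated by projecting radar detections into the image with a known 3x4 projection matrix and painting a patch around each hit. The square patch, its size, and the assumption that points lie in front of the camera are illustrative choices, not the paper's mask-generation procedure.

```python
import numpy as np

def radar_mask_map(points_xyz, projection, image_shape, radius=12):
    """Project radar points with a 3x4 camera projection matrix and mark a
    square patch around every in-image detection to guide the vision branch."""
    h, w = image_shape
    mask = np.zeros((h, w), dtype=np.uint8)
    homog = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    uvw = (projection @ homog.T).T          # assumes points in front of the camera
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    for u, v in uv:
        if 0 <= u < w and 0 <= v < h:
            mask[max(v - radius, 0):v + radius, max(u - radius, 0):u + radius] = 255
    return mask
```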

15 pages, 1281 KB  
Article
Robust Human Tracking Using a 3D LiDAR and Point Cloud Projection for Human-Following Robots
by Sora Kitamoto, Yutaka Hiroi, Kenzaburo Miyawaki and Akinori Ito
Sensors 2025, 25(6), 1754; https://doi.org/10.3390/s25061754 - 12 Mar 2025
Viewed by 1795
Abstract
Human tracking is a fundamental technology for mobile robots that work with humans. Various devices are used to observe humans, such as cameras, RGB-D sensors, millimeter-wave radars, and laser range finders (LRFs). Typical LRF measurements observe only the surroundings on a particular horizontal plane. Human recognition using an LRF has a low computational load and is suitable for mobile robots. However, it is vulnerable to variations in human height, potentially leading to detection failures for individuals taller or shorter than the standard height. This work aims to develop a method that is robust to height differences among humans using a 3D LiDAR. We observed the environment using a 3D LiDAR and projected the point cloud onto a single horizontal plane to apply a human-tracking method for 2D LRFs. We investigated the optimal height range of the point clouds for projection and found that using the top 30% of the measured person's point cloud provided the most stable tracking. The results of the path-following experiments revealed that the proposed method reduced the proportion of outlier points compared to projecting all the points (from 3.63% to 1.75%). As a result, the proposed method was effective in achieving robust human following.
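The projection step reduces to a few lines, as sketched below; the 30% fraction comes from the abstract, while the z-up axis convention and function name are assumptions.

```python
import numpy as np

def project_top_fraction(points, fraction=0.3):
    """Keep the highest `fraction` of points by z and drop the height axis,
    yielding a 2D scan-like slice for an LRF-style tracker."""
    z = points[:, 2]
    cutoff = np.quantile(z, 1.0 - fraction)
    return points[z >= cutoff][:, :2]
```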

23 pages, 5392 KB  
Article
A Sliding Window-Based CNN-BiGRU Approach for Human Skeletal Pose Estimation Using mmWave Radar
by Yuquan Luo, Yuqiang He, Yaxin Li, Huaiqiang Liu, Jun Wang and Fei Gao
Sensors 2025, 25(4), 1070; https://doi.org/10.3390/s25041070 - 11 Feb 2025
Cited by 2 | Viewed by 1527
Abstract
In this paper, we present a low-cost, low-power millimeter-wave (mmWave) skeletal joint localization system. High-quality point cloud data are generated using the self-developed BHYY_MMW6044 59–64 GHz mmWave radar device. A sliding window mechanism is introduced to extend the single-frame point cloud into multi-frame time-series data, enabling the full utilization of temporal information. This is combined with convolutional neural networks (CNNs) for spatial feature extraction and a bidirectional gated recurrent unit (BiGRU) for temporal modeling. The proposed spatio-temporal information fusion framework for multi-frame point cloud data fully exploits spatio-temporal features, effectively alleviates the sparsity issue of radar point clouds, and significantly enhances the accuracy and robustness of pose estimation. Experimental results demonstrate that the proposed system accurately detects 25 skeletal joints, particularly improving the positioning accuracy of fine joints, such as the wrist, thumb, and fingertip, highlighting its potential for widespread application in human–computer interaction, intelligent monitoring, and motion analysis.
(This article belongs to the Section Radar Sensors)
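The sliding-window step itself is simple to sketch; the window length and stride below are illustrative assumptions rather than the system's settings.

```python
import numpy as np

def sliding_windows(frames, window=5, stride=1):
    """Stack consecutive per-frame arrays into overlapping windows so a
    temporal model sees multi-frame context instead of single frames."""
    frames = np.asarray(frames)                    # shape (T, ...)
    starts = range(0, len(frames) - window + 1, stride)
    return np.stack([frames[i:i + window] for i in starts])
```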

21 pages, 6413 KB  
Article
Targetless Radar–Camera Extrinsic Parameter Calibration Using Track-to-Track Association
by Xinyu Liu, Zhenmiao Deng and Gui Zhang
Sensors 2025, 25(3), 949; https://doi.org/10.3390/s25030949 - 5 Feb 2025
Viewed by 2911
Abstract
One of the challenges in calibrating millimeter-wave radar and camera lies in the sparse semantic information of the radar point cloud, making it hard to extract environment features corresponding to the images. To overcome this problem, we propose a track association algorithm for heterogeneous sensors to achieve targetless calibration between the radar and camera. Our algorithm extracts corresponding points from millimeter-wave radar and image coordinate systems by considering the association of tracks from different sensors, without any explicit target or prior for the extrinsic parameter. Then, perspective-n-point (PnP) and nonlinear optimization algorithms are applied to obtain the extrinsic parameter. In an outdoor experiment, our algorithm achieved a track association accuracy of 96.43% and an average reprojection error of 2.6649 pixels. On the CARRADA dataset, our calibration method yielded a reprojection error of 3.1613 pixels, an average rotation error of 0.8141°, and an average translation error of 0.0754 m. Furthermore, robustness tests demonstrated the effectiveness of our calibration algorithm in the presence of noise.
(This article belongs to the Section Remote Sensors)
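Once corresponding radar and image points are available, the final PnP stage can be sketched with OpenCV's generic solver. This is standard library usage under assumed inputs, not the authors' code; at least four correspondences are required.

```python
import cv2
import numpy as np

def estimate_extrinsics(radar_pts, image_pts, camera_matrix):
    """Solve for rotation and translation from matched 3D radar points and
    2D pixel coordinates, then convert the rotation vector to a matrix."""
    ok, rvec, tvec = cv2.solvePnP(radar_pts.astype(np.float64),
                                  image_pts.astype(np.float64),
                                  camera_matrix, None)
    R, _ = cv2.Rodrigues(rvec)
    return ok, R, tvec
```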

19 pages, 4791 KB  
Article
Millimeter-Wave Radar Point Cloud Gesture Recognition Based on Multiscale Feature Extraction
by Wei Li, Zhiqi Guo and Zhuangzhi Han
Electronics 2025, 14(2), 371; https://doi.org/10.3390/electronics14020371 - 18 Jan 2025
Cited by 2 | Viewed by 1969
Abstract
This paper proposes a gesture recognition method that leverages millimeter-wave radar point clouds, primarily for identifying six basic human gestures. First, the raw radar signals collected by the MIMO millimeter-wave radar are converted into 3D point cloud sequences using a microcontroller integrated into the radar's baseband processor. Next, building on the SequentialPointNet network, a multiscale feature extraction module is proposed that enhances the network's ability to extract local and global features through convolutional layers at different scales, compensating for the limited feature understanding of single-scale convolution kernels. Moreover, the CBAM in the network is replaced with GAM, which more precisely models global contextual information and thereby increases the network's focus on global features. A separable MLP structure is also introduced: separable MLP operations extract local point cloud features and neighborhood features separately and then fuse them, significantly improving the model's performance. The effectiveness of the proposed method is confirmed through experiments: it achieves 99.5% accuracy in recognizing the six basic human gestures, effectively distinguishing between gesture categories and confirming the potential of millimeter-wave radar 3D point clouds for gesture recognition.
(This article belongs to the Special Issue Machine Learning for Radar and Communication Signal Processing)
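The multiscale idea can be pictured as parallel convolutions with different kernel sizes whose outputs are concatenated. The module below is a generic sketch with assumed channel counts and kernel sizes, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiScaleConv1d(nn.Module):
    """Parallel 1D convolutions over per-point features at several kernel
    sizes, concatenated to mix local and wider context."""
    def __init__(self, in_ch=4, out_ch=32, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes)

    def forward(self, x):                    # x: (batch, in_ch, num_points)
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
```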

18 pages, 28462 KB  
Article
Optimized Airborne Millimeter-Wave InSAR for Complex Mountain Terrain Mapping
by Futai Xie, Wei Wang, Xiaopeng Sun, Si Xie and Lideng Wei
Sensors 2025, 25(2), 424; https://doi.org/10.3390/s25020424 - 13 Jan 2025
Cited by 1 | Viewed by 1088
Abstract
The efficient acquisition and processing of large-scale terrain data has always been a focal point in the field of photogrammetry. In complex mountainous regions in particular, cloud cover, terrain, and airspace constraints leave an extremely limited window for data collection. This paper investigates the use of airborne millimeter-wave InSAR systems for efficient terrain mapping under such challenging conditions. The system has significant potential for technical application because it is minimally affected by cloud cover and can acquire data in all weather, day and night. Focusing on the key factors in airborne InSAR data acquisition, this study explores advanced route planning and ground control measurement techniques. Leveraging radar observation geometry and global SRTM DEM data, we simulate layover and shadow effects to formulate an optimal flight path design. Additionally, the study examines methods to reduce synchronous ground control points in mountainous areas, thereby enhancing the rapid acquisition of terrain data. The results demonstrate that this approach significantly reduces field work and aviation costs while ensuring the accuracy of the mountain surface data generated by airborne millimeter-wave InSAR, offering substantial practical application value.
(This article belongs to the Section Remote Sensors)
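The shadow part of such a simulation can be sketched for a single ground-range DEM profile: a cell is shadowed when its depression tangent toward the sensor exceeds the smallest tangent of any closer cell. The sensor geometry, units, and function name are assumptions for the example.

```python
import numpy as np

def shadow_mask(profile_z, cell_size, sensor_height):
    """Flag radar-shadowed cells along one ground-range profile of a DEM,
    assuming the sensor sits above the first cell at the given height."""
    profile_z = np.asarray(profile_z, dtype=float)
    x = cell_size * np.arange(1, len(profile_z) + 1)
    tan_depression = (sensor_height - profile_z) / x
    visible_limit = np.minimum.accumulate(tan_depression)
    return tan_depression > visible_limit      # True where the line of sight is blocked
```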

18 pages, 4797 KB  
Article
A Fusion Localization System for Security Robots Based on Millimeter Wave Radar and Inertial Sensors
by Rui Zheng, Geng Sun and Fang Dong Li
Sensors 2024, 24(23), 7551; https://doi.org/10.3390/s24237551 - 26 Nov 2024
Cited by 1 | Viewed by 1133
Abstract
In smoggy and dusty environments, vision- and laser-based localization methods cannot be used effectively to control the movement of a robot. Autonomous operation of a security robot can be achieved in such environments by using millimeter-wave (MMW) radar for the localization system. In this study, an approximate-center method for sparse point clouds is proposed, and a security robot localization system based on millimeter-wave radar is constructed. To improve localization accuracy, inertial localization of the robot is integrated with MMW radar. Based on the concept of inertial localization, the state equation for the motion of the robot is deduced. According to the principle of MMW localization, the measurement equation is derived, and a kinematics model of the robot is constructed. Further, by applying the Kalman filtering algorithm, a fusion localization system based on MMW radar and inertial localization is proposed. The experimental results show that, with iterations of the filtering algorithm, the gain matrix converges gradually and the error of the fusion localization system decreases, leading to stable operation of the robot. Compared to the localization system with only MMW radar, the average localization error is reduced from approximately 11 cm to 8 cm, indicating that the fusion localization system has better localization accuracy.
(This article belongs to the Section Navigation and Positioning)
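A generic constant-velocity Kalman step shows the shape of such a fusion: the inertial input drives the prediction and the radar position fix corrects it. The state layout and noise values are illustrative assumptions, not the paper's model.

```python
import numpy as np

def kalman_step(x, P, u, z, dt, q=0.01, r=0.05):
    """One predict/update cycle for state x = [px, py, vx, vy] with inertial
    acceleration u = [ax, ay] and radar position measurement z = [px, py]."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt
    B = np.array([[0.5 * dt**2, 0], [0, 0.5 * dt**2], [dt, 0], [0, dt]])
    H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])
    Q, R = q * np.eye(4), r * np.eye(2)

    x = F @ x + B @ u                                   # predict from inertial data
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    x = x + K @ (z - H @ x)                             # correct with radar fix
    P = (np.eye(4) - K @ H) @ P
    return x, P
```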

13 pages, 46604 KB  
Article
Human Activity Recognition Based on Point Clouds from Millimeter-Wave Radar
by Seungchan Lim, Chaewoon Park, Seongjoo Lee and Yunho Jung
Appl. Sci. 2024, 14(22), 10764; https://doi.org/10.3390/app142210764 - 20 Nov 2024
Cited by 1 | Viewed by 2207
Abstract
Human activity recognition (HAR) technology is closely tied to human safety and convenience, so it must infer human activity accurately. Furthermore, it must consume little power while continuously detecting human activity and be inexpensive to operate. For this purpose, a low-power and lightweight design of the HAR system is essential. In this paper, we propose a low-power and lightweight HAR system using point-cloud data collected by radar. The proposed HAR system uses a pillar feature encoder that converts 3D point-cloud data into a 2D image and a classification network based on depth-wise separable convolution for lightweighting. The proposed classification network achieved an accuracy of 95.54%, with 25.77 M multiply–accumulate operations and 22.28 K network parameters in a 32-bit floating-point format. With 4-bit quantization, the network achieved 94.79% accuracy while reducing memory usage to 12.5% of that of the 32-bit format. In addition, we implemented a lightweight HAR system optimized for low-power operation on a heterogeneous computing platform, a Zynq UltraScale+ ZCU104 device, through hardware–software implementation. One frame of HAR took 2.43 ms to execute on the device, and the system consumed 3.479 W of power when running.
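The pillar-style conversion from 3D points to a 2D pseudo-image can be sketched as a per-cell maximum over point heights. The grid size, ranges, and max-height feature are assumptions for illustration; the actual encoder learns richer per-pillar features.

```python
import numpy as np

def pillar_encode(points, grid=(32, 32), x_range=(-3.0, 3.0), y_range=(-3.0, 3.0)):
    """Collapse (N, 3) points into a 2D image by keeping, per bird's-eye-view
    cell, the maximum point height; empty cells stay at zero."""
    img = np.zeros(grid, dtype=np.float32)
    ix = (points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * grid[0]
    iy = (points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * grid[1]
    ix = ix.astype(int).clip(0, grid[0] - 1)
    iy = iy.astype(int).clip(0, grid[1] - 1)
    np.maximum.at(img, (ix, iy), points[:, 2])
    return img
```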
