
Advanced Sensing and Signal Processing for Radar Imaging, Target Recognition and Object Detection

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Radar Sensors".

Deadline for manuscript submissions: closed (25 May 2024) | Viewed by 8795

Special Issue Editor

Dr. Li Wang
Key Laboratory of Electronic Information Countermeasure and Simulation Technology of Ministry of Education, Xidian University, Xi’an 710071, China
Interests: machine learning; signal processing; SAR target recognition; SAR image classification; deep learning

Special Issue Information

Dear Colleagues,

Radar plays an important role in information acquisition and has been widely used in military and civilian areas. However, as the requirements placed on radar systems grow, existing algorithms have difficulty meeting them. This Special Issue aims to explore the application of advanced technology to radar-based object detection, target recognition, and imaging, and seeks novel ideas to improve radar performance. Articles on theoretical, application-oriented, and experimental studies with an emphasis on novelty in radar signal processing are considered. The aim of this initiative is to bring together researchers working in the fields of radar and remote sensing, and to provide novel insights into the application of advanced signal processing techniques to radar systems.

Dr. Li Wang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (12 papers)


Research

25 pages, 15276 KiB  
Article
PP-ISEA: An Efficient Algorithm for High-Resolution Three-Dimensional Geometry Reconstruction of Space Targets Using Limited Inverse Synthetic Aperture Radar Images
by Rundong Wang, Weigang Zhu, Chenxuan Li, Bakun Zhu and Hongfeng Pang
Sensors 2024, 24(11), 3550; https://doi.org/10.3390/s24113550 - 31 May 2024
Abstract
As the variety of space targets expands, two-dimensional (2D) ISAR images prove insufficient for target recognition, necessitating the extraction of three-dimensional (3D) information. The 3D geometry reconstruction method utilizing energy accumulation of ISAR image sequences (ISEA) facilitates superior reconstruction while circumventing the laborious steps associated with factorization methods. Nevertheless, ISEA neglects valid information, and therefore requires a large number of images and long operation times. This paper introduces a partitioned parallel 3D reconstruction method utilizing sorted-energy semi-accumulation with ISAR image sequences (PP-ISEA) to address these limitations. The PP-ISEA incorporates a two-step search pattern, coarse then fine, that enhances search efficiency and conserves computational resources. It introduces a novel objective function, ‘sorted-energy semi-accumulation’, to discern genuine scatterers from spurious ones, and establishes a redundant point exclusion module. Experiments on a scatterer model and a simulated electromagnetic model demonstrate that the PP-ISEA reduces the minimum image requirement from ten to four for high-quality scatterer model reconstruction, offering superior reconstruction quality in less time.
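The energy-accumulation idea behind ISEA-style reconstruction can be illustrated with a toy numpy sketch (not the authors' implementation; the images, projection matrices, and scatterer positions are invented): a candidate 3D scatterer is scored by summing image intensity at its projected pixel across the sequence, so genuine scatterers accumulate high energy while spurious ones do not.

```python
import numpy as np

def accumulate_energy(candidate, images, projections):
    """Score a candidate 3D scatterer by summing image energy at its
    projected pixel across an ISAR image sequence (toy ISEA-style score)."""
    total = 0.0
    for img, P in zip(images, projections):
        u, v = (P @ candidate)[:2]          # orthographic projection to pixel coords
        r, c = int(round(u)), int(round(v))
        if 0 <= r < img.shape[0] and 0 <= c < img.shape[1]:
            total += img[r, c]
    return total

# Toy example: two 8x8 "images" with one bright scatterer, identity projections
images = [np.zeros((8, 8)) for _ in range(2)]
for img in images:
    img[3, 4] = 10.0
P = np.eye(3)  # maps (x, y, z) -> (x, y)
true_score = accumulate_energy(np.array([3.0, 4.0, 0.0]), images, [P, P])
false_score = accumulate_energy(np.array([1.0, 1.0, 0.0]), images, [P, P])
```

A genuine scatterer location accumulates the full energy of the sequence, while a spurious candidate scores near zero, which is the criterion PP-ISEA refines with its sorted-energy semi-accumulation.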

15 pages, 6238 KiB  
Article
Vehicle Occupant Detection Based on MM-Wave Radar
by Wei Li, Wenxu Wang and Hongzhi Wang
Sensors 2024, 24(11), 3334; https://doi.org/10.3390/s24113334 - 23 May 2024
Viewed by 243
Abstract
With the continuous development of automotive intelligence, vehicle occupant detection technology has received increasing attention. Despite various types of research in this field, a simple, reliable, and highly private detection method is lacking. This paper proposes a method for vehicle occupant detection using millimeter-wave radar. Specifically, the paper outlines a system design for vehicle occupant detection based on millimeter-wave radar. By collecting the raw signals of an FMCW radar and applying Range-FFT and DoA estimation algorithms, a range–azimuth heatmap is generated, visually depicting the current status of people inside the vehicle. Furthermore, utilizing the collected range–azimuth heatmaps of passengers, this paper integrates the Faster R-CNN deep learning network with radar signal processing to identify passenger information. Finally, to test the performance of the proposed detection method, experimental verification was conducted in a car and the results were compared with those of traditional machine learning algorithms. The findings indicate that the proposed method achieves higher accuracy, reaching approximately 99%.
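The Range-FFT plus DoA step that produces the range–azimuth heatmap can be sketched as follows (a minimal single-chirp numpy illustration with invented parameters, not the paper's processing chain): the sample-axis FFT resolves range, and an FFT across receive antennas gives a basic DoA estimate.

```python
import numpy as np

# Toy FMCW data for one chirp: (num_rx_antennas, num_samples). A target is a
# complex sinusoid whose fast-time frequency encodes range and whose phase
# progression across RX antennas encodes azimuth.
num_rx, num_samples = 4, 64
range_bin, spatial_freq = 10, 0.2           # hypothetical target parameters
n = np.arange(num_samples)
cube = np.array([
    np.exp(2j * np.pi * (range_bin * n / num_samples + spatial_freq * rx))
    for rx in range(num_rx)
])

# Step 1: Range-FFT along fast time -> range bins per antenna
range_profile = np.fft.fft(cube, axis=1)

# Step 2: FFT across antennas (a basic DoA estimate) -> range-azimuth heatmap
heatmap = np.abs(np.fft.fft(range_profile, n=32, axis=0))

peak = np.unravel_index(np.argmax(heatmap), heatmap.shape)
```

The heatmap peak lands in the target's range bin, and its azimuth index reflects the spatial frequency across the array; such heatmaps are what the Faster R-CNN stage then classifies.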

19 pages, 8691 KiB  
Article
Pedestrian Pose Recognition Based on Frequency-Modulated Continuous-Wave Radar with Meta-Learning
by Jiajia Shi, Qiang Zhang, Quan Shi, Liu Chu and Robin Braun
Sensors 2024, 24(9), 2932; https://doi.org/10.3390/s24092932 - 5 May 2024
Viewed by 524
Abstract
With the continuous advancement of autonomous driving and monitoring technologies, there is increasing attention on non-intrusive target monitoring and recognition. This paper proposes an ArcFace SE-attention model-agnostic meta-learning approach (AS-MAML) by integrating attention mechanisms into residual networks for pedestrian gait recognition using frequency-modulated continuous-wave (FMCW) millimeter-wave radar through meta-learning. We enhance the feature extraction capability of the base network using channel attention mechanisms and integrate the additive angular margin loss function (ArcFace loss) into the inner loop of MAML to constrain inner loop optimization and improve radar discrimination. This network is then used to classify small-sample micro-Doppler images obtained from millimeter-wave radar, which serve as the data source for pose recognition. Experimental tests were conducted on pose estimation and image classification tasks. The results demonstrate significant detection and recognition performance, with an accuracy of 94.5%, accompanied by a 95% confidence interval. Additionally, on the open-source dataset DIAT-μRadHAR, which is specially processed to increase classification difficulty, the network achieves a classification accuracy of 85.9%.
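The additive angular margin (ArcFace) idea used to constrain the MAML inner loop can be sketched in numpy (an illustrative toy, not the paper's model; the embeddings, weights, margin, and scale are hypothetical): the margin is added to the angle between each embedding and its true-class weight vector, which lowers the true-class logit and forces tighter clusters.

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, margin=0.5, scale=16.0):
    """ArcFace-style logits: add an angular margin to the true class
    before rescaling, penalizing loosely clustered embeddings."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = np.clip(emb @ w, -1.0, 1.0)                # cosine similarities
    theta = np.arccos(cos)
    theta[np.arange(len(labels)), labels] += margin  # margin on true class only
    return scale * np.cos(theta)

emb = np.array([[1.0, 0.0], [0.0, 1.0]])   # two perfectly aligned embeddings
w = np.eye(2)                              # class weight vectors as columns
plain = 16.0 * (emb @ w)                   # ordinary scaled cosine logits
margined = arcface_logits(emb, w, labels=np.array([0, 1]))
```

Even a perfectly aligned embedding gets a reduced true-class logit under the margin, so the softmax loss stays informative during the few-shot inner-loop updates.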

18 pages, 12985 KiB  
Article
An Aerial Image Detection Algorithm Based on Improved YOLOv5
by Dan Shan, Zhi Yang, Xiaofeng Wang, Xiangdong Meng and Guangwei Zhang
Sensors 2024, 24(8), 2619; https://doi.org/10.3390/s24082619 - 19 Apr 2024
Viewed by 617
Abstract
To enhance aerial image detection in complex environments characterized by multiple small targets and mutual occlusion, we propose an aerial target detection algorithm based on an improved version of YOLOv5 in this paper. Firstly, we employ an improved Mosaic algorithm to address redundant boundaries arising from varying image scales and to augment the training sample size, thereby enhancing detection accuracy. Secondly, we integrate the constructed hybrid attention module into the backbone network to enhance the model’s capability in extracting pertinent feature information. Subsequently, we incorporate feature fusion layer 7 and P2 fusion into the neck network, leading to a notable enhancement in the model’s capability to detect small targets. Finally, we replace the original PAN + FPN network structure with the optimized BiFPN (Bidirectional Feature Pyramid Network) to enable the model to preserve deeper semantic information, thereby enhancing detection capabilities for dense objects. Experimental results indicate a substantial improvement in both the detection accuracy and speed of the enhanced algorithm compared to its original version. It is noteworthy that the enhanced algorithm exhibits a markedly improved detection performance for aerial images, particularly under real-time conditions.
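The weighted fusion that distinguishes BiFPN from a plain FPN can be sketched as follows (a minimal numpy illustration of BiFPN's fast normalized fusion; the feature maps and weights are invented): each input feature map gets a learnable non-negative weight, normalized so the fused map is a weighted average.

```python
import numpy as np

def bifpn_fuse(features, weights, eps=1e-4):
    """Fast normalized fusion used in BiFPN: a weighted average of
    same-resolution feature maps with learnable non-negative weights."""
    w = np.maximum(weights, 0.0)     # ReLU keeps weights non-negative
    w = w / (w.sum() + eps)          # normalize so weights sum to ~1
    return sum(wi * f for wi, f in zip(w, features))

f1 = np.ones((4, 4))                 # e.g. an upsampled deeper feature map
f2 = np.full((4, 4), 3.0)            # a same-resolution lateral feature map
fused = bifpn_fuse([f1, f2], np.array([1.0, 1.0]))
```

With equal weights the fused map is close to the plain average; during training the weights shift so more informative scales dominate, which is what helps dense-object detection.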

30 pages, 10540 KiB  
Article
An On-Site InSAR Terrain Imaging Method with Unmanned Aerial Vehicles
by Hsu-Yueh Chuang and Jean-Fu Kiang
Sensors 2024, 24(7), 2287; https://doi.org/10.3390/s24072287 - 3 Apr 2024
Viewed by 535
Abstract
An on-site InSAR imaging method carried out with unmanned aerial vehicles (UAVs) is proposed to monitor terrain changes with high spatial resolution, short revisit time, and high flexibility. To survey and explore a specific area of interest in real time, a combination of a least-squares phase unwrapping technique and a mean filter for speckle removal proves effective in reconstructing the terrain profile. The proposed method is validated by simulations on three scenarios scaled down from the high-resolution digital elevation models of the US Geological Survey (USGS) 3D Elevation Program (3DEP) datasets. The efficacy of the proposed method and its efficiency in CPU time are validated through comparison with several state-of-the-art techniques.
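The least-squares phase unwrapping step can be illustrated in one dimension, where integrating wrapped phase gradients recovers the absolute phase (a toy numpy sketch with an invented phase ramp, not the full 2D solver used in the paper):

```python
import numpy as np

def unwrap_ls_1d(wrapped):
    """1D analogue of least-squares phase unwrapping: wrap the phase
    differences back into (-pi, pi], then integrate them."""
    d = np.diff(wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi   # wrapped phase gradient
    return np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(d)])

true_phase = np.linspace(0.0, 12.0, 200)          # ramp spanning several 2*pi
wrapped = np.angle(np.exp(1j * true_phase))       # interferometric phase, wrapped
recovered = unwrap_ls_1d(wrapped)
```

In 2D the same principle becomes a least-squares problem over both gradient directions (typically solved with DCTs), and the unwrapped phase maps to terrain height through the interferometric baseline geometry.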

18 pages, 8443 KiB  
Article
TRANS-CNN-Based Gesture Recognition for mmWave Radar
by Huafeng Zhang, Kang Liu, Yuanhui Zhang and Jihong Lin
Sensors 2024, 24(6), 1800; https://doi.org/10.3390/s24061800 - 11 Mar 2024
Viewed by 862
Abstract
In order to improve the real-time performance of gesture recognition from micro-Doppler maps of mmWave radar, a point-cloud-based gesture recognition method for mmWave radar is proposed in this paper. Two steps are carried out for mmWave radar-based gesture recognition. The first step is to estimate the point cloud of the gestures by 3D-FFT and peak grouping. The second step is to train the TRANS-CNN model by combining multi-head self-attention and a 1D convolutional network so as to extract features in the point cloud data at a deeper level and categorize the gestures. In the experiments, the TI mmWave radar sensor IWR1642 is used as a benchmark to evaluate the feasibility of the proposed approach. The results show that the accuracy of gesture recognition reaches 98.5%. To prove the effectiveness of our approach, a simple 2Tx2Rx radar sensor was developed in our lab, with which the recognition accuracy reaches 97.1%. The results show that our proposed gesture recognition approach achieves the best real-time performance with limited training data in comparison with existing methods.
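The first step, estimating targets from the radar cube by FFT and peak grouping, can be sketched as follows (a 2D range-Doppler toy in numpy with invented bins; the paper's 3D-FFT adds an FFT across antennas for the angle dimension):

```python
import numpy as np

# Toy radar data for one antenna: (chirps, samples). A moving target shows up
# as a 2D sinusoid: sample frequency ~ range, chirp frequency ~ Doppler.
chirps, samples = 32, 64
range_bin, doppler_bin = 12, 5               # hypothetical target bins
c, s = np.meshgrid(np.arange(chirps), np.arange(samples), indexing="ij")
cube = np.exp(2j * np.pi * (range_bin * s / samples + doppler_bin * c / chirps))

# 2D FFT (fast time + slow time) -> range-Doppler map
rd_map = np.abs(np.fft.fft2(cube))

# Simple peak grouping: keep cells above a fraction of the global maximum
peaks = np.argwhere(rd_map > 0.5 * rd_map.max())
```

Each retained peak becomes one point of the gesture point cloud (range, Doppler, and, with the angle FFT, azimuth), which is then fed to the TRANS-CNN classifier.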

18 pages, 6014 KiB  
Article
Real-Time Moving Object Tracking on Smartphone Using Cradle Head Servo Motor
by Neunggyu Han, Sun Joo Ryu and Yunyoung Nam
Sensors 2024, 24(4), 1265; https://doi.org/10.3390/s24041265 - 16 Feb 2024
Viewed by 885
Abstract
The increasing demand for artificially intelligent smartphone cradles has prompted the need for real-time moving object detection. Real-time moving object tracking requires the development of algorithms for instant tracking analysis without delays. In particular, developing a system for smartphones should consider different operating systems and software development environments. Issues in current real-time moving object tracking systems arise when small and large objects coexist, causing the algorithm to prioritize larger objects or struggle with consistent tracking across varying scales. Fast object motion further complicates accurate tracking and leads to potential errors and misidentification. To address these issues, we propose a deep learning-based real-time moving object tracking system which provides an accuracy priority mode and a speed priority mode. The accuracy priority mode achieves a balance between the high accuracy and speed required in the smartphone environment. The speed priority mode optimizes the speed of inference to track fast-moving objects. The accuracy priority mode incorporates CSPNet with ResNet to maintain high accuracy, whereas the speed priority mode simplifies the complexity of the convolutional layer while maintaining accuracy. In our experiments, we evaluated both modes in terms of accuracy and speed.

20 pages, 39748 KiB  
Article
Echo-Level SAR Imaging Simulation of Wakes Excited by a Submerged Body
by Yan Jia, Shuyi Liu, Yongqing Liu, Limin Zhai, Yifan Gong and Xiangkun Zhang
Sensors 2024, 24(4), 1094; https://doi.org/10.3390/s24041094 - 7 Feb 2024
Viewed by 547
Abstract
The paper introduces a numerical simulation method for Synthetic Aperture Radar (SAR) imaging of submerged-body wakes by integrating hydrodynamics, electromagnetic scattering, and SAR imaging simulation. This work is helpful for better understanding SAR images of submerged-body wakes. The hydrodynamic model consists of two sets of ocean dynamics closely related to SAR imaging, namely the wake of the submerged body and wind waves. The wake is simulated using computational fluid dynamics (CFD) numerical methods. Furthermore, we compared and computed the electromagnetic scattering characteristics of wakes under various navigation parameters and sea surface conditions. Following that, based on the operational principles and imaging theory of SAR, we established the SAR raw echo signal of the wake. Employing a Range-Doppler (RD) algorithm, we generated simulated SAR images of the wake. The results indicate that CFD numerical methods enable the simulation of wake characteristics generated by the motion of a submerged body at different velocities. The backscattering features of wakes are closely associated with the relative orientation between the wake and the radar line of sight. Under specific wind speeds, the wake becomes masked by the sea surface background, resulting in less discernible wake characteristics in SAR images. This suggests that at lower submerged-body speeds or under specific wind conditions, the detectability of the wake in SAR images diminishes significantly.
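The first stage of a Range-Doppler (RD) processor, range compression by matched filtering with the transmitted chirp, can be sketched as follows (a toy numpy example with an invented LFM chirp and a single point scatterer, not the paper's full echo simulator):

```python
import numpy as np

def range_compress(echo, chirp):
    """Range compression, the first step of a Range-Doppler SAR processor:
    correlate the received echo with the transmitted chirp replica,
    implemented as frequency-domain matched filtering."""
    n = len(echo)
    return np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(chirp, n)))

# Hypothetical LFM chirp and an echo with one point scatterer delayed by 40 samples
t = np.arange(128)
chirp = np.exp(1j * np.pi * 0.002 * t**2)
echo = np.zeros(512, dtype=complex)
echo[40:40 + 128] = chirp                  # delayed copy of the chirp
compressed = np.abs(range_compress(echo, chirp))
peak_delay = int(np.argmax(compressed))
```

The compressed output peaks at the scatterer's delay; a full RD processor repeats this per pulse and then applies an azimuth-direction matched filter in the Doppler domain.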

24 pages, 14070 KiB  
Article
3D Road Boundary Extraction Based on Machine Learning Strategy Using LiDAR and Image-Derived MMS Point Clouds
by Baris Suleymanoglu, Metin Soycan and Charles Toth
Sensors 2024, 24(2), 503; https://doi.org/10.3390/s24020503 - 13 Jan 2024
Cited by 1 | Viewed by 1341
Abstract
The precise extraction of road boundaries is an essential task to obtain road infrastructure data that can support various applications, such as maintenance, autonomous driving, vehicle navigation, and the generation of high-definition maps (HD maps). Despite promising outcomes in prior studies, challenges persist in road extraction, particularly in discerning diverse road types. The proposed methodology integrates state-of-the-art techniques like DBSCAN and RANSAC, aiming to establish a universally applicable approach for diverse mobile mapping systems. This effort represents a pioneering step in extracting road information from image-based point cloud data. To assess the efficacy of the proposed method, we conducted experiments using a large-scale dataset acquired by two mobile mapping systems on the Yıldız Technical University campus; one system was configured as a mobile LiDAR system (MLS), while the other was equipped with cameras to operate as a photogrammetry-based mobile mapping system (MMS). Using manually measured reference road boundary data, we evaluated the completeness, correctness, and quality parameters of the road extraction performance of our proposed method on the two datasets. The completeness rates were 93.2% and 84.5%, while the correctness rates were 98.6% and 93.6%, respectively. The overall quality of the road curb extraction was 93.9% and 84.5% for the two datasets. Our proposed algorithm is capable of accurately extracting straight or curved road boundaries and curbs from complex point cloud data that includes vehicles, pedestrians, and other obstacles in urban environments. Furthermore, our experiments demonstrate that the algorithm can be applied to point cloud data acquired from different systems, such as MLS and MMS, with varying spatial resolutions and accuracy levels.
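The RANSAC component of such a pipeline can be sketched as a minimal 2D line fit over candidate curb points (a numpy toy with invented points, not the authors' 3D implementation): random point pairs propose lines, and the line with the most inliers wins.

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """Minimal RANSAC line fit, as used to model road boundaries after
    clustering candidate curb points (real pipelines fit 3D curves)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, d = points[i], points[j] - points[i]
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            continue
        # Perpendicular distance of every point to the line through the pair
        dist = np.abs(d[0] * (points[:, 1] - p[1])
                      - d[1] * (points[:, 0] - p[0])) / norm
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

rng = np.random.default_rng(1)
curb = np.stack([np.linspace(0, 10, 50), np.full(50, 2.0)], axis=1)  # curb at y=2
clutter = rng.uniform(0, 10, size=(20, 2))  # pedestrians, vehicles, noise
points = np.vstack([curb, clutter])
inliers = ransac_line(points)
```

DBSCAN would first cluster the raw cloud so each RANSAC fit sees one boundary candidate at a time; the robustness to the 20 clutter points here is the reason RANSAC suits scenes with vehicles and pedestrians.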

14 pages, 4358 KiB  
Article
Meshless Search SR-STAP for Airborne Radar Based on Meta-Heuristic Algorithms
by Yunfei Hou, Yingnan Zhang, Wenzhu Gui, Di Wang and Wei Dong
Sensors 2023, 23(23), 9444; https://doi.org/10.3390/s23239444 - 27 Nov 2023
Viewed by 638
Abstract
The sparse recovery (SR) space-time adaptive processing (STAP) method has excellent clutter suppression performance under the condition of limited observation samples. However, when the clutter is nonlinear in the spatial-Doppler profile, an off-grid effect arises that reduces sparse recovery performance. A meshless search using a meta-heuristic (MH) algorithm can, in theory, completely eliminate the off-grid effect. Therefore, genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO), and grey wolf optimization (GWO) methods are applied to SR-STAP for selecting exact clutter atoms in this paper. The simulation results show that MH-STAP can estimate the clutter subspace more accurately than the traditional algorithm, with PSO-STAP and GWO-STAP showing the best clutter suppression performance among the four MH-STAP methods. To search for more accurate clutter atoms, PSO and GWO are combined to improve the method’s capacity for global optimization. Meanwhile, the fitness function is improved by using prior knowledge of the clutter distribution. The simulation results show that the improved PSO-GWO-STAP algorithm provides excellent clutter suppression performance and solves the off-grid problem better than a single MH-STAP method.
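A bare-bones PSO, the kind of meshless meta-heuristic search used here to place clutter atoms off any fixed grid, can be sketched as follows (a numpy toy minimizing an invented fitness, not the paper's improved PSO-GWO fitness):

```python
import numpy as np

def pso(fitness, bounds, n_particles=30, n_iter=100, seed=0):
    """Bare-bones particle swarm optimization over a continuous domain,
    so candidate solutions are not restricted to a dictionary grid."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Toy fitness: distance to a "true" clutter atom at (0.3, -0.2) in a
# normalized (spatial frequency, Doppler frequency) plane.
target = np.array([0.3, -0.2])
best = pso(lambda p: np.sum((p - target) ** 2),
           (np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```

Because particles move continuously in the spatial-Doppler plane, the selected atom can sit exactly on the clutter ridge rather than on the nearest dictionary grid point, which is what removes the off-grid mismatch.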

15 pages, 9800 KiB  
Article
SMIFormer: Learning Spatial Feature Representation for 3D Object Detection from 4D Imaging Radar via Multi-View Interactive Transformers
by Weigang Shi, Ziming Zhu, Kezhi Zhang, Huanlei Chen, Zhuoping Yu and Yu Zhu
Sensors 2023, 23(23), 9429; https://doi.org/10.3390/s23239429 - 27 Nov 2023
Cited by 1 | Viewed by 1130
Abstract
4D millimeter-wave (mmWave) imaging radar is a new type of vehicle sensor technology that is critical to autonomous driving systems due to its lower cost and robustness in complex weather. However, the sparseness and noise of point clouds are still the main problems restricting its practical application. In this paper, we introduce SMIFormer, a multi-view feature fusion network framework based on single-modal 4D radar input. SMIFormer decouples the 3D point cloud scene into three independent but interrelated perspectives, namely bird’s-eye view (BEV), front view (FV), and side view (SV), thereby better modeling the entire 3D scene and overcoming the insufficient feature representation capability of any single view built from extremely sparse point clouds. For the multi-view features, we propose multi-view feature interaction (MVI), which exploits the inner relationship between different views by integrating features from intra-view and cross-view interaction. We evaluated the proposed SMIFormer on the View-of-Delft (VoD) dataset. The mAP of our method reached 48.77 and 71.13 in the fully annotated area and the driving corridor area, respectively. This shows that 4D radar has great development potential in the field of 3D object detection.
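The decomposition of a sparse point cloud into BEV, FV, and SV grids can be sketched as follows (a numpy toy using occupancy histograms with invented points; the paper learns pillar/voxel features rather than raw counts):

```python
import numpy as np

def project_views(points, grid=16, extent=4.0):
    """Project a 3D point cloud into three orthogonal planar views,
    BEV (x-y), front view (y-z), and side view (x-z), as occupancy grids."""
    bins = np.linspace(-extent, extent, grid + 1)
    x, y, z = points.T
    bev, _, _ = np.histogram2d(x, y, bins=(bins, bins))
    fv, _, _ = np.histogram2d(y, z, bins=(bins, bins))
    sv, _, _ = np.histogram2d(x, z, bins=(bins, bins))
    return bev, fv, sv

# Toy sparse "radar" cloud: a few points on a vehicle corner plus one outlier
cloud = np.array([[1.0, 1.0, 0.5],
                  [1.2, 1.0, 0.5],
                  [1.0, 1.2, 0.5],
                  [1.0, 1.0, 0.9],
                  [-2.0, 0.0, 0.2]])
bev, fv, sv = project_views(cloud)
```

Each view preserves a different pair of axes, so features that are ambiguous in one projection (e.g. height in BEV) remain visible in another, which is the motivation for fusing the three with cross-view attention.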

25 pages, 5961 KiB  
Article
High-Resolution L-Band TomoSAR Imaging on Forest Canopies with UAV Swarm to Detect Dielectric Constant Anomaly
by Hsu-Yueh Chuang and Jean-Fu Kiang
Sensors 2023, 23(19), 8335; https://doi.org/10.3390/s23198335 - 9 Oct 2023
Viewed by 816
Abstract
A rigorous TomoSAR imaging procedure is proposed to acquire high-resolution L-band images of a forest in a local area of interest. A focusing function is derived to relate the backscattered signals to the reflectivity function of the forest canopies without resorting to calibration. A forest voxel model is compiled to simulate different tree species, with the dielectric constant modeled with the Maxwell-Garnett mixing formula. Five different inverse methods are applied on two forest scenarios under three signal-to-noise ratios in the simulations to validate the efficacy of the proposed procedure. The dielectric-constant profile of trees can be used to monitor the moisture content of the forest. The use of a swarm of unmanned aerial vehicles (UAVs) is feasible to carry out TomoSAR imaging over a specific area to pinpoint potential spots of wildfire hazards.
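The Maxwell-Garnett mixing formula used to model the canopy dielectric constant can be written down directly (a numpy sketch; the host and inclusion permittivities and the fill fraction below are hypothetical, not values from the paper):

```python
import numpy as np

def maxwell_garnett(eps_host, eps_incl, fill):
    """Maxwell-Garnett effective permittivity of spherical inclusions
    (e.g. moist vegetation matter) at volume fraction `fill` in a host
    medium (e.g. air)."""
    ratio = (eps_incl - eps_host) / (eps_incl + 2 * eps_host)
    return eps_host * (1 + 3 * fill * ratio / (1 - fill * ratio))

# Hypothetical canopy: water-like inclusions (eps ~ 40 - 10j at L-band)
# occupying 5% of an air host
eps_eff = maxwell_garnett(1.0, 40.0 - 10.0j, 0.05)
```

The real part of the effective permittivity rises with moisture-laden fill fraction while the negative imaginary part captures loss, which is why the retrieved dielectric profile tracks forest moisture content.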