

Radar Techniques and Imaging Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (20 April 2023) | Viewed by 33602

Special Issue Editors


Guest Editor
National Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
Interests: imaging of several SAR modes; moving target detection; radar imaging; deep learning; ship detection; optical imaging

Guest Editor
State Key Laboratory of Millimeter Waves, School of Information Science and Engineering, Southeast University, Nanjing 210096, China
Interests: SAR/ISAR imaging; InSAR signal processing; millimeter waves radar

Guest Editor
National Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
Interests: multichannel SAR imaging

Special Issue Information

Dear Colleagues,

Owing to its all-day, all-weather, and long-range imaging capability, radar imaging has important civil applications, such as land monitoring, farmland mapping, ocean observation, and disaster rescue, as well as military applications, such as battlefield reconnaissance and the monitoring of military movements. With advances in radar technology and growing application demands, high-spatial-resolution imaging of natural and man-made targets by airborne, spaceborne, and other imaging radar platforms has attracted rapidly increasing attention. In recent years, imaging radar systems have also been evolving toward diversified platforms, comprehensive imaging modes, and advanced operating schemes. Radar data that fuse multi-source and multi-dimensional information place new requirements on synthetic aperture imaging algorithms for various platforms. Meanwhile, artificial intelligence techniques such as machine learning have been applied to remote sensing, using radar imagery to detect, identify, classify, and characterize targets, with the aim of properly handling multi-dimensional, multi-source data in different applications.

This Special Issue aims to report the latest radar technologies and imaging theory, as well as their applications in a wider range of fields. It mainly covers (but is not limited to) advanced radar techniques, new imaging mechanisms and imaging theory, the detection, classification, and recognition of targets of interest in radar images, and the acquisition and mining of image target information.

Contributions are welcome on the following topics (among others):

  • Novel imaging mechanism and imaging theory;
  • Active and passive imaging techniques;
  • Novel algorithms for radar target detection, classification, identification, and recognition;
  • Image quality and information content assessment;
  • Image focusing and enhancement;
  • Typical applications, such as spaceborne, airborne, automotive, etc.;
  • SAR image processing in remote sensing;
  • Multi-source data fusion;
  • SAR autofocus/MoCo;
  • Artificial intelligence in radar applications.

Dr. Guang-Cai Sun
Dr. Gang Xu
Dr. Jianlai Chen
Dr. Jixiang Xiang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • new imaging mechanism and theory
  • active and passive imaging
  • advanced radar techniques
  • target detection, classification and recognition
  • image focusing
  • SAR autofocus/MoCo
  • multi-source data fusion
  • artificial intelligence in radar applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research


22 pages, 4765 KiB  
Article
Three-Dimensional Geometry Reconstruction Method from Multi-View ISAR Images Utilizing Deep Learning
by Zuobang Zhou, Xiangguo Jin, Lei Liu and Feng Zhou
Remote Sens. 2023, 15(7), 1882; https://doi.org/10.3390/rs15071882 - 31 Mar 2023
Cited by 6 | Viewed by 2301
Abstract
The three-dimensional (3D) geometry reconstruction method based on ISAR image sequence energy accumulation (ISEA) performs well on triaxially stabilized space targets but fails when the target itself undergoes unknown motion. The orthogonal factorization method (OFM) can solve this problem well under certain assumptions. However, due to the sparsity and anisotropy of ISAR images, the extraction and association of feature points become very complicated, so the reconstructed geometry is usually a relatively sparse point cloud. Combining the advantages of the above methods, an extended factorization framework (EFF) is therefore proposed. First, a deep-learning-based instance segmentation method is used to extract and associate key points across the multi-view ISAR images. Then, the projection vectors between the 3D geometry of the space target and the multi-view ISAR images are obtained using the improved factorization method. Finally, the 3D geometry reconstruction problem is transformed into an unconstrained optimization problem and solved via quantum-behaved particle swarm optimization (QPSO). The proposed framework takes discretely observed multi-view range–Doppler ISAR images as input, makes full use of long-term observations of space targets from multiple perspectives, and is insensitive to target motion, which makes it highly feasible in practical applications. Experiments on simulated and measured data show the effectiveness and robustness of the proposed framework.
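The factorization step at the core of OFM-style reconstruction can be illustrated with a short sketch. Below is a minimal Tomasi–Kanade-style rank-3 factorization, assuming keypoints have already been extracted and associated across views; the matrix shapes, variable names, and toy data are illustrative, and the metric-upgrade and QPSO refinement stages of the EFF are not modeled.

    import numpy as np

    def factorize_keypoints(W):
        """Tomasi-Kanade-style orthogonal factorization (rank-3 SVD).

        W : (2F, P) matrix of associated keypoint coordinates from F ISAR views
            (range/cross-range rows stacked per view), P tracked scatterers.
        Returns an estimated projection matrix M (2F, 3) and 3D shape S (3, P),
        defined up to an affine ambiguity that a metric-upgrade step would resolve.
        """
        W0 = W - W.mean(axis=1, keepdims=True)      # remove per-view centroid
        U, s, Vt = np.linalg.svd(W0, full_matrices=False)
        M = U[:, :3] * np.sqrt(s[:3])               # projection (motion) factor
        S = np.sqrt(s[:3])[:, None] * Vt[:3, :]     # 3D point-cloud factor
        return M, S

    # toy usage: 5 views of 8 random 3D points under random orthographic projections
    rng = np.random.default_rng(0)
    S_true = rng.normal(size=(3, 8))
    M_true = rng.normal(size=(10, 3))
    M_est, S_est = factorize_keypoints(M_true @ S_true)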

14 pages, 7768 KiB  
Article
Fast Wideband Beamforming Using Convolutional Neural Network
by Xun Wu, Jie Luo, Guowei Li, Shurui Zhang and Weixing Sheng
Remote Sens. 2023, 15(3), 712; https://doi.org/10.3390/rs15030712 - 25 Jan 2023
Cited by 22 | Viewed by 3005
Abstract
With wideband beamforming approaches, synthetic aperture radar (SAR) can achieve high azimuth resolution and a wide swath. However, the performance of conventional adaptive wideband time-domain beamforming degrades severely when the received signal snapshots are insufficient for adaptive approaches. In this paper, a wideband beamformer based on a convolutional neural network (CNN), namely the frequency-constraint wideband beamforming prediction network (WBPNet), is proposed to obtain satisfactory performance under scanty snapshots. The proposed WBPNet successfully estimates the direction of arrival of the interference from scanty snapshots and obtains optimal weights that place effective nulls on the interference, exploiting the ability of a CNN to extract potential nonlinear features of the input. Meanwhile, the novel beamformer has an undistorted response to the wideband signal of interest. Compared with conventional time-domain wideband beamforming algorithms, the proposed method obtains adaptive weights quickly because it uses few snapshots. Moreover, the proposed WBPNet achieves satisfactory wideband beamforming performance with low computational complexity because it avoids inverting the covariance matrix. Simulation results show the superiority and feasibility of the proposed approach.
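As a rough illustration of predicting beamforming weights from snapshots with a CNN, the sketch below maps a batch of array snapshots (real/imaginary channels) to tapped-delay-line weights. The layer sizes, element count N, tap count K, and snapshot count are assumptions for illustration and do not reproduce the WBPNet architecture.

    import torch
    import torch.nn as nn

    N, K, SNAPSHOTS = 8, 5, 16          # elements, taps, snapshots (assumed values)

    class BeamformerCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((N, 1)), nn.Flatten(),
            )
            self.head = nn.Linear(32 * N, 2 * N * K)    # real+imag weight per element/tap

        def forward(self, x):                           # x: (B, 2, N, SNAPSHOTS)
            w = self.head(self.features(x))
            return w.view(-1, 2, N, K)                  # (B, re/im, N, K)

    model = BeamformerCNN()
    dummy = torch.randn(4, 2, N, SNAPSHOTS)             # 4 random snapshot batches
    print(model(dummy).shape)                           # torch.Size([4, 2, 8, 5])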

18 pages, 12713 KiB  
Article
Fusion of VNIR Optical and C-Band Polarimetric SAR Satellite Data for Accurate Detection of Temporal Changes in Vegetated Areas
by Luciano Alparone, Andrea Garzelli and Claudia Zoppetti
Remote Sens. 2023, 15(3), 638; https://doi.org/10.3390/rs15030638 - 21 Jan 2023
Cited by 7 | Viewed by 2756
Abstract
In this paper, we propose a processing chain jointly employing Sentinel-1 and Sentinel-2 data, aiming to monitor changes in the status of the vegetation cover by integrating the four 10 m visible and near-infrared (VNIR) bands with the three red-edge (RE) bands of Sentinel-2. The latter approximately span the gap between the red and NIR bands (700–800 nm), with bandwidths of 15/20 nm and 20 m pixel spacing. The RE bands are sharpened to 10 m following the hyper-sharpening protocol, which holds, unlike pansharpening, when the sharpening band is not unique. The resulting 10 m fusion product may be integrated with polarimetric features calculated from the Interferometric Wide (IW) Ground Range Detected (GRD) product of Sentinel-1, available at 10 m pixel spacing, before the fused data are analyzed for change detection. A key point of the proposed scheme is that the fusion of optical and synthetic aperture radar (SAR) data is accomplished at the level of change, through modulation of the optical change feature, namely the difference in normalized area over (reflectance) curve (NAOC) calculated from the sharpened RE bands, by the polarimetric SAR change feature, obtained as the temporal ratio of polarimetric features, where the latter is the pixel ratio between the co-polar and cross-polar channels. Hyper-sharpening of the Sentinel-2 RE bands, calculation of the NAOC, and modulation-based integration of the Sentinel-1 polarimetric change features are applied to multitemporal datasets acquired before and after a fire event over Mount Serra, Italy. The optical change feature captures variations in chlorophyll content. The polarimetric SAR temporal change feature describes depolarization effects and changes in the volumetric scattering of canopies. Their fusion shows an increased ability to highlight changes in vegetation status. In a performance comparison by means of receiver operating characteristic (ROC) curves, the proposed change-feature-based fusion approach surpasses a traditional area-based approach and the normalized burn ratio (NBR) index, which is widespread in the detection of burnt vegetation.
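The change-level fusion can be sketched as follows: an optical change feature derived from the sharpened red-edge bands is modulated by the temporal ratio of the co-/cross-polar ratio. The NAOC approximation used here (a trapezoidal area over the reflectance curve, normalized by the NIR-side value) and the band wavelengths are simplifying assumptions, not the paper's exact formulation.

    import numpy as np

    def naoc(re_bands, wavelengths_nm=(705.0, 740.0, 783.0)):
        """re_bands: (3, H, W) sharpened red-edge reflectances (simplified NAOC)."""
        area = np.trapz(re_bands, x=np.asarray(wavelengths_nm), axis=0)
        full = re_bands[-1] * (wavelengths_nm[-1] - wavelengths_nm[0]) + 1e-6
        return 1.0 - area / full                      # larger when the curve sags

    def fused_change(re_t1, re_t2, vv_t1, vh_t1, vv_t2, vh_t2):
        optical = np.abs(naoc(re_t2) - naoc(re_t1))   # optical change feature
        ratio_t1 = vv_t1 / (vh_t1 + 1e-6)             # co-/cross-pol ratio, time 1
        ratio_t2 = vv_t2 / (vh_t2 + 1e-6)             # co-/cross-pol ratio, time 2
        sar = np.maximum(ratio_t1, ratio_t2) / (np.minimum(ratio_t1, ratio_t2) + 1e-6)
        return optical * sar                          # modulation-based fusion

    # toy usage with random reflectances and backscatter ratios
    H, W = 64, 64
    rng = np.random.default_rng(1)
    args = [rng.uniform(0.05, 0.6, size=(3, H, W)) for _ in range(2)]
    args += [rng.uniform(0.01, 1.0, size=(H, W)) for _ in range(4)]
    print(fused_change(*args).shape)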

17 pages, 859 KiB  
Article
Cooperative Electromagnetic Data Annotation via Low-Rank Matrix Completion
by Wei Zhang, Jian Yang, Qiang Li, Jingran Lin, Huaizong Shao and Guomin Sun
Remote Sens. 2023, 15(1), 121; https://doi.org/10.3390/rs15010121 - 26 Dec 2022
Cited by 1 | Viewed by 1529
Abstract
Electromagnetic data annotation is one of the most important steps in many signal processing applications, e.g., radar signal deinterleaving and radar mode analysis. This work considers cooperative electromagnetic data annotation from multiple reconnaissance receivers/platforms. By exploiting the inherent correlation of the electromagnetic signal, as well as the correlation of the observations from multiple receivers, a low-rank matrix recovery formulation is proposed for the cooperative annotation problem. Specifically, since the measured parameters of the same emitter should be roughly the same at different platforms, cooperative annotation is modeled as a low-rank matrix recovery problem, which is solved iteratively either by a rank minimization method or by a maximum-rank decomposition method. The two methods are compared with the traditional annotation method on both synthetic and real data. Numerical experiments show that the proposed methods can effectively recover missing annotations and correct annotation errors.
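The low-rank recovery idea can be illustrated with a generic singular-value-thresholding sketch that fills in missing entries of an annotation matrix; it is not necessarily the rank-minimization or maximum-rank decomposition algorithm of the paper, and tau and the iteration count are illustrative.

    import numpy as np

    def complete_low_rank(M, mask, tau=1.0, n_iter=200):
        """M: observed matrix (platforms x parameters), mask: True where observed."""
        X = np.where(mask, M, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vt    # shrink singular values
            X[mask] = M[mask]                          # keep observed annotations
        return X

    # toy example: rank-2 "annotation" matrix with about 30% of entries missing
    rng = np.random.default_rng(2)
    truth = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 8))
    mask = rng.random(truth.shape) > 0.3
    est = complete_low_rank(np.where(mask, truth, 0.0), mask)
    print(np.abs(est - truth)[~mask].mean())           # small residual on missing entries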

18 pages, 4593 KiB  
Article
Transmit Beampattern Design for Distributed Satellite Constellation Based on Space–Time–Frequency DoFs
by Xiaomin Tan, Chongdi Duan, Yu Li, Jinming Chen and Jianping An
Remote Sens. 2022, 14(23), 6181; https://doi.org/10.3390/rs14236181 - 6 Dec 2022
Cited by 2 | Viewed by 2111
Abstract
For distributed satellite constellations, the cooperative operation of multiple small satellites can achieve detection performance equivalent to that of a single large satellite, which is a promising research topic for Next-Generation Radar (NGR) systems. However, dense grating lobes inevitably occur in the synthetic transmit pattern due to the distributed configuration, as a result of which the detection performance of dynamic coherent radar is seriously weakened. In this paper, a novel transmit beampattern optimization method for dynamic coherent radar based on a distributed satellite constellation is presented. Firstly, the effective coherent detection range interval is determined by several influencing factors, i.e., coherent detection, far-field, and system link constraints. Then, we discuss a quantitative evaluation method for coherent integration in terms of synchronization error, beam pointing error, and high-speed motion characteristics, and allocate the corresponding terms in a reasonable way from an engineering perspective. Finally, the space–time–frequency degrees of freedom (DOFs), drawn from satellite spacing, carrier frequencies, and platform motion characteristics, are utilized to realize a robust transmit beampattern with low sidelobes by invoking a genetic algorithm (GA). Simulation results validate the effectiveness of the theoretical analysis, and unambiguous coherent transmit beamforming with a satellite constellation of limited scale is accomplished.
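The final optimization stage can be caricatured with a toy evolutionary search that adjusts element positions of a sparse linear array to lower the peak sidelobe of its array factor; the space–time–frequency DOFs, link constraints, and the actual GA operators of the paper are not modeled, and all parameters below are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    N_EL, APERTURE, WAVELEN = 8, 40.0, 1.0            # elements, aperture, wavelength (arbitrary units)
    u = np.linspace(-1.0, 1.0, 2001)                  # sin(angle) grid

    def peak_sidelobe_db(pos):
        af = np.abs(np.exp(2j * np.pi * np.outer(u, pos) / WAVELEN).sum(axis=1)) / len(pos)
        main = np.abs(u) < 0.02                       # exclude a small mainlobe region
        return 20 * np.log10(af[~main].max())

    pop = [np.sort(rng.uniform(0, APERTURE, N_EL)) for _ in range(40)]
    for gen in range(60):
        pop.sort(key=peak_sidelobe_db)
        parents = pop[:10]                            # keep the best candidates
        children = [np.sort(p + rng.normal(0, 0.5, N_EL)).clip(0, APERTURE)
                    for p in parents for _ in range(3)]
        pop = parents + children                      # next generation
    best = min(pop, key=peak_sidelobe_db)
    print(f"peak sidelobe: {peak_sidelobe_db(best):.1f} dB")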

30 pages, 11902 KiB  
Article
Urban Traffic Imaging Using Millimeter-Wave Radar
by Bo Yang, Hua Zhang, Yurong Chen, Yongjun Zhou and Yu Peng
Remote Sens. 2022, 14(21), 5416; https://doi.org/10.3390/rs14215416 - 28 Oct 2022
Cited by 9 | Viewed by 3225
Abstract
Imaging technology enhances a radar's awareness of its environment. Imaging radar can provide richer target information for traffic management systems than conventional traffic detection radar. However, there is still a lack of research on millimeter-wave radar imaging for urban traffic surveillance. To address this, we propose an improved three-dimensional FFT imaging algorithm architecture for roadside radar imaging in urban traffic scenarios, enabling simultaneous imaging of dynamic and static targets. Firstly, by analyzing the target characteristics and background noise of urban traffic scenes, a Monte-Carlo-based constant false alarm rate detection algorithm (MC-CFAR) and an improved MC-CFAR algorithm are proposed for detecting moving vehicles and static environmental targets, respectively. Then, for the velocity ambiguity problem with multiple targets and large velocity ambiguity cycles, an improved hypothetical phase compensation algorithm (HPC-SNR) is proposed and implemented. Further, the density-based spatial clustering of applications with noise (DBSCAN) algorithm is used to remove outliers and obtain a clean radar point cloud image. Finally, traffic targets within a 50 m range are presented as two-dimensional (2D) point cloud images. In addition, we estimate the vehicle type from the target point cloud size, reaching an accuracy of more than 80% under sparse traffic conditions. The proposed method is verified on real traffic data collected by a roadside millimeter-wave radar system. This work can support intelligent transportation management and extends radar imaging applications.
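The point-cloud cleaning step maps directly onto an off-the-shelf DBSCAN call, as in the sketch below: isolated detections are labeled as noise and dropped. The eps and min_samples values are illustrative, not the parameters tuned in the paper.

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(4)
    vehicle = rng.normal(loc=[20.0, 3.0], scale=0.4, size=(60, 2))    # dense cluster (x, y in m)
    clutter = rng.uniform(low=[0, -10], high=[50, 10], size=(25, 2))  # scattered outliers
    points = np.vstack([vehicle, clutter])

    labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points)
    clean = points[labels != -1]                                      # keep clustered detections
    print(f"kept {len(clean)} of {len(points)} points")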

21 pages, 3390 KiB  
Article
Machine-Learning-Based Framework for Coding Digital Receiving Array with Few RF Channels
by Lei Xiao, Yubing Han and Zuxin Weng
Remote Sens. 2022, 14(20), 5086; https://doi.org/10.3390/rs14205086 - 12 Oct 2022
Cited by 3 | Viewed by 2438
Abstract
A novel framework for a low-cost coding digital receiving array based on machine learning (ML-CDRA) is proposed in this paper. The received full-array signals are encoded into a few radio frequency (RF) channels and decoded by an artificial neural network in real time. The encoding and decoding networks are studied in detail, including the implementation of the encoding network and the loss function and complexity of the decoding network. A generalized form of the loss function is presented, constrained by maximum likelihood, signal sparsity, and noise. Moreover, a feasible loss function is given as an example, and the corresponding back-propagation derivations are presented. In addition, a real-time processing architecture for ML-CDRA is presented based on commercial chips; it can be implemented by adding an additional FPGA to the hardware of a full-channel DRA. ML-CDRA requires fewer RF channels than a traditional full-channel array while maintaining similar digital beamforming (DBF) performance. This provides a practical solution to typical problems in existing low-cost DBF systems, such as synchronization, moving target compensation, and failure at low signal-to-noise ratios. The performance of ML-CDRA is evaluated in simulations.
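A schematic encode/decode pair conveys the general idea: a fixed linear coding matrix compresses N element signals into M < N channels, and a small network reconstructs the full array under a loss that combines reconstruction error (a stand-in for the likelihood term) with an L1 sparsity penalty. The dimensions, architecture, and penalty weight are assumptions, not the paper's generalized loss.

    import torch
    import torch.nn as nn

    N, M = 16, 4                                       # array elements, RF channels (assumed)
    encode = torch.randn(M, N) / N ** 0.5              # fixed linear "coding network"
    decoder = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, N))

    opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    for step in range(200):
        x = torch.randn(128, N)                        # surrogate array snapshots
        y = x @ encode.T                               # few-channel measurements
        x_hat = decoder(y)
        loss = ((x_hat - x) ** 2).mean() + 0.01 * x_hat.abs().mean()   # reconstruction + sparsity
        opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))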

23 pages, 48463 KiB  
Article
A Fast and Precise Plane Segmentation Framework for Indoor Point Clouds
by Yu Zhong, Dangjun Zhao, Dongyang Cheng, Junchao Zhang and Di Tian
Remote Sens. 2022, 14(15), 3519; https://doi.org/10.3390/rs14153519 - 22 Jul 2022
Cited by 6 | Viewed by 3033
Abstract
To improve the efficiency and accuracy of plane segmentation for indoor point clouds, this paper proposes a fast and precise plane segmentation framework which mainly consists of two steps: rough plane segmentation and precise segmentation. In the rough segmentation stage, the point clouds are first voxelized, and planes are then roughly extracted according to plane normal vectors and nearest-voxel conditions. Based on the results of rough segmentation, a further operation composed of downsampling and density-based spatial clustering of applications with noise (DBSCAN) is adopted to produce efficient and precise segmentation. Finally, to correct over-segmentation, distance and normal-vector angle thresholds between planes are taken into consideration. The experimental results show that the proposed method improves the efficiency and accuracy of indoor point cloud plane segmentation, with an average intersection-over-union (IoU) of 0.8653.
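The refinement stage can be sketched as voxel downsampling followed by DBSCAN, which splits a roughly extracted plane into spatially connected components so that coplanar but disjoint surfaces are separated. Voxel size, eps, and min_samples below are illustrative.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def voxel_downsample(points, voxel=0.05):
        keys = np.floor(points / voxel).astype(int)
        _, idx = np.unique(keys, axis=0, return_index=True)    # keep one point per voxel
        return points[np.sort(idx)]

    rng = np.random.default_rng(5)
    wall_a = rng.uniform([0, 0, 0], [1, 2, 0.01], size=(400, 3))    # two coplanar patches
    wall_b = rng.uniform([3, 0, 0], [4, 2, 0.01], size=(400, 3))    # separated along x
    rough_plane = np.vstack([wall_a, wall_b])

    sampled = voxel_downsample(rough_plane)
    labels = DBSCAN(eps=0.15, min_samples=5).fit_predict(sampled)
    print("segments found:", len(set(labels) - {-1}))               # expect 2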

20 pages, 6678 KiB  
Article
Modeling and Analysis of RFI Impacts on Imaging between Geosynchronous SAR and Low Earth Orbit SAR
by Xichao Dong, Yi Sui, Yuanhao Li, Zhiyang Chen and Cheng Hu
Remote Sens. 2022, 14(13), 3048; https://doi.org/10.3390/rs14133048 - 25 Jun 2022
Cited by 2 | Viewed by 2153
Abstract
Due to the short revisit time and large coverage of geosynchronous synthetic aperture radars (GEO SARs) and the increasing number of low Earth orbit synthetic aperture radar (LEO SAR) constellations, radio frequency interference (RFI) between GEO SARs and LEO SARs may occur, deteriorating the quality of SAR images. Traditional methods simplify RFI to noise-like interference without considering the signal characteristics. In this paper, to quantitatively evaluate the impacts of GEO-to-LEO and LEO-to-GEO RFI on imaging, an RFI-impact quantitative analysis model is established. Taking into account the chirp signal form of SAR systems, the RFI power and the image signal-to-interference-plus-noise ratio (SINR) are theoretically derived and validated by numerical experiments. Based on the proposed method, the SAR image quality under different system parameters and bistatic configurations is estimated, and the probability of different configurations is also given. The results show that specular bistatic scattering RFI between GEO SARs and LEO SARs seriously affects imaging, and its probability can approach 2% for certain orbital parameters and will grow as LEO SAR constellations expand, implying the need to suppress RFI between GEO SAR and LEO SAR systems.
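The kind of quantity the paper derives can be illustrated with a back-of-the-envelope image-domain SINR estimate in which the wanted chirp receives the full two-dimensional compression gain while the RFI and thermal noise do not. This is a generic simplification, not the paper's model, and every number below is made up.

    import numpy as np

    def image_sinr_db(p_sig, p_rfi, p_noise, n_range=4096, n_azimuth=2048):
        g_coherent = n_range * n_azimuth          # 2D compression gain for the matched signal
        g_incoherent = 1.0                        # noise-like terms get no coherent gain
        sinr = (p_sig * g_coherent) / (p_rfi * g_incoherent + p_noise * g_incoherent)
        return 10 * np.log10(sinr)

    print(f"{image_sinr_db(p_sig=1e-14, p_rfi=5e-13, p_noise=1e-13):.1f} dB")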

Other


16 pages, 2940 KiB  
Technical Note
High-Resolution and Wide-Swath SAR Imaging with Space–Time Coding Array
by Kun Yu, Shengqi Zhu, Lan Lan and Biao Yang
Remote Sens. 2023, 15(9), 2465; https://doi.org/10.3390/rs15092465 - 8 May 2023
Cited by 4 | Viewed by 2201
Abstract
To achieve high-resolution and wide-swath (HRWS) synthetic aperture radar (SAR) images, this paper focuses on separating range-ambiguous echoes with the space–time coding (STC) array. At the modeling stage, the transmit elements and pulses of the STC array are configured with time-delay and phase-coding modulation, which introduces extra degrees of freedom (DOFs) in the transmit domain. To separate the echoes corresponding to different range-ambiguity regions, equivalent transmit beamforming is performed in the two-dimensional space–frequency domain. Moreover, to compensate for the loss of range resolution during the beamforming process, a frequency splicing method is proposed. At the analysis stage, a distributed-target simulation demonstrates the effectiveness of obtaining HRWS SAR images with the STC radar. Additionally, the range-ambiguity resolution performance is compared with that of traditional radar in terms of the range-ambiguity-to-signal ratio (RASR).
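The space–time coding itself can be sketched schematically: each transmit element emits the same chirp with a small element-dependent time delay and a pulse-dependent phase code, which is what creates the extra transmit DOFs. The delay step, phase code, and chirp parameters are illustrative, not the paper's design.

    import numpy as np

    FS, T, B = 200e6, 10e-6, 100e6                  # sample rate, pulse length, bandwidth (assumed)
    M_TX, N_PULSE, DT = 4, 8, 0.2e-6                # transmit elements, pulses, delay step
    t = np.arange(int(T * FS)) / FS
    chirp = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)

    def stc_pulse(m, n):
        delay = int(round(m * DT * FS))             # element-dependent time delay
        phase = np.exp(1j * 2 * np.pi * m * n / M_TX)   # pulse-dependent phase code
        return np.roll(chirp, delay) * phase        # circular shift stands in for a true delay

    waveforms = np.array([[stc_pulse(m, n) for n in range(N_PULSE)] for m in range(M_TX)])
    print(waveforms.shape)                          # (M_TX, N_PULSE, samples)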

12 pages, 4004 KiB  
Technical Note
Coherent Multi-Dwell Processing of Un-Synchronized Dwells for High Velocity Estimation and Super-Resolution in Radar
by Benzion Levy, Lior Maman, Shlomi Shvartzman and Yosef Pinhasi
Remote Sens. 2023, 15(3), 782; https://doi.org/10.3390/rs15030782 - 30 Jan 2023
Viewed by 1845
Abstract
This paper describes a coherent multi-dwell processing (CMDP) method for high velocity estimation and super-resolution in search and track-while-search (TWS) radar modes, using an unconventional signal processing algorithm that exploits multi-dwell transmissions. The multi-dwell waveform is needed for visibility purposes, to unfold the target's velocity and range ambiguity, and is here proposed to be exploited for high velocity estimation and super-resolution. The proposed scheme is shown to improve velocity estimation and Doppler resolution performance for unambiguous targets compared with classical radar processing. The processing concept uses the same transmitted waveform (WF) and time duration, without increasing the time on target (TOT), through a sophisticated coherent concatenation of the received dwells with velocity compensation between the dwells. The phase compensation on receive is implemented for each target according to its characteristics, meaning that target velocities are estimated in each dwell separately. The notable result of CMDP is a linear improvement in Doppler resolution obtained with the given search resources, without knowing the target characteristics in advance or the dwell delay time. Other possible benefits of this process are the ability to achieve larger detection ranges and high-accuracy angle measurements in search mode, owing to the higher signal-to-noise ratio (SNR) of the extended dwell, and the ability to track more targets thanks to efficient time and resource management. An outstanding opportunity to exploit CMDP is by combining missions in phased array (PA) radars, meeting the multi-objective needs of both high spatial scan rates for illuminating the target and high Doppler estimation and resolution performance.
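The coherent concatenation idea can be sketched for a single target: two dwells of slow-time samples are stitched together after compensating the phase accumulated over the inter-dwell gap for a per-dwell velocity estimate, and the longer sequence yields a finer Doppler resolution. PRF, gap, carrier frequency, and velocities are illustrative numbers.

    import numpy as np

    PRF, N, GAP, FC, C = 2e3, 64, 5e-3, 10e9, 3e8   # PRF, pulses per dwell, gap (s), carrier, c
    V_TRUE = 30.0                                   # target radial velocity (m/s)
    fd = 2 * V_TRUE * FC / C                        # true Doppler frequency

    t1 = np.arange(N) / PRF                         # slow time, dwell 1
    t2 = t1 + N / PRF + GAP                         # slow time, dwell 2 (after a gap)
    dwell1 = np.exp(2j * np.pi * fd * t1)
    dwell2 = np.exp(2j * np.pi * fd * t2)

    v_hat = 30.0                                    # velocity estimated from a single dwell
    fd_hat = 2 * v_hat * FC / C
    comp = np.exp(-2j * np.pi * fd_hat * GAP)       # remove the phase accumulated over the gap
    stitched = np.concatenate([dwell1, dwell2 * comp])

    res_single = PRF / N                            # Doppler resolution of one dwell (Hz)
    res_cmdp = PRF / len(stitched)                  # resolution of the stitched sequence (Hz)
    print(res_single, res_cmdp)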

14 pages, 4811 KiB  
Technical Note
Extended Polar Format Algorithm (EPFA) for High-Resolution Highly Squinted SAR
by Ping Guo, Fuen Wu and Anyi Wang
Remote Sens. 2023, 15(2), 456; https://doi.org/10.3390/rs15020456 - 12 Jan 2023
Cited by 2 | Viewed by 2082
Abstract
The conventional polar format algorithm (CPFA) is widely used for synthetic aperture radar (SAR) because of its simple and efficient operations. However, due to its wavefront curvature assumption, the CPFA's depth of focus (DOF) is extremely small, which greatly limits the scene size, especially for high-resolution and highly squinted (HRHS) SAR. To solve this problem, an extended PFA (EPFA) is proposed in this study, which re-derives the mapping functions by expanding the range history into slant- and cross-range components according to the way real data are stored. This allows full use of the stored data, which the CPFA cannot achieve because of the large approximations introduced by projecting the echo data onto the ground. The wavefront curvature error is then analyzed and eliminated using a space-variant phase compensation function. Owing to the high accuracy of the expansion in the slant-range plane and the space-variant correction processing, the EPFA has a larger DOF than the CPFA. The EPFA is also better suited to undulating terrain, since it avoids the projection of real data onto the ground plane performed in the CPFA. Comparative analyses of simulated data and real-world images suggest that the proposed EPFA achieves better focusing than the CPFA and is particularly useful for HRHS SAR.

16 pages, 5202 KiB  
Technical Note
Ground-Based SAR Moving Target Refocusing Based on Relative Speed for Monitoring Mine Slopes
by Wenjie Shen, Shuo Wang, Yun Lin, Yang Li, Fan Ding and Yanping Wang
Remote Sens. 2022, 14(17), 4243; https://doi.org/10.3390/rs14174243 - 28 Aug 2022
Cited by 7 | Viewed by 2149
Abstract
Ground-based synthetic aperture radar (GBSAR) has the advantage of retrieving submillimeter deformation of mine slopes using the differential interferometry technique, which is important for safe production in mining applications. However, the defocused/displaced signal of moving vehicles masks the SAR image of the mining area, which affects the accuracy of interference phase extraction and deformation inversion. To remove this influence, the moving targets can first be refocused and then removed. To our knowledge, there is currently no GBSAR moving target refocusing method, so such a method is needed. To solve the above problem, this paper proposes a single-channel FMCW-GBSAR moving target refocusing method based on relative speed. Firstly, the FMCW-GBSAR moving target signal model is analyzed, and a relative-speed-based signal model is then derived. Based on this model and the incomplete synthetic aperture characteristic of GBSAR, the range–Doppler (RD) algorithm is adapted to achieve refocusing using relative speed parameters. The algorithm is controlled by the relative speed and squint angle; thus, the refocused target image can be obtained by searching over these two parameters. The proposed method is verified with synthetic data generated by combining real NCUT FMCW GBSAR data with simulated moving target echoes.
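The two-parameter search can be sketched generically: candidate relative speeds and squint angles are scanned on a grid and scored with image entropy, keeping the best-focused result. The refocus_rd() function below is a placeholder quadratic-phase kernel, not the improved RD algorithm of the paper; all parameters are illustrative.

    import numpy as np

    def image_entropy(img):
        p = np.abs(img) ** 2
        p = p / (p.sum() + 1e-12)
        return float(-(p * np.log(p + 1e-12)).sum())

    def refocus_rd(echo, rel_speed, squint_deg):
        # placeholder: apply a slow-time quadratic phase parameterised by the
        # hypothesised relative speed and squint angle, then compress via FFT
        n = np.arange(echo.shape[1])
        phase = np.exp(-1j * np.pi * rel_speed * np.cos(np.deg2rad(squint_deg)) * (n / n.size) ** 2)
        return np.fft.fft(echo * phase, axis=1)

    def search_refocus(echo, speeds, squints):
        best = min(((s, q) for s in speeds for q in squints),
                   key=lambda p: image_entropy(refocus_rd(echo, *p)))
        return best, refocus_rd(echo, *best)

    echo = np.random.default_rng(6).normal(size=(32, 256)) * (1 + 0j)   # surrogate raw data
    params, image = search_refocus(echo, np.linspace(-5, 5, 11), np.linspace(-10, 10, 5))
    print(params, image.shape)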
