Article

A Novel 3D Point Cloud Reconstruction Method for Single-Pass Circular SAR Based on Inverse Mapping with Target Contour Constraints

1 School of Electronic Information Engineering, Beihang University, Beijing 100191, China
2 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
3 Key Laboratory of Technology in Geospatial Information Processing and Application System, Chinese Academy of Sciences, Beijing 100190, China
4 School of Electronic Information Engineering, North China University of Technology, Beijing 100144, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(7), 1275; https://doi.org/10.3390/rs17071275
Submission received: 8 March 2025 / Revised: 31 March 2025 / Accepted: 1 April 2025 / Published: 3 April 2025
(This article belongs to the Special Issue 3D Scene Reconstruction, Modeling and Analysis Using Remote Sensing)

Abstract

Circular synthetic aperture radar (CSAR) is an advanced imaging mechanism with three-dimensional (3D) imaging capability, enabling the acquisition of omnidirectional scattering information for observation regions. The existing 3D point cloud reconstruction method for single-pass CSAR is capable of obtaining the 3D scattering points for targets by inversely mapping the projection points in multi-aspect sub-aperture images and subsequently voting on the scattering candidates. However, due to the influence of non-target background points in multi-aspect sub-aperture images, there are several false points in the 3D reconstruction result, which affect the quality of the produced 3D point cloud. In this paper, we propose a novel 3D point cloud reconstruction method for single-pass CSAR based on inverse mapping with target contour constraints. The proposed method can constrain the range and height of inverse mapping by extracting the contour information of targets from multi-aspect sub-aperture CSAR images, which contributes to improving the reconstruction quality of 3D point clouds for targets. The performance of the proposed method was verified based on X-band CSAR measured data sets.

1. Introduction

Circular synthetic aperture radar (CSAR) is a novel imaging mechanism that can realize three-dimensional (3D) imaging and obtain omnidirectional scattering information from a single circular flight, mitigating the layover and shadow effects present in traditional SAR images [1]. Therefore, the 3D reconstruction of targets from CSAR images has become a research hotspot in SAR imaging. Several airborne CSAR flight experiments have verified the feasibility of CSAR for 3D target reconstruction [2,3].
As previously noted in reference [4], the full-aperture coherent 3D imaging method for single-pass CSAR can obtain high resolution for ideal isotropic targets, but it cannot obtain the desired resolution for anisotropic targets with limited azimuth persistence. Moreover, the high sidelobes caused by the donut-shaped spectrum seriously affect the quality of 3D CSAR images [5]. The existing solutions include multi-pass CSAR full-aperture coherent 3D imaging [6], multi-pass holographic CSAR 3D imaging [7], and single-pass CSAR incoherent 3D imaging [8]. The multi-pass CSAR full-aperture coherent 3D imaging method can suppress the conical sidelobe in CSAR images by increasing the number of baselines along the elevation [9]. The multi-pass holographic CSAR 3D imaging method can obtain high-quality 3D imaging results of the target through multi-pass sub-aperture tomographic imaging and the incoherent fusion of multi-aspect 3D imaging results [10,11,12,13]. Due to the high precision required for the interferometric baseline interval in multi-pass data acquisition, the complexity and cost of the above methods are relatively high. In recent years, circular array SAR has become the main equipment for holographic CSAR data acquisition [14]. Currently, the holographic CSAR 3D imaging method with circular InSAR data can obtain 3D CSAR images using only two channels [15,16].
Compared with the above multi-pass CSAR 3D imaging methods, the single-pass CSAR incoherent 3D imaging method has a more flexible observation geometry and lower requirements on system parameters. The existing single-pass CSAR incoherent 3D imaging methods include the stereo radargrammetric method [17], the method with prior geometric models [18], and the shape recovery method [19]. The stereo radargrammetric method can realize the 3D reconstruction of the observation scene by densely matching the feature points in stereo image pairs [20,21]. Due to the limited azimuth persistence of anisotropic targets, matching points in adjacent sub-aperture images may fail to be found, which degrades the quality of 3D reconstruction for targets. The method with prior geometric models can obtain the bottom contours of targets from CSAR images and deduce the top contour of the target from the layover with the prior model, forming the 3D contour points of targets [22]. As a consequence of the necessity of prior information, this method can only be applied to targets with an effective prior geometric model. The shape recovery method can infer the 3D shape of the target by using the scattering projection points of the target in the multi-aspect CSAR images. The existing shape recovery methods include the voxel area sculpturing-based (VAS) method [23], the extended neural radiance field (NeRF) method [24], and the inverse mapping and voting (IMV) method [25]. The VAS method can reconstruct the 3D convex hull of targets by counting the number of times the scattering points of the target appear in multi-aspect SAR images. The extended NeRF method can reconstruct a high-quality 3D structure of the observation scene by solving the four-dimensional radar scattering field.
The IMV method can reconstruct the 3D point clouds of targets by inversely mapping the projection points in multi-aspect sub-aperture images to the 3D grid and selecting scattering points with the specific voting criteria. Compared with the VAS method and the extended NeRF method, the IMV method has lower computational complexity. However, due to the influence of non-target background points of complex scenes at multi-aspect angles, there are several false points in the 3D point clouds obtained by the IMV method, which affect the 3D reconstruction quality of targets.
The false points within the 3D point clouds for targets can be eliminated by constraining the parameter of inverse mapping, which can be calculated by using the contour information of targets in multi-aspect images. The feasibility of estimating the contour points of targets on the imaging plane from CSAR images has been demonstrated in previous research [26,27]. Therefore, we propose the 3D point cloud reconstruction method based on inverse mapping with target contour constraints for single-pass CSAR. First, multi-aspect sub-aperture CSAR images of targets are obtained by back-projection (BP) imaging, image registration, and low-rank sparse decomposition (LRSD). Then, the contour points of targets are extracted from multi-aspect images, which contribute to computing the range and height of inverse mapping. Ultimately, the scattering points of targets are selected from scattering candidates in multi-aspect angles according to the scattering probabilities of scattering candidates calculated by inverse mapping with target contour constraints, reconstructing a 3D point cloud with superior quality.
This paper is organized as follows. In Section 2, we analyze the principle of CSAR incoherent 3D imaging based on the geometry and signal model for CSAR imaging. Section 3 introduces the processing flow of the proposed method. The proposed method is validated by X-band CSAR measured data sets in Section 4. Section 5 presents a summary of the paper.

2. CSAR Incoherent 3D Imaging

Single-pass CSAR can achieve 3D imaging owing to its nonlinear synthetic aperture. Nevertheless, several scattering points of the target in the full-aperture coherent 3D imaging result for single-pass CSAR are obscured by high sidelobes, preventing the normal acquisition of 3D scattering information [7]. To address this issue, a single-pass CSAR incoherent 3D imaging strategy based on inverse mapping and voting was proposed in [25], which can reconstruct the 3D point clouds of targets without additional auxiliary information. In this section, we describe the geometry and signal model for CSAR imaging and analyze the basic principle of CSAR incoherent 3D imaging.

2.1. Geometry and Signal Model for CSAR Imaging

The CSAR imaging geometry for height targets is illustrated in Figure 1. The radar platform maintains continuous observation of the target region along the circular flight path in side-looking observation mode. In Figure 1, $\theta$ is the azimuth angle of radar observation, $R_a(\theta)$ is the track radius of the aircraft, and $H$ is the flight altitude. $P(x_p, y_p, h)$ is an arbitrary target with a height $h$ above the observation plane. $R_p(\theta)$ is the distance between the antenna phase center (APC) $A(x_a(\theta), y_a(\theta), H)$ and the target $P$, which is written as

$$R_p(\theta) = \sqrt{(x_a(\theta) - x_p)^2 + (y_a(\theta) - y_p)^2 + (H - h)^2}. \quad (1)$$
$\alpha(\theta)$ is the complementary angle of the incidence angle from APC $A$ to target $P$, which can be expressed as

$$\alpha(\theta) = \arcsin\left[(H - h) / R_p(\theta)\right]. \quad (2)$$
$Q(x_q, y_q, 0)$ is the equidistant point of target $P$ on the ground plane, and $R_q(\theta)$ is the distance from APC $A$ to point $Q$. From the coordinates of point $P$ and the geometric relationship between points $P$ and $Q$, the coordinates of point $Q$ can be calculated as

$$x_q = x_p + r \cos\theta, \quad y_q = y_p + r \sin\theta, \quad (3)$$
where $r$ is the offset from point $Q$ to the projection point $P_0$ of point $P$. The geometric transformation relationship between the offset $r$ and the target height $h$ is

$$r = h / \tan\beta, \quad (4)$$
where $\beta$ is the angle between lines $PQ$ and $QP_0$. Since points $P$ and $Q$ are equidistant from the APC, $\beta$ can be expressed as

$$\beta = \pi/2 - \Delta\alpha/2 - \alpha, \quad (5)$$
where $\alpha = \mathrm{mean}[\alpha(\theta)]$, and $\Delta\alpha$ is the angle between lines $AP$ and $AQ$,

$$\Delta\alpha = 2 \arcsin\left[\frac{h}{2 R_p(\theta) \sin\beta}\right]. \quad (6)$$
As shown in Equation (6), for a given horizontal position of the target, the higher the target height, the wider the angle $\Delta\alpha$. Since the target height is generally positive, the value range of $\Delta\alpha$ is from $2\arcsin[h/(2H)]$ to $2\arcsin[h/(2(H - h))]$. Moreover, the angle $\Delta\alpha$ can be considered much smaller than the angle $\pi/2 - \alpha$ under the condition $\Delta\alpha \le (\pi/2 - \alpha)/10$. Therefore, when the target height and the flight altitude of the aircraft satisfy $h \le H / \left[1 + 10/(\pi/2 - \alpha)\right]$, the impact of $\Delta\alpha$ can be ignored, i.e., $\beta \approx \pi/2 - \alpha$.
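The projection geometry of Equations (1)–(4) can be sketched numerically. The helper below is an illustrative function of ours (the name and parameter values are not from the paper), and it assumes the small-$\Delta\alpha$ regime discussed above, so that $\beta \approx \pi/2 - \alpha$:

```python
import numpy as np

# Illustrative sketch of the projection geometry in Eqs. (1)-(4).
# Assumes h << H so that beta ~= pi/2 - alpha (Delta_alpha neglected).
def projection_offset(x_p, y_p, h, theta, R_a, H):
    """Map a height-h target P to its equidistant ground point Q at azimuth theta."""
    # Antenna phase centre A on the circular track.
    x_a, y_a = R_a * np.cos(theta), R_a * np.sin(theta)
    # Slant range from A to P, Eq. (1).
    R_p = np.sqrt((x_a - x_p) ** 2 + (y_a - y_p) ** 2 + (H - h) ** 2)
    # Complementary angle of the incidence angle, Eq. (2).
    alpha = np.arcsin((H - h) / R_p)
    beta = np.pi / 2 - alpha          # small-Delta_alpha approximation of Eq. (5)
    r = h / np.tan(beta)              # layover offset, Eq. (4)
    # Equidistant point Q on the ground plane, Eq. (3).
    return x_p + r * np.cos(theta), y_p + r * np.sin(theta), r
```

For $h = 0$ the offset vanishes and $Q$ coincides with the vertical projection $P_0$; the offset grows with target height, which is the layover effect exploited later for inverse mapping.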
The linear frequency modulation (LFM) signal is employed as the transmitted signal, and the preprocessed echo signal is

$$S_r(\theta, K) = \sigma_p \cdot \mathrm{rect}\left(\frac{f}{B_r}\right) \exp\left(-j 2 K R_p(\theta)\right), \quad (7)$$

where $\sigma_p$ represents the backscattering coefficient of the target, $f$ represents the frequency variable in range, $B_r$ represents the bandwidth of the transmitted signal, $K$ represents the wavenumber in range, $K = 2\pi f / c$, and $c$ is the propagation velocity of the electromagnetic wave.
We use the BP algorithm to form the CSAR images of observation scenes. Here, the ground plane is taken as the imaging reference plane. The BP imaging result of the height target in the reference plane is expressed as
$$I(x, y) = \sigma \cdot \int_{\theta_s}^{\theta_e} \left[\int_{K_{\min}}^{K_{\max}} \exp\left(-j 2 K R_p(\theta)\right) \mathrm{d}K\right] \exp\left(j 2 K_c R(\theta)\right) \mathrm{d}\theta, \quad (8)$$

where $\sigma$ is the amplitude of the obtained CSAR image, and $\theta_s$ and $\theta_e$ are the minimum and maximum values of the synthetic aperture angle, respectively. $K_{\min}$ and $K_{\max}$ are the minimum and maximum values of the range wavenumber, respectively, and $K_c = (K_{\min} + K_{\max})/2$ is the wavenumber center in range. $R(\theta)$ is the distance from the APC to an arbitrary pixel $(x, y)$ on the imaging plane, which can be expressed as

$$R(\theta) = \sqrt{(x_a(\theta) - x)^2 + (y_a(\theta) - y)^2 + H^2}. \quad (9)$$
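As a rough numerical illustration of Equations (7)–(9), the sketch below simulates the echo of a single ground-plane point target and back-projects it onto the reference plane, collapsing range compression and phase compensation into one matched correlation. All parameters (band, track, grid) are illustrative choices of ours, not values from the paper's experiments:

```python
import numpy as np

# Minimal numerical sketch of BP imaging (Eqs. (7)-(9)) for a single
# ground-plane point target; parameters are illustrative only.
c = 3e8
f = np.linspace(9.5e9, 10.5e9, 128)            # X-band frequency samples
K = 2 * np.pi * f / c                          # range wavenumbers, K = 2*pi*f/c
theta = np.linspace(0, 2 * np.pi, 180, endpoint=False)
R_a, H = 1000.0, 500.0                         # track radius and flight altitude
x_t, y_t = 3.0, -2.0                           # true target position (h = 0)

x_a, y_a = R_a * np.cos(theta), R_a * np.sin(theta)
R_p = np.sqrt((x_a - x_t) ** 2 + (y_a - y_t) ** 2 + H ** 2)   # Eq. (1) with h = 0
S = np.exp(-1j * 2 * np.outer(K, R_p))                        # echo phase, Eq. (7)

xs = ys = np.linspace(-5.0, 5.0, 21)           # ground-plane imaging grid
img = np.zeros((len(ys), len(xs)), dtype=complex)
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        R = np.sqrt((x_a - x) ** 2 + (y_a - y) ** 2 + H ** 2)      # Eq. (9)
        img[iy, ix] = np.sum(S * np.exp(1j * 2 * np.outer(K, R)))  # Eq. (8)

iy, ix = np.unravel_index(np.argmax(np.abs(img)), img.shape)
peak = (xs[ix], ys[iy])                        # the peak falls at the target
```

Because the phase in Equation (8) cancels exactly where $R(\theta) = R_p(\theta)$ for all aspects, the coherent sum peaks at the true target position on the reference plane.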

2.2. Principle of CSAR Incoherent 3D Imaging

The CSAR incoherent 3D imaging method obtains the 3D scattering information of targets through two-dimensional (2D) coherent imaging of the multi-aspect sub-aperture echoes and incoherent elevation imaging of the formed sub-aperture CSAR images. The principle of CSAR incoherent 3D imaging is described as follows. First, the CSAR echo signal in Equation (7) is divided into multi-aspect echo signals. The $i$th segmented echo signal is written as $S_i(\theta_i, K)$, and the corresponding azimuth center is

$$\theta_{ic} = (i - 1/2) \cdot \Delta\theta, \quad (10)$$

where $\Delta\theta$ represents the angular range of each sub-aperture, $\Delta\theta = 2\pi / N_s$, and $N_s$ is the number of sub-apertures.
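The sub-aperture azimuth centers of Equation (10) are straightforward to generate; a minimal sketch (the function name is ours):

```python
import numpy as np

# Sub-aperture azimuth centres per Eq. (10).
def subaperture_centers(N_s):
    d_theta = 2 * np.pi / N_s              # sub-aperture angular width
    i = np.arange(1, N_s + 1)              # sub-aperture index i = 1, ..., N_s
    return (i - 0.5) * d_theta             # theta_ic = (i - 1/2) * d_theta
```

For $N_s = 4$, this yields $\pi/4$, $3\pi/4$, $5\pi/4$, and $7\pi/4$.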
Subsequently, the multi-aspect echo signals are processed by the BP algorithm to form CSAR sub-aperture images containing height targets. The $i$th CSAR sub-aperture image is written as $I_i(x, y)$. It can be seen from the CSAR imaging mechanism that the pixel values of height targets projected into the multi-aspect images are obtained by the coherent accumulation of the scattering points on the equidistant curve. Therefore, the $i$th CSAR sub-aperture image is expressed as

$$I_i(x, y) = \int_{z_{\min}}^{z_{\max}} \sigma(x_s, y_s, z_s, \theta_{ic}) \, \mathrm{d}z_s, \quad (11)$$

where $\sigma(x_s, y_s, z_s, \theta_{ic})$ is the scattering coefficient of the scattering point $(x_s, y_s, z_s)$ on the equidistant curve in the direction of azimuth $\theta_{ic}$.
To analyze the distribution of height targets in the CSAR image, two sub-aperture tracks are selected, with azimuth centers $\theta_{1c}$ and $\theta_{2c}$, respectively. Figure 2 illustrates the pixel distribution of point $P$ in the CSAR sub-aperture images. The green and blue curves represent sub-aperture flight paths 1 and 2, respectively.
Figure 2 illustrates that the projection points $Q_1$ and $Q_2$ of the height point $P$ exhibit offsets of length $r_1$ and $r_2$, respectively, along the directions of azimuth $\theta_{1c}$ and $\theta_{2c}$ in the CSAR image. According to the geometric transformation relationship in Equation (4), the projection points in the multi-aspect CSAR sub-aperture images can be inversely mapped to 3D space.
Next, the inverse mapping process is described by taking the projection point $Q_i$ of the height target in the $i$th CSAR sub-aperture image as an example. The offset vector $r_i$ in the imaging plane of the projection point $Q_i$ about point $P$ can be solved by substituting the elevation vector $z$ into Equation (4). After substituting the coordinates of the projection point $Q_i$, the offset vector $r_i$, and the sub-aperture azimuth center $\theta_{ic}$ into Equation (3), the coordinate vectors of the grid points on the equidistant curve can be calculated as

$$x_i = x - r_i \cos\theta_{ic}, \quad y_i = y - r_i \sin\theta_{ic}, \quad (12)$$

where $r_i = z / \tan\beta_i$, and $\beta_i$ is the angle between the vector $PQ_i$ and the ground plane.
The projection points of height targets within the multi-aspect images are inversely mapped to the equidistant curves via the relationship in Equation (12) to obtain the scattering candidates in the direction of azimuth $\theta_{ic}$. Figure 3 shows the inverse mapping diagram of the projection points from the multi-aspect CSAR sub-aperture images to 3D space. The green and blue curves represent the equidistant curves of projection points $Q_1$ and $Q_2$ during inverse mapping, respectively. The green and blue rings represent the scattering candidates on the equidistant curves corresponding to projection points $Q_1$ and $Q_2$, respectively.
As illustrated in Figure 3, the scattering candidate $Q_1(z)$ at the azimuth angle $\theta_{1c}$ intersects the scattering candidate $Q_2(z)$ at the azimuth angle $\theta_{2c}$ at the red point, which is the position of the height target $P$ estimated by the inverse mapping method. After performing the inverse mapping on all projection points in the multi-aspect CSAR sub-aperture images, the frequency of occurrence of scattering candidates at each grid point is determined, which is written as $I_p$. Then, the scattering probability at each grid point is calculated as

$$P(x, y, z) = I_p / N_s, \quad (13)$$

where $N_s$ represents the number of CSAR sub-aperture images.
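The inverse mapping and voting principle of Equations (12) and (13) can be illustrated with a toy grid-based sketch (our own simplification: integer pixel grids, a common angle $\beta$ for all aspects, and unit elevation steps):

```python
import numpy as np

# Toy sketch of the IMV principle (Eqs. (12)-(13)): projection points from
# each aspect are mapped back along elevation and votes are accumulated on a
# 3D grid.
def imv_vote(proj_points, thetas, beta, grid_shape, dz=1.0):
    """proj_points[i] is a list of (x, y) pixel indices for aspect thetas[i]."""
    nx, ny, nz = grid_shape
    votes = np.zeros(grid_shape)
    for pts, th in zip(proj_points, thetas):
        for x, y in pts:
            for iz in range(nz):
                r = iz * dz / np.tan(beta)            # offset per elevation cell, Eq. (4)
                xi = int(round(x - r * np.cos(th)))   # inverse mapping, Eq. (12)
                yi = int(round(y - r * np.sin(th)))
                if 0 <= xi < nx and 0 <= yi < ny:
                    votes[xi, yi, iz] += 1
    return votes / len(thetas)                        # scattering probability, Eq. (13)
```

In this toy setting, a point at cell $(10, 10)$ with height 3 observed from four aspects accumulates a scattering probability of 1 exactly at its true 3D cell, while cells lying on only one equidistant curve receive $1/4$.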
According to the scattering probability in Equation (13) and the threshold $T_v$, it is possible to ascertain whether a scattering point exists at a given 3D grid point. Here, the threshold $T_v$ is related to the scattering characteristics of the target. Previous 3D reconstruction results indicate that when the threshold value is set to approximately 0.5, the reconstruction quality of the 3D point clouds is superior [23,25].
High-quality 3D point clouds can be acquired via the IMV method for ideal point targets with isotropic scattering characteristics, owing to their wide scattering angle. Nevertheless, the majority of artificial targets exhibit anisotropic scattering characteristics. Due to the restricted azimuth persistence, an anisotropic target has projection points with strong scattering coefficients in only a few CSAR sub-aperture images. When reconstructing the 3D point clouds of artificial targets such as buildings and vehicles, several false points occur in the 3D reconstruction results because of non-target background scattering points in the multi-aspect CSAR sub-aperture images, which can significantly impair the reconstruction quality of the 3D point clouds acquired via the IMV method. Therefore, we strive to enhance the quality of 3D point clouds for anisotropic targets in the following section.

3. 3D Point Cloud Reconstruction Method with Target Contour Constraints

In this section, a 3D point cloud reconstruction method based on inverse mapping with target contour constraints is proposed to address the issue of false points in 3D point clouds acquired through the IMV method. The proposed method calculates the range and height of inverse mapping by extracting the contour points of height targets in multi-aspect images to obtain high-quality 3D point clouds. Figure 4 shows the flowchart of the proposed method.
The processing steps of the proposed method based on inverse mapping with target contour constraints are as follows. First, CSAR sub-aperture images are obtained through the process of sub-aperture division and BP imaging, and then the sub-aperture image preprocessing is executed. Second, the pre-processed sub-aperture image is binarized to obtain the projection points and contour points of targets on the imaging plane. Then, the projection points of targets in multi-aspect images are inversely mapped to 3D space, under the constraints of inverse mapping range and height calculated with target contour information. Finally, the scattering probabilities of scattering candidates are calculated in the 3D grid, and the scattering points are then selected in multi-aspect angles to reconstruct the 3D point clouds of targets.

3.1. CSAR Sub-Aperture Image Preprocessing

After dividing the circular synthetic aperture into multiple short-curve apertures according to the CSAR data acquisition parameters, the 2D coherent imaging for multi-aspect echo data is conducted via the BP algorithm, resulting in CSAR sub-aperture images with nearly identical range and azimuth resolutions.
The varying motion errors associated with sub-aperture flight paths can result in geometric distortion and radiation distortion between multi-aspect sub-aperture images. While traditional image registration algorithms that rely on image intensity or gradient information can effectively correct geometric distortion, they are susceptible to nonlinear radiation distortion (NRD), which can impair the reliability of feature matching. The radiation-variation insensitive feature transform (RIFT) algorithm has the potential to enhance the reliability of feature detection by leveraging the phase consistency and to circumvent the constraints of gradient information for feature description by employing the maximum index map, thereby improving the robustness of NRD correction [28]. Accordingly, the geometric distortion and the radiation distortion between multi-aspect sub-aperture images are corrected by using the RIFT algorithm, which facilitates the conversion of these images to a uniform coordinate system.
It is acknowledged that background pixels near target pixels in multi-aspect images may potentially impact the reconstruction quality of 3D point clouds. Low-rank sparse decomposition (LRSD) of multi-aspect CSAR sub-aperture images is performed to separate target pixels and background pixels before inverse mapping. The projection points of the height target vary with the sub-aperture azimuth, whereas the inherent background points remain constant in multi-aspect sub-aperture images. According to the distribution of target points, background points, and speckle noise in the CSAR sub-aperture image, we construct the LRSD model for the CSAR sub-aperture image:
$$D = L + S + G, \quad \mathrm{rank}(L) \le r, \ \mathrm{card}(S) \le k, \quad (14)$$

where $D$ is the hybrid matrix formed by splicing adjacent sub-aperture images; $L$ represents the background matrix independent of the aspect angle, satisfying the low-rank constraint; $S$ represents the projection matrix of the height target, which varies with the aspect angle and has a sparse characteristic; and $G$ represents the noise matrix in the sub-aperture image. $\mathrm{rank}(L)$ and $\mathrm{card}(S)$ are the rank of the matrix $L$ and the cardinality (sparsity) of the matrix $S$, respectively, while $r$ and $k$ are the low-rank and sparse parameters.
According to the LRSD model in Equation (14), the problem of separating target pixels and background pixels can be transformed into the following optimization problem:
$$\min_{L, S} \|D - L - S\|_F^2, \quad \text{s.t.} \ \mathrm{rank}(L) \le r, \ \mathrm{card}(S) \le k, \quad (15)$$

where $\|\cdot\|_F$ represents the Frobenius norm of a matrix. The purpose of the optimization problem in Equation (15) is to minimize the extraction error of target pixels after LRSD.
Go-decomposition (GoDec) is a robust and efficient LRSD method for noisy cases [29]. We use the GoDec algorithm to solve the above optimization problem; its processing steps are as follows. Initially, the optimization problem in Equation (15) is decomposed into two subproblems:

$$L_t = \arg\min_{\mathrm{rank}(L) \le r} \|D - L - S_{t-1}\|_F^2, \quad S_t = \arg\min_{\mathrm{card}(S) \le k} \|D - L_t - S\|_F^2, \quad (16)$$

where $L_t$ and $S_t$ are the global solutions of the two subproblems, respectively.
Then, low-rank and sparse approximations based on bilateral random projections (BRP) are used to solve the subproblems in Equation (16) alternately. The low-rank approximate matrix $L_t$ and the sparse approximate matrix $S_t$ of the hybrid matrix $D$ are

$$L_t = Q_1 \left[ R_1 (A_2^T Y_1)^{-1} R_2^T \right]^{\frac{1}{2q+1}} Q_2^T, \quad S_t = P_\Omega(D - L_t), \quad (17)$$

respectively, where $q$ is the power exponent used to adjust the estimation error of the low-rank matrix $L_t$. $Q_1$ and $R_1$ are the QR decomposition results of the right random projection matrix $Y_1$, and $Q_2$ and $R_2$ are the QR decomposition results of the left random projection matrix $Y_2$. The BRP matrices $Y_1$ and $Y_2$ are constructed from the random matrices $A_1$ and $A_2 \in \mathbb{R}^{r \times n}$, respectively. $m$ and $n$ are the numbers of rows and columns of the hybrid matrix $D$, respectively. $P_\Omega(\cdot)$ represents the sampling projection of the matrix onto the set $\Omega$, which is the non-zero subset comprising the first $k$ largest elements of the matrix $D - L_t$.
The sparse component of multi-aspect CSAR sub-aperture images can be solved by using the GoDec algorithm, which represents the projection points of targets in multi-aspect images. Subsequently, the 3D point clouds of height targets are reconstructed through the implementation of inverse mapping for the obtained projection points.
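The alternating scheme of Equation (16) can be prototyped compactly. The sketch below substitutes a truncated SVD for the rank-$r$ projection and hard thresholding for the cardinality projection; the paper's GoDec solver instead uses the BRP approximation of Equation (17) to avoid the SVD cost:

```python
import numpy as np

# Naive alternation of Eq. (16): a rank-r projection via truncated SVD and a
# cardinality-k projection keeping the k largest-magnitude residual entries.
# SVD is used here only for clarity; GoDec uses BRP (Eq. (17)) for speed.
def lrsd(D, r, k, n_iter=20):
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Low-rank step: best rank-r approximation of D - S.
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :r] * s[:r]) @ Vt[:r]
        # Sparse step: keep the k largest-magnitude entries of D - L.
        R = D - L
        idx = np.argsort(np.abs(R), axis=None)[-k:]
        S = np.zeros_like(D)
        S.flat[idx] = R.flat[idx]
    return L, S
```

On a small synthetic test (a rank-1 background plus a few spikes) this alternation converges quickly; applied to a stack of registered sub-aperture images, $L$ collects the aspect-invariant background and $S$ the aspect-dependent target projections.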

3.2. Inverse Mapping with Target Contour Constraints

For anisotropic targets with limited azimuth persistence, due to the influence of non-target background points in the CSAR sub-aperture image, false points will be generated outside the actual scattering points. This phenomenon occurs when the IMV method is used to reconstruct the 3D point clouds for targets. Therefore, it is necessary to consider the contour distribution of the anisotropic target in multi-aspect CSAR sub-aperture images, which can constrain the coordinate range of scattering candidates formed by inverse mapping, thereby improving the quality of 3D point clouds. The processing steps for inverse mapping with target contour constraints are as follows.
First, the projection points of targets on the imaging plane are extracted from the preprocessed CSAR sub-aperture image. A pixel whose amplitude exceeds the threshold $T_b$ is designated as a projection point of the target and assigned a value of one; a pixel whose amplitude is below the threshold $T_b$ is designated as a non-target background point and assigned a value of zero. For vehicle and building targets, an accurate projected image of the target can be obtained when the threshold $T_b$ is set to twice the average pixel value of the CSAR sub-aperture image. To reduce the influence of amplitude-distribution differences among multi-aspect CSAR sub-aperture images on the selection of projection points, we convert the absolute amplitudes of the CSAR sub-aperture images to relative amplitudes according to Equation (18), thereby obtaining a more accurate projected image of the targets. The projection points of targets in the obtained projected image $I_b(x, y)$ are used as the input for inverse mapping with target contour constraints:

$$I_a(x, y) = I_i(x, y) / \mathrm{mean}[I_i(x, y)], \quad (18)$$

where $I_a(x, y)$ is the CSAR sub-aperture image with relative amplitudes, and $\mathrm{mean}[\cdot]$ represents the mean function.
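A minimal sketch of this normalization and binarization step (the factor of two follows the $T_b$ suggestion above; the function name is ours):

```python
import numpy as np

# Relative-amplitude normalisation (Eq. (18)) followed by binarisation with
# T_b = twice the mean pixel value, as suggested for vehicle/building targets.
def project_points(I_i, factor=2.0):
    I_a = I_i / np.mean(I_i)                 # Eq. (18): relative amplitudes, mean = 1
    return (I_a > factor).astype(np.uint8)   # I_b: 1 marks a target projection point
```

Because the normalized image has unit mean, the same factor works across sub-aperture images with different absolute amplitude levels.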
Next, the contour points of targets on the imaging plane are extracted from the preprocessed CSAR sub-aperture image. The pixel amplitude corresponding to an elevated scattering point is relatively weak in the CSAR sub-aperture image, while the pixel amplitude corresponding to a scattering point on the imaging plane is relatively strong. Therefore, we binarize the CSAR sub-aperture image via the threshold $T_s$ to obtain the image $I_s(x, y)$, which consists of the strong scattering points of targets on the imaging plane. Here, the selection of the threshold $T_s$ is related to the distribution of scattering coefficients of targets on the imaging plane. The desired contour image, comprising strong scattering points, can be obtained by setting a semi-automatic threshold based on the average pixel value of the CSAR sub-aperture image and a weighting coefficient specific to each type of target.
The morphological gradient (MG) method extracts the target contour by subtracting the eroded image from the dilated image [30]. The computational complexity of the MG method is about $O(N_s \times N_x \times N_y \times (2N_m + 1))$, where $N_x \times N_y$ is the number of CSAR imaging grid points, and $N_m$ is the number of iterations of the dilation and erosion operations. The MG method is an efficient and effective method for extracting the contour image of a conventional vehicle. However, due to the influence of interference points caused by strong scattering points with height, some non-contour points remain outside the target contour extracted by the MG method for complex targets, affecting the extraction accuracy of the target contour. The fuzzy C-means method combining spatial neighborhood information (FCM-SNI) employs an objective function with neighborhood constraints to determine the membership of the strong scattering points after setting the cluster centers [31]. The computational complexity of FCM-SNI is about $O(N_s \times N_p \times N_c \times T_{\max})$, where $N_p$ is the number of strong scattering points in the CSAR sub-aperture image, $N_c$ is the number of cluster centers, and $T_{\max}$ is the maximum number of iterations. These computational complexity results indicate that FCM-SNI has a computational efficiency comparable to that of the MG method. Therefore, FCM-SNI is used to eliminate the non-contour points outside the target from the strong scattering image.
The cluster centers of the strong scattering image are initially determined according to the geometric center of the target and the number of targets. Subsequently, the membership $u_{cj}$ of the $j$th projection point in the strong scattering image to the $c$th cluster center can be solved by minimizing the objective function

$$J_a = \sum_{j=1}^{N_p} \sum_{c=1}^{N_c} u_{cj}^d \left[(x_j - x_c)^2 + (y_j - y_c)^2 + \frac{s_j}{N_w} \sum_{w=1}^{N_w} \left((x_w - x_c)^2 + (y_w - y_c)^2\right)\right], \quad (19)$$

where $N_w$ is the number of strong scattering points in the neighborhood, and $(x_w, y_w)$ is the coordinate of the $w$th strong scattering point in the neighborhood. $d$ is the fuzzy weighting coefficient, and $s_j$ is the neighborhood constraint operator of the $j$th strong scattering point. The expression of the neighborhood constraint operator is

$$s_j = \sum_{w=1}^{N_w} \left[1 + (x_j - x_w)^2 + (y_j - y_w)^2\right]^{-1}. \quad (20)$$
Then, the membership update equation of the $j$th strong scattering point is

$$u_{cj} = \frac{\left[(x_j - x_c)^2 + (y_j - y_c)^2 + s_j \left((\bar{x}_j - x_c)^2 + (\bar{y}_j - y_c)^2\right)\right]^{-\frac{1}{d-1}}}{\sum_{c=1}^{N_c} \left[(x_j - x_c)^2 + (y_j - y_c)^2 + s_j \left((\bar{x}_j - x_c)^2 + (\bar{y}_j - y_c)^2\right)\right]^{-\frac{1}{d-1}}}, \quad (21)$$

where $\bar{x}_j$ and $\bar{y}_j$ are the coordinate averages of the projection points located in the neighborhood of the $j$th strong scattering point. The contour points of the target corresponding to the $c$th cluster center can be obtained by judging whether the strong scattering points belong to the $c$th cluster center according to their membership degrees.
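The membership update of Equation (21) is a standard fuzzy C-means step with an extra neighborhood term. A vectorized sketch, in which the cluster centers, neighborhood means, and the constraint operator $s_j$ of Equation (20) are assumed given (the function name and array layout are our own choices):

```python
import numpy as np

# Vectorized membership update of Eq. (21) for FCM-SNI.
def memberships(points, nbr_means, centers, s, d=2.0):
    """points, nbr_means: (N_p, 2); centers: (N_c, 2); s: (N_p,)."""
    d2 = ((points[:, None, :] - centers[None]) ** 2).sum(-1)      # |x_j - x_c|^2 terms
    d2n = ((nbr_means[:, None, :] - centers[None]) ** 2).sum(-1)  # neighbourhood terms
    g = (d2 + s[:, None] * d2n) ** (-1.0 / (d - 1.0))
    return g / g.sum(axis=1, keepdims=True)                       # rows sum to one
```

A point close to one cluster center receives a membership near one for that center, so hard assignment by the largest membership separates contour points belonging to different targets.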
After eliminating the non-contour points in the strong scattering image $I_s(x, y)$ by using the acquired memberships, the contour image $I_c(x, y)$ of the target on the imaging plane can be obtained. According to the distribution of target contour points in the contour image $I_c(x, y)$, the coordinate ranges of inverse mapping along the X-axis and the Y-axis can be calculated as $[x_{es}, x_{eb}]$ and $[y_{es}, y_{eb}]$, respectively. Moreover, we calculate the height coordinate of inverse mapping based on the distance from the projection point in the image $I_b(x, y)$ to the contour point in the image $I_c(x, y)$. The distance $R_m$ between the projection point and the contour point with the largest amplitude in the direction of the aspect angle $\theta_{ic}$ is

$$R_m = \sqrt{(x - x_{icm})^2 + (y - y_{icm})^2}, \quad (22)$$

where $(x, y)$ is the 2D coordinate of the projection point, and $(x_{icm}, y_{icm})$ is the 2D coordinate of the corresponding contour point.
By substituting the distance in Equation (22) into the geometric transformation relationship in Equation (4), the maximum height coordinate for inverse mapping of the target projection point can be calculated as
$$z_m = R_m \cdot \tan\beta. \quad (23)$$
Then, we perform inverse mapping on the projection points of the target in the image $I_b(x, y)$ under the constraints of the inverse mapping range and height to obtain the scattering candidates within the target contour. The scattering probability at each 3D grid point after the inverse mapping with target contour constraints can be calculated as
$$P(x_p, y_p, z_p) = \sum_{n=1}^{N_s} w_n \cdot I_{b,n}(x_p, y_p, z_p) \Big/ \sum_{n=1}^{N_s} w_n, \quad (24)$$

where $(x_p, y_p, z_p)$ is the 3D coordinate of the scattering candidate in the 3D point cloud grid, with $x_p \in [x_{es}, x_{eb}]$, $y_p \in [y_{es}, y_{eb}]$, and $z_p \in [0, z_m]$. $w_n \in (0, 1]$ is the weighting coefficient related to the quality of the $n$th projected image, and $I_{b,n}(x_p, y_p, z_p)$ represents the value of the scattering candidate corresponding to the projection point in the $n$th projected image after inverse mapping with target contour constraints. When the coordinates of a scattering candidate satisfy the contour constraints, its value is assigned to one; otherwise, it is assigned to zero.
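A literal reading of Equation (24) for a single candidate cell can be sketched as follows; the bounding-box form of the contour constraint and all names are our illustrative simplifications:

```python
import numpy as np

# Weighted scattering probability of Eq. (24) for one candidate cell.
# I_b_n[n] is 1 when the candidate from image n satisfies the contour
# constraints, otherwise 0; w[n] grades the quality of the n-th projected image.
def scattering_probability(I_b_n, w, bounds, p):
    (x_es, x_eb), (y_es, y_eb), z_m = bounds
    x_p, y_p, z_p = p
    # Candidates outside the contour-derived box cannot be scattering points.
    if not (x_es <= x_p <= x_eb and y_es <= y_p <= y_eb and 0 <= z_p <= z_m):
        return 0.0
    return float(np.dot(w, I_b_n) / np.sum(w))
```

The quality weights let reliable sub-aperture images dominate the vote, while the box test implements the range and height constraints derived from the contour image.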
Figure 5 shows the inverse mapping diagram of the projection points in two aspect angles under the constraints of the target contour. In Figure 5, the blue and green solid points represent the projection points in two aspect angles, respectively. The corresponding scattering candidates are located on the blue and green inverse mapping curves. The blue and green dotted circles represent false points located within the target contour, which will not affect the quality of 3D point clouds, while the red and yellow solid points indicate the scattering candidates located on the target surface. The 3D point cloud reconstructed by the IMV method will include black solid points, which are false points outside the target surface, while the inverse mapping method with target contour constraints can eliminate the above false points. All scattering candidates of the target can be obtained through the implementation of inverse mapping with target contour constraints on multi-aspect CSAR sub-aperture images.
Finally, we calculate the scattering probabilities of the obtained scattering candidates and then select the actual scattering points of the target by comparing the probabilities calculated across the aspect angles. Among the candidates generated by the projection points of a target at a given azimuth, the one with the highest scattering probability is selected as the actual scattering point. Note that if the maximum scattering probability of the projection points for a given target is less than the selection threshold, all of its scattering candidates are deemed false. The actual scattering points of the target are identified by performing this selection on all scattering candidates across the aspect angles. The elevation coordinates of the actual scattering points are then assigned to the corresponding 3D grid points to realize high-quality reconstruction of the target's 3D point cloud.
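The selection of the actual scattering point among the candidates generated by one projection point can be sketched as follows (the function name and default threshold are illustrative; the experiments below use thresholds such as 0.35 and 0.45 depending on the method):

```python
import numpy as np

def select_scattering_point(probs, z_grid, threshold=0.35):
    """Return the elevation of the candidate with the highest scattering
    probability, or None when even the maximum falls below the selection
    threshold (all candidates are then deemed false)."""
    probs = np.asarray(probs, dtype=float)
    k = int(np.argmax(probs))
    if probs[k] < threshold:
        return None
    return z_grid[k]
```

Applying this selection to the candidates of every projection point, in every aspect, yields the final set of actual scattering points.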
The processing steps for reconstructing the 3D point cloud of an observation scene with multiple targets are as follows. First, the pixel areas of the individual targets are segmented from the multi-aspect CSAR sub-aperture images. Second, threshold coefficients are set for the different target types to extract accurate contour images. Then, the method based on inverse mapping with target contour constraints described above is applied to reconstruct the 3D point cloud of each target. Finally, the obtained 3D point clouds of all targets are placed at their corresponding positions in the 3D point cloud grid of the entire observation scene.

3.3. Computational Complexity Analysis

The computational complexity of the proposed method is the sum of the costs of contour extraction and inverse mapping. The complexity order of contour extraction is $O(N_s \times N_p \times N_c \times T_{ce})$, where $T_{ce}$ is the number of iterations. The complexity order of inverse mapping is $O(N_s \times N_i \times N_z \times T_{im})$, where $N_i$ is the number of projected points, $N_z$ is the number of elevation points of the 3D point cloud grid, and $T_{im}$ is the operation time of inverse mapping for each projected point. Therefore, the complexity order of the proposed method is $O(N_s \times (N_p N_c T_{ce} + N_i N_z T_{im}))$. The complexity orders of the VAS method and the IMV method are $O(N_s \times N_x \times N_y \times N_z \times T_{vas})$ and $O(N_s \times N_i \times N_z \times T_{im})$, respectively, where $N_x \times N_y$ is the number of CSAR imaging grid points and $T_{vas}$ is the operation time of sculpturing for each voxel point. Since the numbers of contour points and projected points are far fewer than the number of pixels in the CSAR image, the computational complexity of the proposed method is much lower than that of the VAS method. In addition, the contour points usually constitute only a small fraction of the projected points; in this case, the computational complexity of the proposed method is about twice that of the IMV method.
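The three orders can be compared numerically. All counts below are illustrative assumptions (not the parameters of the experiments in Section 4), chosen so that the contour points are a small fraction of the projected points:

```python
# Illustrative operation counts for the three complexity orders.
N_s = 72                 # sub-aperture images (360 deg / 5 deg)
N_x = N_y = 180          # CSAR imaging grid points (assumption)
N_z = 26                 # elevation bins of the 3D grid (assumption)
N_i = N_p = 2000         # projected points per image (assumption)
N_c = 30                 # contour points, far fewer than N_p (assumption)
T_ce = T_im = T_vas = 1  # unit per-operation costs (assumption)

vas      = N_s * N_x * N_y * N_z * T_vas            # VAS order
imv      = N_s * N_i * N_z * T_im                   # IMV order
proposed = N_s * (N_p * N_c * T_ce + N_i * N_z * T_im)

print(f"proposed / IMV = {proposed / imv:.2f}")  # on the order of 2
print(f"proposed / VAS = {proposed / vas:.3f}")  # well below 1
```

Under these assumed counts the proposed method costs roughly twice the IMV method and a small fraction of the VAS method, consistent with the analysis above.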

4. Experiments and Results

In this section, the Gotcha volumetric SAR data set collected by the Air Force Research Laboratory (AFRL) and the X-band CSAR measured data collected by the Aviation Industry Corporation of China (AVIC) Lei Hua Electronic Technology Research Institute were used to verify the effectiveness of the proposed method for reconstructing the 3D point cloud of the target. All experiments in this section were implemented in MATLAB R2018b.

4.1. 3D Point Cloud Reconstruction Results of Vehicles

To verify the performance of the proposed method, the Gotcha volumetric SAR data set, which was collected by an eight-pass X-band CSAR system, was used to reconstruct the 3D point clouds for vehicle targets. Only single-pass CSAR data were selected in this experiment. The system parameters are listed in Table 1.
The selected observation scene includes several vehicles parked in a parking lot with a size of 36 m × 36 m. Figure 6 shows the optical image of the parking lot, with cars labeled A to F and a forklift labeled C2. We used the BP algorithm to process the CSAR echo with an integral angle of 5° to acquire multi-aspect CSAR sub-aperture images. The theoretical resolution of the obtained sub-aperture image was about 0.324 m in range and 0.247 m in azimuth. Consequently, the pixel interval of multi-aspect images was set to 0.2 m × 0.2 m.
Figure 7 shows the 360° incoherently superposed CSAR images of the parking lot. Since the optical image was not captured simultaneously with the CSAR data acquisition, the unlabeled vehicle in the optical image is absent from the CSAR images. The CSAR image in Figure 7a was formed by the incoherent superposition of multi-aspect 5° sub-aperture images, while that in Figure 7b was formed by incoherently superposing the sparse components of multi-aspect 10° sub-aperture images after LRSD. As shown in Figure 7a, the amplitude of the vehicle targets obtained through incoherent superposition of the sub-aperture images was comparable to that of the lawn background, which may impact the quality of the 3D point clouds for the vehicle targets. In Figure 7b, by contrast, the amplitude of the lawn background was significantly smaller than that of the vehicle targets, which reduced the influence of background points on the reconstruction quality of the 3D point clouds.
Figure 8 shows the binary images of the vehicle targets in the parking lot. In Figure 8a, the projected image of the vehicle targets includes the projection points of all scattering points on the imaging plane; the threshold for acquiring it was 0.3. Performing inverse mapping on the selected projection points yields a denser 3D point cloud. Figure 8b shows the contour image of the vehicle targets in the imaging plane extracted by the MG method, with a contour extraction time of about 0.035 s. Figure 8c shows the contour image extracted by FCM-SNI, with a contour extraction time of about 0.045 s. The threshold coefficients for extracting the contour images of the cars and the forklift were 1.8 and 2.7, respectively. Figure 8b,c show that FCM-SNI extracted more accurate contour points for the vehicle targets at a slightly longer time cost than the MG method. The range and height of inverse mapping for the projection points in Figure 8a can be calculated from the distribution of the contour points in the imaging plane. Under the constraints of this inverse mapping range and height, the proposed method eliminates the false points outside the target that would otherwise be formed by inversely mapping the projection points in Figure 8a into 3D space.
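A minimal sketch of deriving the inverse-mapping bounds from the contour-point distribution follows. The bounding-box form and the externally supplied `z_max` are assumptions for illustration; in the paper, the height limit follows from the data-acquisition geometry:

```python
import numpy as np

def mapping_bounds(contour_points, z_max):
    """Planar bounding box [x_es, x_eb] x [y_es, y_eb] of the contour
    points, paired with the height interval [0, z_max]."""
    pts = np.asarray(contour_points, dtype=float)
    x_es, y_es = pts.min(axis=0)
    x_eb, y_eb = pts.max(axis=0)
    return (x_es, x_eb), (y_es, y_eb), (0.0, z_max)

def satisfies_constraints(candidate, bounds):
    """True when candidate (x, y, z) lies within all three intervals."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(candidate, bounds))
```

Scattering candidates produced by inverse mapping are kept only when `satisfies_constraints` holds, which is what removes false points outside the target contour.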
Figure 9 shows the 3D point cloud results of vehicle targets in the parking lot. According to the pixel interval of the CSAR sub-aperture image and the data acquisition geometry, the height interval of the 3D grid was set to 0.2 m. According to the parking lot scene, the size of the 3D grid was set to 36 m × 36 m × 5 m. Figure 9a–c show the 3D point cloud results of vehicle targets reconstructed based on 360° sub-aperture data. Figure 9d–f show the 3D point cloud results of vehicle targets reconstructed under the conditions of lacking partial sub-aperture data. In this case, only 180° discontinuous sub-aperture data collected at multiple aspects were processed. The 3D point clouds of vehicle targets depicted in Figure 9a,d were both obtained by the VAS method in [23]. The 3D point clouds of vehicle targets depicted in Figure 9b,e were both obtained by the IMV method in [25]. Due to the large number of scattering candidates generated by inverse mapping and voting, the threshold of the scattering probability was set to 0.45. The 3D point clouds of vehicle targets depicted in Figure 9c,f were both obtained by the proposed method. Due to the small number of scattering candidates generated by the proposed method, the threshold of the scattering probability was set to 0.35.
As shown in Figure 9a–c, the quality of the 3D point clouds reconstructed by the proposed method was better than that of the VAS and IMV methods. Figure 9d,e show that the quality of the 3D point clouds reconstructed by the VAS and IMV methods deteriorated significantly when partial sub-aperture data were lacking. Figure 9f shows that, although some 3D scattering points of the vehicle targets were lost at partial aspects, the quality of the 3D point clouds reconstructed by the proposed method remained comparable to that reconstructed from all 360° sub-aperture data. The 3D point clouds in Figure 9 demonstrate the effectiveness of the proposed method in reconstructing 3D point clouds for vehicle targets, whether all sub-aperture data are available or partial sub-aperture data are lacking.
To further verify the performance of reconstructing 3D point clouds for vehicle targets via the proposed method, we analyzed the 3D point cloud results of different types of vehicle targets. Figure 10 depicts the optical images of vehicles D, F, and C2. The corresponding CSAR sub-aperture images were formed with a size of 8 m × 6 m. The 3D point cloud reconstruction results of vehicles D, F, and C2 are shown in Figure 11, Figure 12 and Figure 13, respectively. The size of the 3D point cloud grid was 8 m × 6 m × 2.2 m. The 3D point cloud results in Figure 11a–c, Figure 12a–c, and Figure 13a–c were obtained by the VAS method, the IMV method, and the proposed method based on all 360° sub-aperture data, respectively. The 3D point cloud results in Figure 11d–f, Figure 12d–f, and Figure 13d–f were obtained by the VAS method, the IMV method, and the proposed method in the case of lacking partial sub-aperture data, respectively.
We can see from Figure 11, Figure 12 and Figure 13 that the 3D point clouds of vehicles D, F, and C2 reconstructed by the proposed method were more accurate than those of the VAS and IMV methods. As seen in Figure 11e, Figure 12e and Figure 13e, the 3D point clouds reconstructed by the IMV method deteriorated significantly, and as shown in Figure 11d, Figure 12d and Figure 13d, the scattering points at partial aspects were not recovered by the VAS method. The 3D point clouds in Figure 11f, Figure 12f and Figure 13f indicate that the quality of the 3D point clouds reconstructed by the proposed method remained credible even when partial sub-aperture data were lacking. It is worth noting that the strong scattering points located between the body and the rack in the 3D point clouds of vehicle C2 reconstructed by the proposed method were caused by multiple reflections between them.
Table 2 and Table 3 compare the actual sizes of the selected vehicles with their estimated sizes, as restored through the above three methods. The quantitative evaluation bar charts of 3D point cloud reconstruction results of vehicle targets are depicted in Figure 14. As shown in Table 2 and Figure 14a–c, the size values of the 3D point cloud restored via the proposed method were closer to the actual value than those of the VAS method and the IMV method for different types of vehicle targets. It can be seen from Table 3 and Figure 14d–f that the size values of the 3D point cloud restored by the proposed method were also more robust than those of the VAS method and the IMV method in the case of lacking partial sub-aperture data. Since the fork height of vehicle C2 was shorter than an altitude resolution unit, only its body part was reconstructed in the 3D point clouds of vehicle C2, resulting in the restored length being significantly less than the actual length. Moreover, the time costs of the VAS method, the IMV method, and the proposed method in cases of all sub-aperture data and lacking partial sub-aperture data are compared in Table 4. The results demonstrate that the time cost of the proposed method was less than that of the VAS method and was slightly longer than that of the IMV method.

4.2. The 3D Point Cloud Reconstruction Results of Buildings

To examine the universality of the proposed method, the CSAR measured data collected by the single-pass airborne CSAR system were employed to reconstruct the 3D point clouds for building targets. The system parameters are listed in Table 5.
In this experiment, the observation scene was the Ziyu Guandi community located in Weinan, Shaanxi Province, China. The selected scene size was 360 m × 360 m. Figure 15 shows the optical image of the selected community, which included multiple building targets. Under the condition of the 5° sub-aperture integral angle, the theoretical resolution of multi-aspect CSAR images acquired via the BP algorithm was about 0.316 m in range and 0.201 m in azimuth. Therefore, the pixel interval of the CSAR image was set to 0.2 m × 0.2 m.
After performing LRSD on a series of registered 5° sub-aperture images, the sparse components of multi-aspect 10° sub-aperture images could be obtained. Figure 16a shows the CSAR image of the selected community, formed by incoherently superposing the sparse components of multi-aspect 10° sub-aperture images. Figure 16b is the projected image of the building targets in the imaging plane; the threshold for acquiring it was 0.25. Figure 16c,d are the contour images of the building targets extracted by the MG method and FCM-SNI, with contour extraction times of 2.59 s and 2.26 s, respectively. The threshold coefficients for extracting the contour images of the larger and smaller buildings were 3.3 and 2.1, respectively. Figure 16c,d show that FCM-SNI extracted more accurate contour points for the building targets than the MG method.
Figure 17 and Figure 18 show the reconstruction results of 3D point clouds for building targets in cases of all 360° sub-aperture data and lacking partial sub-aperture data, respectively. According to the pixel interval of the CSAR sub-aperture image and the data acquisition geometry, the elevation interval of the 3D grid was set to 0.6 m. Furthermore, in consideration of the selected building scene, the size of the 3D point cloud grid was set to 360 m × 360 m × 81 m. The 3D point clouds of building targets in Figure 17a and Figure 18a were reconstructed by the VAS method in [23]. The 3D point clouds of building targets in Figure 17b and Figure 18b were reconstructed by the IMV method in [25]. The 3D point clouds of building targets in Figure 17c and Figure 18c were reconstructed by the proposed method. The scattering probability threshold of these methods was set to 0.33. In addition, some different types of building targets marked with colored boxes in Figure 17 and Figure 18 were selected to show more detailed visual comparison.
Due to the influence of non-target background points in the vicinity of buildings, the phenomenon of several adjacent buildings being merged into a single building occurred in the 3D point clouds reconstructed by the VAS method and the IMV method as shown in Figure 17a,b. Figure 17c demonstrates that the proposed method can accurately reconstruct the 3D point clouds of multiple buildings in the selected community by eliminating false points around buildings. Figure 17d–f show the top view of the 3D point clouds for building targets in the selected community. As illustrated in Figure 17, the proposed method can effectively enhance the reconstruction quality of 3D point clouds for building targets under the condition of using all 360° sub-aperture data.
As shown in Figure 18a,b, more false points appeared in the 3D point clouds of building targets reconstructed by the VAS method and the IMV method in the case of lacking partial sub-aperture data, resulting in deteriorated quality of the 3D point clouds for building targets. It can be seen from Figure 18c that the proposed method could also effectively eliminate false points in the 3D point clouds of building targets. Figure 18d–f show the top view of the 3D point clouds for building targets in the selected community. As illustrated in Figure 18, the proposed method could also improve the reconstruction quality of 3D point clouds for building targets under the condition of only using 180° sub-aperture data.
To further verify the performance of reconstructing 3D point clouds for building targets via the proposed method, we analyzed the 3D point cloud results of different types of building targets, which are marked in Figure 15 with a red oval, a green circle, and a yellow box. Figure 19 depicts the optical images of the selected building, donut-shaped corridor, and hotel lobby. Their CSAR sub-aperture images were formed with sizes of 120 m × 80 m, 40 m × 40 m, and 70 m × 100 m. The 3D point cloud reconstruction results of the selected building, the donut-shaped corridor, and the hotel lobby are shown in Figure 20, Figure 21 and Figure 22, respectively. The height of the 3D point cloud grid for the selected building was 81 m. The heights of the 3D point cloud grid for the donut-shaped corridor and the hotel lobby were both 15 m. The 3D point cloud results of the selected building targets in Figure 20a–c, Figure 21a–c, and Figure 22a–c were obtained by the VAS method, the IMV method, and the proposed method in the case of all 360° sub-aperture data, respectively. The 3D point cloud results in Figure 20d–f, Figure 21g–i, and Figure 22g–i were obtained by the VAS method, the IMV method, and the proposed method in the case of lacking partial sub-aperture data, respectively.
As illustrated in Figure 20a,b, the 3D point clouds of the selected building reconstructed by the VAS method and the IMV method contained some false points, resulting in the shape of the obtained 3D point clouds being larger than the actual size of the selected building. In contrast, as illustrated in Figure 20c, the shape of the 3D point cloud reconstructed by the proposed method was closer to the actual size of the selected building. The 3D point clouds of the selected building reconstructed based on 180° sub-aperture data are shown in Figure 20d–f. It can be seen from Figure 20d,e that the number of false points in the 3D point clouds reconstructed by the VAS method and the IMV method increased in the case of lacking partial sub-aperture data. Although a few false points occurred in the 3D point cloud reconstructed by the proposed method, the slight deterioration in quality was acceptable compared with the VAS and IMV methods. The 3D point cloud results depicted in Figure 20 indicated that the proposed method could improve the 3D point cloud quality of the selected building either in the case of all sub-aperture data or lacking partial sub-aperture data.
It can be seen from Figure 21a,b that the 3D point clouds of the selected corridor reconstructed by the VAS method and the IMV method were disturbed to varying degrees by the scattering points of other buildings. In addition, the open gap of the donut-shaped corridor was also covered by its nearby interference points. In contrast, as shown in Figure 21c, the 3D point cloud reconstructed by the proposed method could accurately recover the shape of the selected corridor. The 3D point clouds of the selected corridor reconstructed based on 180° sub-aperture data are shown in Figure 21g–i. We can see from Figure 21g,h that the quality of the 3D point clouds of the selected corridor reconstructed by the VAS method and the IMV method deteriorated because of the large number of false points around the corridor. Although some false points also appeared in the 3D point cloud reconstructed by the proposed method, the open gap of the donut-shaped corridor was still recognizable. The 3D point cloud results shown in Figure 21 further indicated that the proposed method could obtain a high-quality 3D point cloud of the selected corridor either in the case of all sub-aperture data or lacking partial sub-aperture data.
We can see from Figure 22a,b that the 3D point clouds of the selected hotel lobby reconstructed by the VAS method and the IMV method contained many false points caused by the surrounding buildings, fountains, and trees, seriously affecting the shape recovery of the hotel lobby. In contrast, as illustrated in Figure 22c, the 3D point cloud reconstructed by the proposed method described the shape of the hotel lobby without interference from other targets; in particular, the columns at the entrance of the lobby could be restored. The 3D point clouds of the hotel lobby reconstructed based on 180° sub-aperture data are shown in Figure 22g–i. It can be seen from Figure 22g,h that the 3D point clouds reconstructed by the VAS method and the IMV method were mixed with false points from other targets in the case of lacking partial sub-aperture data. Although the 3D point cloud reconstructed by the proposed method was also affected by false points from other targets, the shape of the hotel lobby could still be approximately recovered. The 3D point cloud results depicted in Figure 22 indicate that the 3D reconstruction performance of the proposed method applies to different types of building targets either in the case of all sub-aperture data or lacking partial sub-aperture data.
Table 6 and Table 7 compare the actual sizes of the selected building targets with their estimated sizes obtained through the above three methods. The quantitative evaluation bar charts of 3D point cloud reconstruction results of building targets are depicted in Figure 23. We can see from Table 6 and Figure 23a–c that the length and width errors between the actual sizes of the selected building targets and the sizes of the 3D point clouds reconstructed via the proposed method were less than 3 m, while the height errors were less than 1 m, which verified the performance of reconstructing 3D point clouds for building targets by the proposed method. It can be seen from Table 7 and Figure 23d–f that the sizes of the 3D point cloud of building targets restored by the proposed method were also closer to their actual sizes than that of the VAS method and the IMV method in the case of lacking partial sub-aperture data. Moreover, the time costs of reconstructing the 3D point clouds of building targets by the VAS method, the IMV method, and the proposed method in cases of all sub-aperture data and lacking partial sub-aperture data are compared in Table 8. The results demonstrate that the proposed method could reconstruct the high-quality 3D point clouds of building targets with a time cost far less than that of the VAS method and slightly longer than that of the IMV method.

5. Conclusions

In this paper, a novel 3D point cloud reconstruction method for single-pass CSAR based on inverse mapping with target contour constraints is proposed. Following the extraction of both projection points and the contour points of the target on the imaging plane from multi-aspect CSAR sub-aperture images, the proposed method eliminates false points in the 3D point cloud reconstruction result through the implementation of the inverse mapping on the extracted projection points, under the constraints of the inverse mapping range and height calculated by the extracted contour points. Furthermore, the actual scattering points of the target can be accurately reconstructed by comparing the scattering probabilities of the scattering candidates in multi-aspect angles. The experimental results of X-band airborne CSAR data demonstrated that in comparison with the existing inverse mapping and voting method, the proposed method was capable of obtaining higher-quality 3D point clouds for vehicle targets and building targets. In the future, the scattering property of the target will be incorporated to further enhance the reconstruction quality of the 3D point cloud for targets whose scattering information is lost in some aspects, which may contribute to the realization of high-quality 3D point cloud reconstruction for complex scenes.

Author Contributions

Conceptualization, Q.Z. and J.S.; methodology, Q.Z. and F.T.; software, Q.Z.; validation, Q.Z., J.S. and F.T.; formal analysis, J.S.; investigation, Q.Z.; resources, J.S. and W.H.; data curation, J.S. and Y.L.; writing—original draft preparation, Q.Z.; writing—review and editing, J.S. and Y.L.; visualization, Q.Z. and F.T.; supervision, Y.L.; project administration, J.S. and W.H.; funding acquisition, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, Grant 62131001 and Grant 62171029.

Data Availability Statement

The Gotcha volumetric SAR data set is available at https://www.sdms.afrl.af.mil. The CSAR image data supporting the conclusions of this article are available on request from the corresponding author. The raw data are not publicly available due to the confidentiality clause of some projects.

Acknowledgments

The authors would like to thank the Air Force Research Laboratory for providing the Gotcha volumetric SAR data set to the community.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Soumekh, M. Reconnaissance with Slant Plane Circular SAR Imaging. IEEE Trans. Image Process. 1996, 5, 1252–1265. [Google Scholar] [PubMed]
  2. Lee-Elkin, F. Autofocus for 3D Imaging. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XV, Orlando, FL, USA, 17–18 March 2008. [Google Scholar]
  3. Lee-Elkin, F.; Potter, L. An Algorithm for Wide Aperture 3D SAR Imaging with Measured Data. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XVIII, Orlando, FL, USA, 27–28 April 2011. [Google Scholar]
  4. Moore, L.; Potter, L.; Ash, J. Three-Dimensional Position Accuracy in Circular Synthetic Aperture Radar. IEEE Aerosp. Electron. Syst. Mag. 2014, 29, 29–40. [Google Scholar]
  5. Ponce, O.; Prats-Iraola, P.; Pinheiro, M.; Rodriguez-Cassola, M.; Scheiber, R.; Reigber, A.; Moreira, A. Fully Polarimetric High-Resolution 3-D Imaging with Circular SAR at L-Band. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3074–3090. [Google Scholar]
  6. Ertin, E.; Austin, C.; Sharma, S.; Moses, R.; Potter, L. GOTCHA Experience Report: Three-Dimensional SAR Imaging with Complete Circular Apertures. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XIV, Orlando, FL, USA, 10–11 April 2007. [Google Scholar]
  7. Ponce, O.; Prats-Iraola, P.; Scheiber, R.; Reigber, A.; Moreira, A. First Airborne Demonstration of Holographic SAR Tomography with Fully Polarimetric Multicircular Acquisitions at L-Band. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6170–6196. [Google Scholar]
  8. Palm, S.; Oriot, H.M.; Cantalloube, H.M. Radargrammetric DEM Extraction over Urban Area Using Circular SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4720–4725. [Google Scholar]
  9. Lin, Y.; Hong, W.; Tan, W.; Wang, Y.; Wu, Y. Interferometric Circular SAR Method for Three-Dimensional Imaging. IEEE Geosci. Remote Sens. Lett. 2011, 8, 1026–1030. [Google Scholar]
  10. Feng, D.; An, D.; Huang, X.; Li, Y. A Phase Calibration Method Based on Phase Gradient Autofocus for Airborne Holographic SAR Imaging. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1864–1868. [Google Scholar]
  11. Feng, D.; An, D.; Chen, L.; Huang, X. Holographic SAR Tomography 3-D Reconstruction Based on Iterative Adaptive Approach and Generalized Likelihood Ratio Test. IEEE Trans. Geosci. Remote Sens. 2021, 59, 305–315. [Google Scholar]
  12. Sotirelis, P.; Gilmore, S. 3D SAR Image Reconstruction of Ground Vehicles Using Sparse Multiple Flight Path Data. In Proceedings of the Algorithms for Synthetic Aperture Radar Imagery XXX, Orlando, FL, USA, 2–3 May 2023. [Google Scholar]
  13. Wu, K.; Shen, Q.; Cui, W. 3-D Tomographic Circular SAR Imaging of Targets Using Scattering Phase Correction. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5221914. [Google Scholar] [CrossRef]
  14. Li, Z.; Zhang, F.; Wan, Y.; Chen, L.; Wang, D.; Yang, L. Airborne Circular Flight Array SAR 3-D Imaging Algorithm of Buildings Based on Layered Phase Compensation in the Wavenumber Domain. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5213512. [Google Scholar]
  15. Li, J.; An, D.; Song, Y.; Xu, J.; Shen, L.; Zhou, Z. Estimation of Residual Motion Errors and Phase Ambiguity for Repeat-Pass In-CSAR without External DEMs. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5214915. [Google Scholar] [CrossRef]
  16. Zhang, H.; Lin, Y.; Teng, F.; Feng, S.; Hong, W. Holographic SAR Volumetric Imaging Strategy for 3-D Imaging with Single-Pass Circular InSAR Data. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5219816. [Google Scholar] [CrossRef]
  17. Palm, S.; Stilla, U. 3-D Point Cloud Generation from Airborne Single-Pass and Single-Channel Circular SAR Data. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8398–8417. [Google Scholar] [CrossRef]
  18. Dungan, K.E.; Potter, L.C. 3-D Imaging of Vehicles Using Wide Aperture Radar. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 187–200. [Google Scholar] [CrossRef]
  19. Dickey, F.M.; Doerry, A.W. Recovering Shape from Shadows in Synthetic Aperture Radar Imagery. In Proceedings of the Radar Sensor Technology XII, Orlando, FL, USA, 18–19 March 2008. [Google Scholar]
  20. Zhang, H.; Lin, Y.; Teng, F.; Hong, W. A Probabilistic Approach for Stereo 3D Point Cloud Reconstruction from Airborne Single-Channel Multi-Aspect SAR Image Sequences. Remote Sens. 2022, 14, 5715. [Google Scholar] [CrossRef]
  21. Luo, Y.; Deng, Y.; Xiang, W.; Zhang, H.; Yang, C.; Wang, L. Radargrammetric 3D Imaging through Composite Registration Method Using Multi-Aspect Synthetic Aperture Radar Imagery. Remote Sens. 2024, 16, 523. [Google Scholar] [CrossRef]
  22. Chen, L.; An, D.; Huang, X.; Zhou, Z. A 3D Reconstruction Strategy of Vehicle Outline Based on Single-Pass Single-Polarization CSAR Data. IEEE Trans. Image Process. 2017, 26, 5545–5554. [Google Scholar]
  23. Zhou, C.; Zhou, Y.; Suo, Z.; Li, Z. Voxel Area Sculpturing-Based 3D Scene Reconstruction from Single-Pass CSAR Data. Electron. Lett. 2020, 56, 566–567. [Google Scholar] [CrossRef]
  24. Zhang, H.; Lin, Y.; Teng, F.; Feng, S.; Yang, B.; Hong, W. Circular SAR Incoherent 3D Imaging with A Nerf-Inspired Method. Remote Sens. 2023, 15, 3322. [Google Scholar] [CrossRef]
  25. Feng, S.; Lin, Y.; Wang, Y.; Teng, F.; Hong, W. 3D Point Cloud Reconstruction Using Inversely Mapping and Voting from Single Pass CSAR Images. Remote Sens. 2021, 13, 3534. [Google Scholar] [CrossRef]
  26. Li, Y.; Chen, L.; An, D.; Huang, X.; Feng, D. A Novel Method for Extracting Geometric Parameter Information of Buildings Based on CSAR Images. Int. J. Remote Sens. 2022, 43, 4117–4133. [Google Scholar] [CrossRef]
  27. Zhao, J.; An, D.; Chen, L. A Novel Method for Building Contour Extraction Based on CSAR Images. Remote Sens. 2023, 15, 3463. [Google Scholar] [CrossRef]
  28. Li, J.; Hu, Q.; Ai, M. RIFT: Multi-modal image matching based on radiation-variation insensitive feature transform. IEEE Trans. Image Process. 2020, 29, 3296–3310. [Google Scholar] [CrossRef] [PubMed]
  29. Zhou, T.; Tao, D. GoDec: Randomized Low-Rank & Sparse Matrix Decomposition in Noisy Case. In Proceedings of the 28th ICML, Bellevue, WA, USA, 28 June–2 July 2011. [Google Scholar]
  30. Rivest, J.F.; Soille, P.; Beucher, S. Morphological gradients. J. Electron. Imaging 1993, 2, 326–336. [Google Scholar]
  31. Krinidis, S.; Chatzis, V. A robust fuzzy local information C-means clustering algorithm. IEEE Trans. Image Process. 2010, 19, 1328–1337. [Google Scholar] [CrossRef]
Figure 1. CSAR imaging geometry. The red and orange bullets represent point targets.
Figure 2. Pixel distribution of a target with height in the CSAR image. The red bullet is the elevated point target; the blue and green bullets are its projection points on the ground plane.
Figure 3. Inverse mapping diagram from the projection points in multi-aspect CSAR sub-aperture images to the 3D space. The blue and green bullets represent the inverse mapping points in the 3D space, and the blue and green lines are the inverse mapping paths in different aspects.
Figure 4. The flowchart of the 3D point cloud reconstruction method based on inverse mapping with target contour constraints.
Figure 5. The diagram of inverse mapping with target contour constraints. The black arrows represent the orientation of inverse mapping.
Figure 6. The optical image of the parking lot.
Figure 7. The 360° incoherently superposed CSAR images of the parking lot. (a) Incoherent superposition results of original CSAR sub-aperture images. (b) Incoherent superposition results of sparse components in CSAR sub-aperture images.
Figure 8. The binary images of vehicle targets in the parking lot. (a) The projected image in the imaging plane. (b) The contour image in the imaging plane extracted by the MG method. (c) The contour image in the imaging plane extracted by FCM-SNI.
Figure 9. The 3D point cloud reconstruction results of vehicle targets in the parking lot. (a) The VAS method in the case of 360° sub-aperture data. (b) The IMV method in the case of 360° sub-aperture data. (c) The proposed method in the case of 360° sub-aperture data. (d) The VAS method in the case of 180° sub-aperture data. (e) The IMV method in the case of 180° sub-aperture data. (f) The proposed method in the case of 180° sub-aperture data.
Figure 10. The optical images of vehicles D, F, and C2. (a) Vehicle D. (b) Vehicle F. (c) Vehicle C2.
Figure 11. The 3D point cloud reconstruction results of vehicle D. (a) The VAS method in the case of 360° sub-aperture data. (b) The IMV method in the case of 360° sub-aperture data. (c) The proposed method in the case of 360° sub-aperture data. (d) The VAS method in the case of 180° sub-aperture data. (e) The IMV method in the case of 180° sub-aperture data. (f) The proposed method in the case of 180° sub-aperture data.
Figure 12. The 3D point cloud reconstruction results of vehicle F. (a) The VAS method in the case of 360° sub-aperture data. (b) The IMV method in the case of 360° sub-aperture data. (c) The proposed method in the case of 360° sub-aperture data. (d) The VAS method in the case of 180° sub-aperture data. (e) The IMV method in the case of 180° sub-aperture data. (f) The proposed method in the case of 180° sub-aperture data.
Figure 13. The 3D point cloud reconstruction results of vehicle C2. (a) The VAS method in the case of 360° sub-aperture data. (b) The IMV method in the case of 360° sub-aperture data. (c) The proposed method in the case of 360° sub-aperture data. (d) The VAS method in the case of 180° sub-aperture data. (e) The IMV method in the case of 180° sub-aperture data. (f) The proposed method in the case of 180° sub-aperture data.
Figure 14. The quantitative evaluation bar charts of 3D point cloud reconstruction results of vehicle targets. (a) The reconstruction length of 3D point clouds in the case of 360° sub-aperture data. (b) The reconstruction width of 3D point clouds in the case of 360° sub-aperture data. (c) The reconstruction height of 3D point clouds in the case of 360° sub-aperture data. (d) The reconstruction length of 3D point clouds in the case of 180° sub-aperture data. (e) The reconstruction width of 3D point clouds in the case of 180° sub-aperture data. (f) The reconstruction height of 3D point clouds in the case of 180° sub-aperture data.
Figure 15. The optical image of the Ziyu Guandi community. The selected building, corridor, and hotel lobby are marked with a red ellipse, a green circle, and a yellow box, respectively.
Figure 16. The CSAR image and binary images of building targets in the Ziyu Guandi community. (a) Incoherent superposition results of sparse components of all sub-aperture CSAR images. (b) The projected image of building targets in the imaging plane. (c) The contour image of building targets in the imaging plane extracted by the MG method. (d) The contour image of building targets extracted by FCM-SNI.
Figure 17. The 3D point cloud reconstruction results of building targets in the Ziyu Guandi community based on all 360° sub-aperture data. (a) The 3D point clouds reconstructed by the VAS method. (b) The 3D point clouds reconstructed by the IMV method. (c) The 3D point clouds reconstructed by the proposed method. (d) The top view of 3D point clouds reconstructed by the VAS method. (e) The top view of 3D point clouds reconstructed by the IMV method. (f) The top view of 3D point clouds reconstructed by the proposed method.
Figure 18. The 3D point cloud reconstruction results of building targets in the Ziyu Guandi community based on 180° sub-aperture data. (a) The 3D point clouds reconstructed by the VAS method. (b) The 3D point clouds reconstructed by the IMV method. (c) The 3D point clouds reconstructed by the proposed method. (d) The top view of 3D point clouds reconstructed by the VAS method. (e) The top view of 3D point clouds reconstructed by the IMV method. (f) The top view of 3D point clouds reconstructed by the proposed method.
Figure 19. The optical images of the selected building, the donut-shaped corridor, and the hotel lobby. (a) Building. (b) Donut-shaped corridor. (c) Hotel lobby.
Figure 20. The 3D point cloud reconstruction results of the selected building. (a) The VAS method in the case of 360° sub-aperture data. (b) The IMV method in the case of 360° sub-aperture data. (c) The proposed method in the case of 360° sub-aperture data. (d) The VAS method in the case of 180° sub-aperture data. (e) The IMV method in the case of 180° sub-aperture data. (f) The proposed method in the case of 180° sub-aperture data.
Figure 21. The 3D point cloud reconstruction results of the selected donut-shaped corridor. (a) The VAS method in the case of 360° sub-aperture data. (b) The IMV method in the case of 360° sub-aperture data. (c) The proposed method in the case of 360° sub-aperture data. (d) The partial zoom-in view of the open gap area in (a). (e) The partial zoom-in view of the open gap area in (b). (f) The partial zoom-in view of the open gap area in (c). (g) The VAS method in the case of 180° sub-aperture data. (h) The IMV method in the case of 180° sub-aperture data. (i) The proposed method in the case of 180° sub-aperture data. (j) The partial zoom-in view of the open gap area in (g). (k) The partial zoom-in view of the open gap area in (h). (l) The partial zoom-in view of the open gap area in (i). The open gap areas of the reconstruction results are marked with black circles.
Figure 22. The 3D point cloud reconstruction results of the selected hotel lobby. (a) The VAS method in the case of 360° sub-aperture data. (b) The IMV method in the case of 360° sub-aperture data. (c) The proposed method in the case of 360° sub-aperture data. (d) The partial zoom-in view of the hotel gate area in (a). (e) The partial zoom-in view of the hotel gate area in (b). (f) The partial zoom-in view of the hotel gate area in (c). (g) The VAS method in the case of 180° sub-aperture data. (h) The IMV method in the case of 180° sub-aperture data. (i) The proposed method in the case of 180° sub-aperture data. (j) The partial zoom-in view of the hotel gate area in (g). (k) The partial zoom-in view of the hotel gate area in (h). (l) The partial zoom-in view of the hotel gate area in (i). The hotel gate areas of the reconstruction results are marked with black circles.
Figure 23. The quantitative evaluation bar charts of 3D point cloud reconstruction results of building targets. (a) The reconstruction length of 3D point clouds in the case of 360° sub-aperture data. (b) The reconstruction width of 3D point clouds in the case of 360° sub-aperture data. (c) The reconstruction height of 3D point clouds in the case of 360° sub-aperture data. (d) The reconstruction length of 3D point clouds in the case of 180° sub-aperture data. (e) The reconstruction width of 3D point clouds in the case of 180° sub-aperture data. (f) The reconstruction height of 3D point clouds in the case of 180° sub-aperture data.
Table 1. System parameters for the Gotcha data set collection.

Parameter          Value
Center frequency   9.6 GHz
Bandwidth          640 MHz
Flight altitude    6958 m
Flight radius      7294 m
Incident angle     43.65°
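As a quick, hypothetical sanity check on the geometry in Table 1 (not part of the paper's processing chain), the flight altitude and radius imply the slant range to the circle center and the angle between the line of sight and the ground plane:

```python
import math

# Values from Table 1 (metres); this check is an illustrative sketch only.
altitude = 6958.0  # flight altitude
radius = 7294.0    # flight radius

# Slant range from the radar to the center of the circular track's footprint.
slant_range = math.hypot(altitude, radius)

# Angle between the line of sight to the circle center and the ground plane.
angle_deg = math.degrees(math.atan2(altitude, radius))

print(f"slant range ≈ {slant_range:.1f} m, angle ≈ {angle_deg:.2f}°")
```

The computed angle comes out near the tabulated 43.65°, which suggests the "incident angle" in Table 1 is measured from the ground plane rather than from the vertical, though the table itself does not state the convention.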
Table 2. Quantitative evaluation of the 3D point clouds of vehicle targets in the case of all 360° sub-aperture data.

Target       Method                                     Length (m)   Width (m)   Height (m)
Vehicle D    Actual value                               4.79         1.76        1.41
             Restored value with the VAS method [23]    4.40         1.60        2.20
             Restored value with the IMV method [25]    4.60         1.80        2.00
             Restored value with the proposed method    4.80         1.80        1.60
Vehicle F    Actual value                               4.50         1.77        1.67
             Restored value with the VAS method         4.20         1.60        2.20
             Restored value with the IMV method         4.40         1.80        2.20
             Restored value with the proposed method    4.60         1.80        1.80
Vehicle C2   Actual value                               4.31         1.50        2.02
             Restored value with the VAS method         3.40         2.00        2.20
             Restored value with the IMV method         3.40         2.00        2.00
             Restored value with the proposed method    3.60         1.80        2.00
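As an illustrative way to read Table 2 (a hypothetical helper, not part of the paper's evaluation), the mean absolute error over the three dimensions of each vehicle can be computed from the tabulated values, here for the VAS baseline and the proposed method in the 360° case:

```python
# Values transcribed from Table 2 (360° case): (length, width, height) in metres.
actual   = {"D": (4.79, 1.76, 1.41), "F": (4.50, 1.77, 1.67), "C2": (4.31, 1.50, 2.02)}
vas      = {"D": (4.40, 1.60, 2.20), "F": (4.20, 1.60, 2.20), "C2": (3.40, 2.00, 2.20)}
proposed = {"D": (4.80, 1.80, 1.60), "F": (4.60, 1.80, 1.80), "C2": (3.60, 1.80, 2.00)}

def mean_abs_error(est, ref):
    """Mean absolute error over the three dimensions of one vehicle (hypothetical helper)."""
    return sum(abs(e - r) for e, r in zip(est, ref)) / len(ref)

for v in actual:
    print(v,
          round(mean_abs_error(vas[v], actual[v]), 3),
          round(mean_abs_error(proposed[v], actual[v]), 3))
```

For every vehicle, the proposed method's mean dimension error is lower than the VAS baseline's, consistent with the qualitative comparison in Figures 11–13.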
Table 3. Quantitative evaluation of the 3D point clouds of vehicle targets in the case of partially missing sub-aperture data (180°).

Target       Method                                     Length (m)   Width (m)   Height (m)
Vehicle D    Actual value                               4.79         1.76        1.41
             Restored value with the VAS method [23]    4.20         1.60        2.20
             Restored value with the IMV method [25]    4.40         1.60        2.20
             Restored value with the proposed method    4.80         1.80        1.80
Vehicle F    Actual value                               4.50         1.77        1.67
             Restored value with the VAS method         4.20         1.60        2.20
             Restored value with the IMV method         4.60         1.60        2.20
             Restored value with the proposed method    4.80         1.80        1.80
Vehicle C2   Actual value                               4.31         1.50        2.02
             Restored value with the VAS method         3.40         2.20        2.20
             Restored value with the IMV method         3.40         2.20        2.00
             Restored value with the proposed method    3.60         1.80        2.00
Table 4. Computational requirements analysis of the 3D reconstruction of vehicle targets.

                                 The VAS Method   The IMV Method   The Proposed Method
All 360° sub-aperture data       0.45 s           0.06 s           0.14 s
180° partial sub-aperture data   0.25 s           0.04 s           0.09 s
Table 5. System parameters for the single-pass CSAR measured data collection.

Parameter         Value
Signal waveband   X
Flight altitude   2603 m
Flight radius     7892 m
Incident angle    18.26°
Table 6. Quantitative evaluation of the 3D point clouds of building targets in the case of all 360° sub-aperture data.

Target        Method                                     Length (m)   Width (m)   Height (m)
Building      Actual value                               63.20        18.20       77.40
              Restored value with the VAS method [23]    69.39        20.93       79.17
              Restored value with the IMV method [25]    71.11        21.09       79.54
              Restored value with the proposed method    61.05        17.49       77.50
Corridor      Actual value                               29.60        24.80       11.60
              Restored value with the VAS method         38.72        30.08       12.74
              Restored value with the IMV method         39.70        30.23       12.99
              Restored value with the proposed method    32.46        25.98       12.46
Hotel Lobby   Actual value                               51.40        38.60       12.40
              Restored value with the VAS method         61.36        41.77       14.06
              Restored value with the IMV method         65.56        45.26       14.05
              Restored value with the proposed method    53.48        37.21       13.12
Table 7. Quantitative evaluation of the 3D point clouds of building targets in the case of partially missing sub-aperture data (180°).

Target        Method                                     Length (m)   Width (m)   Height (m)
Building      Actual value                               63.20        18.20       77.40
              Restored value with the VAS method [23]    75.86        22.90       79.13
              Restored value with the IMV method [25]    80.85        24.64       79.47
              Restored value with the proposed method    67.14        19.76       76.84
Corridor      Actual value                               29.60        24.80       11.60
              Restored value with the VAS method         39.42        31.24       12.76
              Restored value with the IMV method         39.60        30.92       13.03
              Restored value with the proposed method    35.48        27.45       12.39
Hotel Lobby   Actual value                               51.40        38.60       12.40
              Restored value with the VAS method         64.68        45.41       13.91
              Restored value with the IMV method         72.03        45.82       13.88
              Restored value with the proposed method    59.57        39.61       12.86
Table 8. Computational requirements analysis of the 3D reconstruction of building targets.

                                 The VAS Method   The IMV Method   The Proposed Method
All 360° sub-aperture data       3862 s           16 s             37 s
180° partial sub-aperture data   1541 s           11 s             24 s
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhang, Q.; Sun, J.; Teng, F.; Lin, Y.; Hong, W. A Novel 3D Point Cloud Reconstruction Method for Single-Pass Circular SAR Based on Inverse Mapping with Target Contour Constraints. Remote Sens. 2025, 17, 1275. https://doi.org/10.3390/rs17071275
