Article

Three-Dimensional Imaging of Terahertz Circular SAR with Sparse Linear Array

School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
*
Author to whom correspondence should be addressed.
Sensors 2018, 18(8), 2477; https://doi.org/10.3390/s18082477
Submission received: 8 June 2018 / Revised: 9 July 2018 / Accepted: 25 July 2018 / Published: 31 July 2018
(This article belongs to the Section Remote Sensors)

Abstract
Due to the non-contact detection ability of radar and the harmlessness of terahertz waves to the human body, three-dimensional (3D) imaging using terahertz synthetic aperture radar (SAR) is an efficient method of security detection in public areas. To achieve high-resolution and all aspect imaging, circular trajectory movement of radar and linear sensor array along the height direction were used in this study. However, the short wavelength of terahertz waves makes it practically impossible for the hardware to satisfy the half-wavelength spacing condition to avoid grating lobes. To solve this problem, a sparse linear array model based on the equivalent phase center principle was established. With the designed imaging geometry and corresponding echo signal model, a 3D imaging algorithm was derived. Firstly, the phase-preserving algorithm was adopted to obtain the 2D image of the ground plane for each sensor. Secondly, the sparse recovery method was applied to accomplish the scattering coefficient reconstruction along the height direction. After reconstruction of all the range-azimuth cells was accomplished, the final 3D image was obtained. Numerical simulations and experiments using terahertz radar were performed. The imaging results verify the effectiveness of the 3D imaging algorithm for the proposed model and validate the feasibility of terahertz radar applied in security detection.

1. Introduction

Terrorist attacks have increased substantially in recent years on a global scale and pose a great threat to public safety. Security detection in public areas is an important approach to guarding against attacks in advance [1,2]. Compared with detection methods such as metal detectors and manual checking, three-dimensional (3D) imaging techniques provide more precise information, making automatic detection and identification possible. Recently, 3D synthetic aperture radar (SAR) imaging has attracted increasing research interest because of its ability to obtain the real spatial position and scattering properties of the targets of interest [3,4,5,6,7]. Meanwhile, terahertz (THz) waves can penetrate clothing with negligible harm to the human body, making them suitable for concealed weapon detection. In addition, terahertz waves have a short wavelength and support wide bandwidths, which enables high-resolution imaging [8,9,10]. Therefore, 3D imaging using terahertz radar is a promising method for security detection.
Among the various 3D imaging configurations, circular SAR (CSAR) is popular for its 3D imaging ability and full 360° observation within a single pass [11,12,13]. CSAR 3D imaging requires a pitch angle to obtain resolution in the height direction, which may limit its application in some cases. Moreover, because of the pitch angle, the range and height resolutions are coupled, which makes it difficult to improve both simultaneously. To address this issue, much research has been conducted on multipass SAR, linear array SAR, and related configurations [14,15,16,17,18]. In [16], holographic SAR (HoloSAR) was applied to form 3D images by combining height reconstruction with the 2D imaging of multipass CSAR. Compared with single-pass CSAR 3D imaging, HoloSAR can reduce sidelobes and provide more height detail. However, HoloSAR may suffer from nonuniform height sampling, which degrades image quality. A sensor array is another choice for 3D SAR imaging, where the array is distributed along the required direction to provide a sampling aperture. However, satisfying the half-wavelength spacing condition to avoid grating lobes requires a large number of sensors, which leads to a complicated and costly system, especially for terahertz radar. Therefore, the number of sensors must be reduced to make the radar system realizable.
A sparse array can be considered for 3D SAR imaging; however, the traditional Fourier-based imaging method may be invalid in this case [19,20]. Fortunately, the targets of interest in security detection are generally spatially sparse, so a sparse recovery method can be employed to form the image. Among the various sparse recovery algorithms, sparse Bayesian learning (SBL) is superior to deterministic algorithms because it incorporates prior knowledge [21,22]. Without knowing the signal sparsity, SBL estimates the distribution parameters and the posterior iteratively, which yields more stable solutions. In [23], SBL was applied to through-wall imaging with missing data measurements, and in [24], a local structural SBL was adopted to form ISAR images of maneuvering targets from compressively sampled data; the desired results were obtained in both studies. Therefore, SBL was used for height reconstruction in this study owing to its favorable performance.
In this work, the imaging geometry of CSAR with a sparse linear sensor array was first established, and the echo signal model in the terahertz band was developed. Then, a 2D ground-plane imaging procedure using a phase-preserving algorithm was derived for each sensor. After that, the phase features of the 2D images were analyzed, and reconstruction along the height direction was transformed into a sparse recovery problem. After the reconstructions of all the range-azimuth cells were completed, the 3D image was obtained. The rest of this paper is organized as follows. In Section 2, the 3D imaging geometry with a sparse linear sensor array is provided and the imaging procedure is described. In Section 3, simulations and experiments are presented and analyzed to demonstrate the effectiveness of the 3D imaging method for the proposed imaging model. Conclusions are given in the last section.

2. 3D Imaging Methodology

2.1. Imaging Geometry

There are several different transmitting-receiving models for a sensor array, such as single-transmitter multiple-receiver (STMR) and multiple-transmitter multiple-receiver (MTMR). Based on the equivalent phase center principle, these can be treated as equivalent to a model in which each sensor transmits a signal and receives the echo independently. The specific transmitting-receiving pattern design is not discussed in this paper; the equivalent distribution of the sensor array is presented directly. The imaging geometry of CSAR with a sparse linear sensor array is shown in Figure 1, where the radar system consists of a vertical sparse linear array operating in a terahertz band. The sensor array is parallel to the Z-axis and moves along a circular trajectory with radius $R_0$. Assuming there are $M$ equivalent sensors with Z-axis coordinates $\bar{v}_m$, $m = 1, \ldots, M$, when the radar system moves to azimuth angle $\theta$, the instantaneous coordinate of equivalent sensor $m$ is $(R_0\cos\theta, R_0\sin\theta, \bar{v}_m)$. It is assumed that the target scattering is isotropic and that the sensors illuminate the region of interest during the entire sampling time.
In general, a target can be approximated as a sum of several independent and non-directional scattering centers in SAR imaging [8]. Therefore, the echo of the targets can be expressed as the superposition of the echoes of all scattering centers. Considering a general scattering center P with coordinate $(x_P, y_P, z_P)$, plotted in Figure 1, the instantaneous distance from equivalent sensor $m$ to the scattering center is given as
$$ r_m(\theta) = \sqrt{\left(R_0\cos\theta - x_P\right)^2 + \left(R_0\sin\theta - y_P\right)^2 + \left(\bar{v}_m - z_P\right)^2} \tag{1} $$
Under the far-field assumption, the fourth- and higher-order terms of the Taylor series expansion can be neglected, and the distance in Equation (1) can be approximated as
$$ r_m(\theta) \approx 2\left( R_{cm} - \frac{\bar{v}_m z_P}{R_{cm}} - \frac{R_0}{R_{cm}}\left( x_P\cos\theta + y_P\sin\theta \right) \right) \tag{2} $$
where $R_{cm} = \sqrt{R_0^2 + \bar{v}_m^2}$ is the distance from equivalent sensor $m$ to the origin $O$.
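To make the geometry concrete, the following minimal Python sketch (our own illustrative script, not part of the original paper) evaluates the equivalent-sensor positions, the quantities $R_{cm}$ and $\varphi_m$, and the exact and approximated distances of Equations (1) and (2). The array extent and scatterer coordinates are assumed values chosen only for illustration.

```python
import numpy as np

# Illustrative geometry values (Table 1 gives R0 = 5 m and M = 9; the array extent is assumed).
R0 = 5.0                                  # radius of the circular trajectory (m)
M = 9                                     # number of equivalent sensors
v_bar = np.linspace(-0.1, 0.1, M)         # Z-axis coordinates of the equivalent phase centers (m)

R_cm = np.sqrt(R0**2 + v_bar**2)          # distance from each equivalent sensor to the origin O
phi_m = np.arctan2(v_bar, R0)             # pitch angles: cos(phi_m) = R0/R_cm, sin(phi_m) = v_bar/R_cm

def exact_distance(theta, xP, yP, zP):
    """One-way sensor-to-scatterer distance of Equation (1), for all M sensors at azimuth theta."""
    return np.sqrt((R0 * np.cos(theta) - xP)**2 +
                   (R0 * np.sin(theta) - yP)**2 +
                   (v_bar - zP)**2)

def approx_distance(theta, xP, yP, zP):
    """Far-field approximation of Equation (2) (round-trip form, hence the leading factor of 2)."""
    return 2.0 * (R_cm - v_bar * zP / R_cm
                  - (R0 / R_cm) * (xP * np.cos(theta) + yP * np.sin(theta)))

theta = np.deg2rad(30.0)                                # example azimuth angle
print(2.0 * exact_distance(theta, 0.05, 0.02, 0.03))    # round-trip exact distance
print(approx_distance(theta, 0.05, 0.02, 0.03))         # far-field approximation
```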

2.2. 2D Phase-Preserving Imaging Method

For terahertz radar, a wide bandwidth can be realized, and a pulsed linear frequency modulated (LFM) signal with the dechirp-on-receive technique is appropriate for high-resolution imaging [6,8]. The transmitted radar signal is
$$ s(\tau) = \mathrm{rect}\!\left(\frac{\tau}{T}\right)\exp\!\left( j2\pi f_0\tau + j\pi\gamma\tau^2 \right) \tag{3} $$
where $\tau$ is the fast time, $T$ is the pulse duration, $f_0$ is the center frequency, and $\gamma$ is the modulation rate. The rectangular window is defined as $\mathrm{rect}(\tau/T) = 1$ for $-T/2 < \tau < T/2$ and $0$ otherwise.
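As a small worked example (a sketch under the Table 1 parameters, with a chirp rate $\gamma = B/T$ inferred from the stated bandwidth and pulse duration), Equation (3) can be evaluated directly at arbitrary fast-time instants:

```python
import numpy as np

f0 = 340e9            # center frequency (Hz), Table 1
B = 28e9              # bandwidth (Hz), Table 1
T = 1e-3              # pulse duration (s), Table 1
gamma = B / T         # modulation (chirp) rate, inferred as bandwidth / pulse duration

def lfm_pulse(tau):
    """Transmitted LFM signal of Equation (3), evaluated at fast-time samples tau."""
    window = (np.abs(tau) < T / 2).astype(float)               # rect(tau / T)
    return window * np.exp(1j * (2 * np.pi * f0 * tau + np.pi * gamma * tau**2))

# Evaluate the pulse at a few fast-time instants (symbolic check only; a 340 GHz carrier
# cannot be sampled directly in practice, which is why dechirp-on-receive is used).
tau = np.array([-T / 4, 0.0, T / 4])
print(lfm_pulse(tau))
```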
For scattering center P, the echo received by sensor m is expressed as
$$ s_b(\tau,\theta;m) = \rho_P\,\mathrm{rect}\!\left(\frac{\tau-\tau_d}{T}\right)\exp\!\left( j2\pi f_0\left(\tau-\tau_d\right) + j\pi\gamma\left(\tau-\tau_d\right)^2 \right) \tag{4} $$
where $\rho_P$ is the scattering coefficient of P, $\tau_d = r_m(\theta)/c$ is the time delay, and $c$ is the speed of light.
For a broadband signal, the matched filtering (MF) method requires a high sampling rate to satisfy the Nyquist sampling criterion, which increases the data size remarkably. The dechirp technique overcomes this difficulty and is used extensively in terahertz radar signal processing. The dechirped version of Equation (4) is
$$ s_{if}(\tau,\theta;m) = \rho_P\,\mathrm{rect}\!\left(\frac{\tau-\tau_d}{T}\right)\cdot\exp\!\left( -j\frac{2\pi}{c}\left( f_0 + \gamma\tau \right) r_m(\theta) \right)\cdot\exp\!\left( j\pi\gamma\left( \frac{r_m(\theta)}{c} \right)^2 \right) \tag{5} $$
In Equation (5), the second exponential term is the residual video phase (RVP), and it can be removed before the next steps [25].
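The following sketch writes out the dechirped echo of Equation (5) for a single scatterer with a known round-trip distance $r_m(\theta)$ and then removes the RVP term by multiplying with its conjugate. This is an illustration of the phase structure only; in practice, RVP removal is performed with a deskew operation over the whole data set [25], and the rect window is omitted here for brevity.

```python
import numpy as np

c = 3e8
f0, B, T = 340e9, 28e9, 1e-3
gamma = B / T
tau = np.linspace(-T / 2, T / 2, 1024)       # fast-time samples within the pulse

def dechirped_echo(r_m, rho_P=1.0):
    """Dechirped signal of Equation (5): range phase term times the RVP term."""
    range_phase = np.exp(-1j * 2 * np.pi / c * (f0 + gamma * tau) * r_m)
    rvp = np.exp(1j * np.pi * gamma * (r_m / c)**2)
    return rho_P * range_phase * rvp

def remove_rvp(s_if, r_m):
    """Cancel the second exponential term of Equation (5) for a known delay."""
    return s_if * np.exp(-1j * np.pi * gamma * (r_m / c)**2)

r_m = 10.2                                   # example round-trip distance (m)
s_clean = remove_rvp(dechirped_echo(r_m), r_m)
```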
Next, the signal is transformed into the wavenumber domain. Let $K = 2\pi(f_0 + \gamma\tau)/c$, $K_0 = 2\pi f_0/c$, and $\Delta K = 2\pi\gamma T/c$ denote the signal wavenumber, center wavenumber, and wavenumber bandwidth, respectively. Substituting (2) into (5), we get
$$ s_{if}(K,\theta;\varphi_m) = \rho_P\,\mathrm{rect}\!\left(\frac{K-K_0}{\Delta K}\right)\cdot\exp\!\left( -j2K\left[ R_{cm} - z_P\sin\varphi_m - \cos\varphi_m\left( x_P\cos\theta + y_P\sin\theta \right) \right] \right) \tag{6} $$
where $\varphi_m$ is the pitch angle between the equivalent sensor and the $XOY$ plane and satisfies $\cos\varphi_m = R_0/R_{cm}$, $\sin\varphi_m = \bar{v}_m/R_{cm}$.
In Equation (6), for a given equivalent sensor $m$, the first phase term can be compensated with the factor $H_c = \exp(j2KR_{cm})$. The wavenumber $K$ can then be decomposed as
$$ K_x = 2K\cos\varphi_m\cos\theta, \qquad K_y = 2K\cos\varphi_m\sin\theta \tag{7} $$
By substituting (7) into (6), the signal can be represented as
$$ s_{if}(K_x,K_y;\varphi_m) = \rho_P\,\mathrm{rect}\!\left(\frac{K-K_0}{\Delta K}\right)\cdot\exp\!\left( j2Kz_P\sin\varphi_m \right)\cdot\exp\!\left( jK_x x_P + jK_y y_P \right) \tag{8} $$
As mentioned in Section 2.1, the echo of the entire region of interest is the superposition of the echoes of all scattering centers. Thus, the radar echo received by the $m$th sensor is expressed as
$$ s_a(K_x,K_y;\varphi_m) = \iiint \rho(x,y,z)\cdot\mathrm{rect}\!\left(\frac{K-K_0}{\Delta K}\right)\cdot\exp\!\left( j2Kz\sin\varphi_m \right)\cdot\exp\!\left( jK_x x + jK_y y \right) dx\,dy\,dz \tag{9} $$
where $\rho(x,y,z)$ is the scattering coefficient of a scattering center located at $(x,y,z)$.
The triple integral in Equation (9) can be decomposed into two parts: a double integral over $x$ and $y$, and a single integral over $z$. Since the window function in (9) has no direct impact on the phase analysis, it is omitted below for convenience. The signal can then be represented as
$$ s_a(K_x,K_y;\varphi_m) = \int \left[ \iint \rho(x,y,z)\cdot\exp\!\left( jK_x x + jK_y y \right) dx\,dy \right] \exp\!\left( j2Kz\sin\varphi_m \right) dz \tag{10} $$
Note that the double integral in Equation (10) is a 2D Fourier transform; thus, the 2D inverse Fourier transform can be applied to both sides of (10). After the 2D inverse Fourier transform, the scattering distribution for each $(x,y)$ cell is obtained:
$$ s_a(x,y;\varphi_m) = \int \rho(x,y,z)\cdot\exp\!\left( j2Kz\sin\varphi_m \right) dz \tag{11} $$
Equation (11) indicates that for a given cell $(x,y)$, the 2D imaging result is the coherent stack of the scattering coefficients of all scattering centers at different heights $z$. Therefore, to obtain the 3D image, the height locations of the scattering centers must be reconstructed.
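A per-sensor implementation sketch of this 2D phase-preserving step is given below: the compensation factor $H_c$ is applied, the polar $(K,\theta)$ samples are mapped to $(K_x,K_y)$ via Equation (7), the data are resampled onto a Cartesian wavenumber grid, and a 2D inverse FFT yields the complex ground-plane image. The gridding by linear interpolation and the grid sizes are our own assumptions; the paper does not specify the interpolation scheme.

```python
import numpy as np
from scipy.interpolate import griddata

def image_sensor_m(data, K, theta, R_cm, phi_m, n_grid=256):
    """data: dechirped, RVP-free samples of sensor m on a (len(K), len(theta)) polar grid."""
    Kg, Tg = np.meshgrid(K, theta, indexing="ij")
    data = data * np.exp(1j * 2 * Kg * R_cm)              # compensation factor H_c
    Kx = 2 * Kg * np.cos(phi_m) * np.cos(Tg)              # Equation (7)
    Ky = 2 * Kg * np.cos(phi_m) * np.sin(Tg)

    k_max = 2 * K.max() * np.cos(phi_m)
    kx = np.linspace(-k_max, k_max, n_grid)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")

    # Interpolate real and imaginary parts separately onto the Cartesian wavenumber grid.
    pts = (Kx.ravel(), Ky.ravel())
    cart = (griddata(pts, data.real.ravel(), (KX, KY), method="linear", fill_value=0.0)
            + 1j * griddata(pts, data.imag.ravel(), (KX, KY), method="linear", fill_value=0.0))

    # 2D inverse Fourier transform gives the complex (phase-preserving) ground-plane image.
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(cart)))
```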

2.3. Scattering Coefficient Reconstruction along the Height Direction

To avoid grating lobes, a dense linear sensor array must satisfy the half-wavelength spacing condition. However, the wavelength of a terahertz wave is generally shorter than 1 mm, so realizing such spacing in practical hardware is almost impossible. Therefore, a sparse sensor array along the Z-axis is adopted, which, however, makes Fourier-based signal processing invalid. Considering that there are only a limited number of dominant scattering centers in the height direction, a sparse reconstruction method can recover the scattering coefficient distribution. The data corresponding to the same pixel cell $(g,l)$ of the multiple 2D complex images are extracted and rearranged into a vector as the measurement data, as illustrated in Figure 2.
In discrete signal processing, Equation (11) can be rewritten as
$$ s(x_g,y_l,\varphi_m) = \sum_{q=1}^{Q} \rho_q \exp\!\left( j2Kz_q\sin\varphi_m \right) \tag{12} $$
where $Q$ is the number of grid points along the height direction, whose values are $\mathbf{z} = [z_1, z_2, \ldots, z_Q]^T$, with $(\cdot)^T$ denoting the transpose. Equation (12) can be written in vector form as $s_{gl}(\varphi_m) = \mathbf{w}_m\boldsymbol{\rho}_{gl}$, where $\mathbf{w}_m = \exp\!\left( j2K\sin\varphi_m\,\mathbf{z} \right)^T \in \mathbb{C}^{1\times Q}$ and $\boldsymbol{\rho}_{gl} = [\rho_1, \rho_2, \ldots, \rho_Q]^T$ collects the scattering coefficients to be recovered at the grid points $\mathbf{z}$. Therefore, stacking all $\varphi_m$, we have $\mathbf{s}_{gl} = \mathbf{W}\boldsymbol{\rho}_{gl}$, where $\mathbf{s}_{gl} = [s_{gl}(\varphi_1), s_{gl}(\varphi_2), \ldots, s_{gl}(\varphi_M)]^T \in \mathbb{C}^{M\times 1}$ and $\mathbf{W} = [\mathbf{w}_1^T, \mathbf{w}_2^T, \ldots, \mathbf{w}_M^T]^T \in \mathbb{C}^{M\times Q}$. Since the height information is reconstructed pixel by pixel, the subscript $(g,l)$ is omitted for brevity in the following. With additive white Gaussian noise $\mathbf{N} \in \mathbb{C}^{M\times 1}$ of zero mean and variance $\beta$, we obtain
$$ \mathbf{s} = \mathbf{W}\boldsymbol{\rho} + \mathbf{N} \tag{13} $$
Equation (13) is illustrated graphically in Figure 3.
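The per-pixel model of Equations (12) and (13) can be written down directly. In the sketch below, the steering matrix $\mathbf{W}$ is built at the center wavenumber $K_0$ (our simplifying assumption; the paper's $K$ spans the transmitted band), and a two-scatterer height profile is used to simulate the measurement vector $\mathbf{s}$ of one range-azimuth cell.

```python
import numpy as np

c = 3e8
K0 = 2 * np.pi * 340e9 / c                              # center wavenumber
M, Q = 9, 64                                            # sensors and height grid points (Q > M)
phi_m = np.arctan2(np.linspace(-0.1, 0.1, M), 5.0)      # pitch angles of the sparse array (assumed spacing)
z_grid = np.linspace(-0.1, 0.1, Q)                      # height grid z_1 ... z_Q (m)

# Steering matrix W of Equation (12): W[m, q] = exp(j * 2 * K0 * z_q * sin(phi_m)).
W = np.exp(1j * 2 * K0 * np.outer(np.sin(phi_m), z_grid))

# Sparse height profile (two dominant scattering centers) and noisy measurements, Equation (13).
rho = np.zeros(Q, dtype=complex)
rho[[20, 45]] = [1.0, 0.7]
noise = 0.01 * (np.random.randn(M) + 1j * np.random.randn(M))
s = W @ rho + noise
```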
Since $Q > M$, Equation (13) is underdetermined. Generally, the targets of interest in security detection are spatially sparse, which means that only a few entries of $\boldsymbol{\rho}$ in Figure 3 are significant. Thus, solving Equation (13) can be cast as the optimization problem
$$ \min_{\boldsymbol{\rho}} \left\| \boldsymbol{\rho} \right\|_0, \quad \mathrm{s.t.} \quad \left\| \mathbf{s} - \mathbf{W}\boldsymbol{\rho} \right\|_2 \le \delta \tag{14} $$
where δ is a proper bound.
Sparse recovery algorithms can be adopted to solve the problem in Equation (14). In SAR imaging, the number of dominant scattering centers is generally unknown; thus, algorithms that require the signal sparsity as prior knowledge are impracticable. As analyzed in [22,23,24], sparse Bayesian learning (SBL) is an effective approach for sparse recovery. Compared with $\ell_1$-norm minimization algorithms, the SBL-based algorithm iterates to avoid convergence to a local minimum and produces a full posterior distribution as the solution. Based on the Gaussian scale mixture (GSM), the prior distribution of the scattering coefficient $\boldsymbol{\rho}$ is represented as [23]:
$$ p(\boldsymbol{\rho}\,|\,\boldsymbol{\upsilon}) = \prod_{q=1}^{Q} \frac{1}{\sqrt{2\pi\upsilon_q}} \exp\!\left( -\frac{\rho_q^2}{2\upsilon_q} \right) \tag{15} $$
where $\boldsymbol{\upsilon} = [\upsilon_1, \ldots, \upsilon_q, \ldots, \upsilon_Q]$ is the hyperparameter vector that governs $\boldsymbol{\rho}$.
Once the scattering coefficients are given, the probability of the measurement vector is determined by the noise, which is given by
$$ p(\mathbf{s}\,|\,\boldsymbol{\rho};\beta) = \left( \frac{1}{\sqrt{2\pi\beta}} \right)^{M} \exp\!\left( -\frac{1}{2\beta}\left\| \mathbf{s} - \mathbf{W}\boldsymbol{\rho} \right\|^2 \right) \tag{16} $$
With the prior probability of $\boldsymbol{\rho}$ and the conditional probability of $\mathbf{s}$, the marginal probability of $\mathbf{s}$ is
$$ p(\mathbf{s}\,|\,\boldsymbol{\upsilon};\beta) = \int p(\mathbf{s}\,|\,\boldsymbol{\rho};\beta)\, p(\boldsymbol{\rho}\,|\,\boldsymbol{\upsilon})\, d\boldsymbol{\rho} = \frac{1}{(2\pi)^{M/2}\left| \boldsymbol{\Sigma}_I \right|^{1/2}} \exp\!\left( -\frac{1}{2}\mathbf{s}^T\boldsymbol{\Sigma}_I^{-1}\mathbf{s} \right) \tag{17} $$
where $\boldsymbol{\Sigma}_I = \beta\mathbf{I} + \mathbf{W}\boldsymbol{\Lambda}\mathbf{W}^T$, $\mathbf{I}$ is the identity matrix, and $\boldsymbol{\Lambda} = \mathrm{diag}(\boldsymbol{\upsilon})$.
Based on Bayes' theorem, the posterior probability of $\boldsymbol{\rho}$ is
$$ p(\boldsymbol{\rho}\,|\,\mathbf{s},\boldsymbol{\upsilon}) = \frac{p(\mathbf{s}\,|\,\boldsymbol{\rho};\beta)\, p(\boldsymbol{\rho}\,|\,\boldsymbol{\upsilon})}{p(\mathbf{s}\,|\,\boldsymbol{\upsilon};\beta)} = \frac{1}{(2\pi)^{Q/2}\left| \boldsymbol{\Sigma}_\rho \right|^{1/2}} \exp\!\left( -\frac{1}{2}\left( \boldsymbol{\rho} - \mathbf{d} \right)^T\boldsymbol{\Sigma}_\rho^{-1}\left( \boldsymbol{\rho} - \mathbf{d} \right) \right) \tag{18} $$
where
$$ \mathbf{d} = \beta^{-1}\boldsymbol{\Sigma}_\rho\mathbf{W}^T\mathbf{s}, \qquad \boldsymbol{\Sigma}_\rho = \left( \beta^{-1}\mathbf{W}^T\mathbf{W} + \boldsymbol{\Lambda}^{-1} \right)^{-1} \tag{19} $$
From (18), the posterior is a Gaussian distribution with mean $\mathbf{d}$ and covariance $\boldsymbol{\Sigma}_\rho$, given in (19). For a Gaussian distribution, the mean can be taken as the estimated scattering coefficient, i.e., $\hat{\boldsymbol{\rho}} = \mathbf{d}$. The estimate therefore depends on the noise variance $\beta$ and the hyperparameter $\boldsymbol{\upsilon}$. In traditional SBL, $\beta$ and $\boldsymbol{\upsilon}$ are estimated with expectation-maximization (EM) steps [22]. However, the noise variance $\beta$ is a nuisance parameter and its estimate may be highly inaccurate [22]. According to [23], $\beta$ can be integrated out by introducing a prior distribution, namely a gamma distribution:
$$ p(\beta\,|\,a,b) = \mathrm{Gam}(\beta\,|\,a,b) = \frac{1}{\Gamma(a)}\, b^{a}\beta^{a-1}\exp(-b\beta) \tag{20} $$
where $a$ and $b$ are deterministic small values, and $\Gamma(a)$ is the gamma function, $\Gamma(a) = \int_0^{\infty} u^{a-1}\exp(-u)\,du$. With this prior on $\beta$, the posterior of $\boldsymbol{\rho}$ becomes
$$ p(\boldsymbol{\rho}\,|\,\mathbf{s},\boldsymbol{\upsilon}) = \frac{\Gamma(a+Q)\left[ 1 + \left( \boldsymbol{\rho} - \bar{\mathbf{d}} \right)^H\bar{\boldsymbol{\Sigma}}_\rho^{-1}\left( \boldsymbol{\rho} - \bar{\mathbf{d}} \right)\big/ b \right]^{-(a+Q)}}{\Gamma(a)\,(\pi b)^{Q}\left| \bar{\boldsymbol{\Sigma}}_\rho \right|} \tag{21} $$
The posterior distribution given by (21) is a multivariate complex Student’s t-distribution, whose mean and covariance are given by
$$ \bar{\mathbf{d}} = \bar{\boldsymbol{\Sigma}}_\rho\mathbf{W}^T\mathbf{s}, \qquad \bar{\boldsymbol{\Sigma}}_\rho = \left( \mathbf{W}^T\mathbf{W} + \boldsymbol{\Lambda}^{-1} \right)^{-1} \tag{22} $$
As with the Gaussian distribution, the mean of the Student's t-distribution can be taken as the estimated scattering coefficient, i.e., $\hat{\boldsymbol{\rho}} = \bar{\mathbf{d}}$. The estimate is now a function of the hyperparameter only, rather than of the noise variance $\beta$. Therefore, to obtain the scattering coefficient estimate, the hyperparameter $\boldsymbol{\upsilon}$ must be estimated. Using the EM approach of SBL, $\boldsymbol{\upsilon}$ is obtained by maximizing the marginal probability of $\mathbf{s}$, i.e., $\arg\max_{\boldsymbol{\upsilon}} p(\mathbf{s}\,|\,\boldsymbol{\upsilon})$. The marginal probability is
$$ p(\mathbf{s}\,|\,\boldsymbol{\upsilon}) = \iint p(\mathbf{s}\,|\,\boldsymbol{\rho};\beta)\cdot p(\boldsymbol{\rho}\,|\,\boldsymbol{\upsilon})\cdot p(\beta\,|\,a,b)\, d\boldsymbol{\rho}\, d\beta = \frac{b^{a}\left( b + \mathbf{s}^H\mathbf{G}^{-1}\mathbf{s} \right)^{-(M+a)}\Gamma(M+a)}{\Gamma(a)\,\pi^{M}\left| \mathbf{G} \right|} \tag{23} $$
where $\mathbf{G} = \mathbf{I} + \mathbf{W}\boldsymbol{\Lambda}\mathbf{W}^H$.
Taking the logarithm of $p(\mathbf{s}\,|\,\boldsymbol{\upsilon})$ and neglecting the terms that are independent of the hyperparameter $\boldsymbol{\upsilon}$, the cost function is
$$ L(\boldsymbol{\upsilon}) = -(M+a)\ln\!\left( b + \mathbf{s}^H\mathbf{G}^{-1}\mathbf{s} \right) - \ln\left| \mathbf{G} \right| \tag{24} $$
Differentiating $L(\boldsymbol{\upsilon})$ with respect to $\upsilon_q$ and setting the result to zero, we obtain
$$ \upsilon_q = \frac{(M+a)\left| \bar{d}_q \right|^2}{b + \mathbf{s}^H\mathbf{G}^{-1}\mathbf{s}} + \bar{\sigma}_{q,q} \tag{25} $$
where $\bar{d}_q$ is the $q$th component of $\bar{\mathbf{d}}$ and $\bar{\sigma}_{q,q}$ is the $q$th diagonal component of $\bar{\boldsymbol{\Sigma}}_\rho$.
Equation (25) shows that the hyperparameter $\boldsymbol{\upsilon}$ is a function of the mean $\bar{\mathbf{d}}$ and covariance $\bar{\boldsymbol{\Sigma}}_\rho$, while Equation (22) shows that $\bar{\mathbf{d}}$ and $\bar{\boldsymbol{\Sigma}}_\rho$ are functions of $\boldsymbol{\upsilon}$. Therefore, once the initial values of $\boldsymbol{\upsilon}$, $a$, and $b$ are given, an iterative procedure can alternate between (22) and (25) to obtain the estimates of $\bar{\mathbf{d}}$, $\bar{\boldsymbol{\Sigma}}_\rho$, and $\boldsymbol{\upsilon}$. The scattering coefficient estimate is then $\hat{\boldsymbol{\rho}} = \bar{\mathbf{d}}$. The SBL procedure described above is summarized in Algorithm 1.
Algorithm 1: Processing procedure of sparse Bayesian learning.
Input:
  2D image vector $\mathbf{s}$, matrix $\mathbf{W}$;
  noise parameters $a$, $b$;
  initial hyperparameter $\boldsymbol{\upsilon}$;
  stop value $\delta$;
Output:
 mean $\bar{\mathbf{d}}$, covariance $\bar{\boldsymbol{\Sigma}}_\rho$;
 1: BEGIN
 2: initialize step $i = 1$;
 3: do
 4:  Compute the estimated mean $\bar{\mathbf{d}}^{(i)}$ and covariance $\bar{\boldsymbol{\Sigma}}_\rho^{(i)}$ using Equation (22);
 5:  Update $\bar{\mathbf{d}} = \bar{\mathbf{d}}^{(i)}$, $\bar{\boldsymbol{\Sigma}}_\rho = \bar{\boldsymbol{\Sigma}}_\rho^{(i)}$;
 6:  Calculate the hyperparameters $\upsilon_q^{(i)}$, $q = 1, 2, \ldots, Q$, using Equation (25);
 7:  $i = i + 1$;
 8: while
 9:  $\left| L(\boldsymbol{\upsilon}^{(i)}) - L(\boldsymbol{\upsilon}^{(i-1)}) \right| \big/ \left| L(\boldsymbol{\upsilon}^{(i-1)}) \right| > \delta$;
 10: end do
 11: return $\bar{\mathbf{d}}$, $\bar{\boldsymbol{\Sigma}}_\rho$.
 12: END
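A compact NumPy sketch of Algorithm 1 is given below, using the reconstructed update of Equation (25) and the cost function of Equation (24) for the stopping test. The Hermitian transpose is used in place of the plain transpose because the 2D image data are complex, and the default values of $a$, $b$, $\delta$, and the iteration cap are assumptions of this sketch.

```python
import numpy as np

def sbl_reconstruct(s, W, a=1e-6, b=1e-6, delta=1e-4, max_iter=200):
    """Sparse Bayesian learning for one range-azimuth cell: returns (mean, covariance)."""
    M, Q = W.shape
    upsilon = np.ones(Q)                                  # initial hyperparameters
    L_old = None
    for _ in range(max_iter):
        # Equation (22): posterior mean and covariance for the current hyperparameters.
        Sigma = np.linalg.inv(W.conj().T @ W + np.diag(1.0 / upsilon))
        d = Sigma @ W.conj().T @ s

        # Equation (25): hyperparameter update (fixed-point form).
        G = np.eye(M) + W @ np.diag(upsilon) @ W.conj().T
        quad = np.real(s.conj() @ np.linalg.solve(G, s))
        upsilon = (M + a) * np.abs(d)**2 / (b + quad) + np.real(np.diag(Sigma))

        # Equation (24): cost function used in the relative-change stopping rule of Algorithm 1.
        L_new = -(M + a) * np.log(b + quad) - np.linalg.slogdet(G)[1]
        if L_old is not None and abs(L_new - L_old) / abs(L_old) <= delta:
            break
        L_old = L_new
    return d, Sigma
```

With the `s` and `W` from the previous sketch, `d_hat, _ = sbl_reconstruct(s, W)` recovers a two-peak height profile at the assumed grid positions.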
Finally, combining the 2D imaging procedure with the reconstruction of the scattering coefficient along the height direction, the overall flowchart for 3D imaging is shown in Figure 4.
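The overall per-pixel flow of Figure 4 then amounts to looping the SBL step over every range-azimuth cell of the stacked per-sensor 2D images. In the sketch below, `images_2d` is assumed to be an (M, Nx, Ny) complex array produced by the phase-preserving step, and `sbl_reconstruct` is the function sketched above.

```python
import numpy as np

def reconstruct_volume(images_2d, W):
    """Stack of M per-sensor 2D complex images -> 3D reflectivity volume (Nx, Ny, Q)."""
    M, Nx, Ny = images_2d.shape
    Q = W.shape[1]
    volume = np.zeros((Nx, Ny, Q), dtype=complex)
    for g in range(Nx):
        for l in range(Ny):
            s_gl = images_2d[:, g, l]            # measurement vector of cell (g, l), cf. Figure 2
            d_gl, _ = sbl_reconstruct(s_gl, W)   # height profile estimate for this cell
            volume[g, l, :] = d_gl
    return volume
```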

3. Simulations and Experiments

3.1. Simulation Results and Analysis

In this section, numerical simulations are presented to verify the effectiveness of the proposed imaging model and sparse recovery method. In the simulations, the radar center frequency is set to 340 GHz, which lies in an atmospheric transmission window. A benefit of the high center frequency is that a large bandwidth can be achieved for terahertz radar; it is set to 28 GHz here. The other main system parameters of the simulation are listed in Table 1.
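As a quick check of what the Table 1 parameters imply (standard relations, not results from the paper): the wavelength $\lambda = c/f_0$ at 340 GHz is below 1 mm, which is why half-wavelength array spacing is impractical, and the 28 GHz bandwidth gives a slant-range resolution of roughly $c/(2B) \approx 5.4$ mm.

```python
c = 3e8
f0, B = 340e9, 28e9

wavelength = c / f0          # ~0.88 mm at 340 GHz
range_res = c / (2 * B)      # ~5.4 mm slant-range resolution from the 28 GHz bandwidth

print(f"wavelength          : {wavelength * 1e3:.2f} mm")
print(f"half-wavelength gap : {wavelength / 2 * 1e3:.2f} mm")
print(f"range resolution    : {range_res * 1e3:.2f} mm")
```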
The simulated imaging scene is shown in Figure 5a, which contains 10 point scattering centers in three different planes; points of the same color lie in the same plane. The 2D imaging method is first applied to each equivalent sensor to obtain its 2D image. Then, the sparse reconstruction algorithm is adopted to recover the scattering coefficients along the height direction for each range-azimuth cell. The final 3D imaging result using the sparse recovery method is presented in Figure 5b. Compared with Figure 5a, the point scattering centers are well reconstructed in 3D space. The 3D imaging result using the MF method with the sparse linear sensor array is presented in Figure 5c, where points in the same range-azimuth cell cannot be distinguished along the height direction. Moreover, the imaging result using the MF method with a dense sensor array that satisfies the half-wavelength spacing condition is given in Figure 5d; the points are recovered with acceptable sidelobes.
To further examine the imaging results, the height profiles at the range-azimuth cell (0,0) are provided in Figure 6, where the red, black, and blue lines correspond to sparse recovery with the sparse sensor array, the MF method with the sparse sensor array, and the MF method with the dense sensor array, respectively. In the original simulated scene, there are two points in cell (0,0) at different heights. Figure 6 shows that the blue line has two main lobes at z = −0.05 m and z = 0.05 m, while the red line contains two evident peaks that coincide with these main lobes. Note that sparse recovery is a super-resolution imaging method, so its height profile has no sidelobes. In contrast, the result of the MF method with the sparse sensor array is cluttered and has no prominent peaks. Therefore, the simulations demonstrate that sparse recovery can reconstruct the height locations of scattering centers with a sparse linear sensor array.

3.2. Experiment Results and Analysis

An equivalent experiment was conducted with a 0.34 THz radar to validate the proposed imaging model and 3D imaging method. The system parameters are the same as those of the simulation, listed in Table 1. The experimental scenario is presented in Figure 7. In Figure 7a, the radar is fixed while the targets are placed on a rotating platform, which is equivalent to the radar moving along a circular trajectory while the targets remain stationary. Since our radar system has no sensor array, the platform was raised step by step to produce the relative height change between the sensor and the targets. The platform is precisely controlled so that it starts rotating from the same initial angle at each height step. Figure 7b is an optical photo of the terahertz radar with a single pair of transmitting-receiving antennas. More detailed descriptions of this radar system, such as the schematic diagram and front-end setup, are given in [8,9]. Figure 7c shows an optical photo of the targets, which comprise five metallic balls in three different planes.
The experimental results are shown in Figure 8, which contains three 2D slices corresponding to the planes at heights z = −0.02 m, 0 m, and 0.02 m, respectively. The yellow circles in each figure mark the locations of the metallic balls. Comparing Figure 7c with Figure 8a, only the two balls in Figure 7c whose relative heights correspond to z = −0.02 m appear distinctly in Figure 8a. The same behavior is observed in Figure 8b,c. Note that the regions in the red squares are noise generated by the crabsticks. Therefore, the experimental results demonstrate the 3D imaging ability of the proposed imaging model and sparse recovery algorithm.

4. Conclusions

For the application of security detection of the human body in public areas, a 3D imaging model that combines CSAR with a sparse linear sensor array was established in this study. Based on the CSAR model, 360° all-aspect 2D high-resolution imaging was implemented using a phase-preserving algorithm. Because of the short wavelength of terahertz waves, a sparse linear sensor array was used to provide resolution in the height direction. With the obtained complex 2D images, the sparse recovery method, an SBL-based algorithm, was applied to reconstruct the scattering coefficients along the height direction. After the reconstructions of all the range-azimuth cells were completed, the final 3D image was obtained. Numerical simulations were performed to verify the high-resolution imaging ability of the proposed imaging model and algorithm. Furthermore, equivalent experiments were carried out with a 0.34 THz radar, and the results demonstrate that terahertz radar can be applied to high-resolution imaging, indicating that terahertz radar imaging is suitable for security detection of humans in public areas. However, the imaging process described in this paper relies on the isotropic-scattering assumption, which may not always hold in practice, and imaging methods for aspect-dependent targets require further research.

Author Contributions

Conceptualization, J.H. and J.L.; Methodology, J.H.; Software, J.H.; Validation, J.H., J.L. and Y.P.; Formal Analysis, J.H.; Investigation, J.H.; Resources, J.L.; Data Curation, J.L.; Writing-Original Draft Preparation, J.H.; Writing-Review & Editing, J.L.; Visualization, J.H.; Supervision, J.L.; Project Administration, Y.P.; Funding Acquisition, Y.P.

Funding

This work was supported in part by the Project 2011 for Collaborative Innovation Center of Information Sensing and Understanding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Agarwal, S.; Kumar, B.; Singh, D. Non-invasive concealed weapon detection and identification using V band millimeter wave imaging radar system. In Proceedings of the 2015 National Conference on Recent Advances in Electronics & Computer Engineering (RAECE), Roorkee, India, 13–15 February 2016; pp. 258–262.
2. Sheen, D.M.; McMakin, D.L.; Hall, T.E. Active millimeter-wave and sub-millimeter-wave imaging for security applications. In Proceedings of the 2011 International Conference on Infrared, Millimeter, and Terahertz Waves, Houston, TX, USA, 2–7 October 2011; Volume 47, pp. 1–3.
3. Zhang, R.; Cao, S. 3D Imaging Millimeter Wave Circular Synthetic Aperture Radar. Sensors 2017, 17, 1419.
4. Bao, Q.; Jiang, C.; Lin, Y.; Tan, W.; Wang, Z.; Hong, W. Measurement Matrix Optimization and Mismatch Problem Compensation for DLSLA 3-D SAR Cross-Track Reconstruction. Sensors 2016, 16, 1333.
5. Baccouche, B.; Agostini, P.; Mohammadzadeh, S.; Kahl, M.; Weisenstein, C.; Jonuscheit, J. Three-dimensional terahertz imaging with sparse multistatic line arrays. IEEE J. Sel. Top. Quantum Electron. 2017, 23, 1–11.
6. Yang, X.; Pi, Y.; Liu, T. Three-Dimensional Imaging of Spinning Space Debris Based on the Broadband Radar. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1027–1031.
7. Sasaki, Y.; Shang, F.; Kidera, S.; Kirimoto, T.; Saho, K.; Sato, T. Three-dimensional imaging method incorporating range points migration and doppler velocity estimation for UWB millimeter-wave radar. IEEE Geosci. Remote Sens. Lett. 2017, 14, 122–126.
8. Zhang, B.; Pi, Y.; Li, J. Terahertz imaging radar with inverse aperture synthesis techniques: System structure, signal processing, and experiment results. IEEE Sens. J. 2015, 15, 290–299.
9. Yang, X.; Pi, Y.; Liu, T.; Wang, H. Three-dimensional imaging of space debris with space-based terahertz radar. IEEE Sens. J. 2017, 18, 1063–1072.
10. Yang, Q.; Deng, B.; Wang, H.; Qin, Y.; Zhang, Y. Envelope Correction of Micro-Motion Targets in the Terahertz ISAR Imaging. Sensors 2018, 18, 228.
11. Soumekh, M. Reconnaissance with slant plane circular SAR imaging. IEEE Trans. Image Process. 1996, 5, 1252–1265.
12. Dallinger, A.; Schelkshorn, S.; Detlefsen, J. Efficient ω-k algorithm for circular SAR and cylindrical reconstruction areas. Adv. Radio Sci. 2006, 4, 85–91.
13. Kou, L.; Wang, X.; Chong, J.; Xiang, M.; Zhu, M. Circular SAR processing using an improved omega-k type algorithm. J. Syst. Eng. Electron. 2010, 21, 572–579.
14. Jiao, Z.; Ding, C.; Liang, X.; Chen, L.; Zhang, F. Sparse Bayesian Learning Based Three-Dimensional Imaging Algorithm for Off-Grid Air Targets in MIMO Radar Array. Remote Sens. 2018, 10, 369.
15. Ertin, E.; Moses, R.L.; Potter, L.C. Interferometric methods for three-dimensional target reconstruction with multipass circular SAR. IET Radar Sonar Navig. 2010, 4, 464–473.
16. Bao, Q.; Lin, Y.; Hong, W.; Shen, W.; Zhao, Y.; Peng, X. Holographic SAR tomography image reconstruction by combination of adaptive imaging and sparse Bayesian inference. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1248–1252.
17. Tian, H.; Li, D. Sparse flight array SAR downward-looking 3D imaging based on compressed sensing. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1395–1399.
18. Hu, C.; Wang, J.; Tian, W.; Zeng, T.; Wang, R. Design and Imaging of Ground-Based Multiple-Input Multiple-Output Synthetic Aperture Radar (MIMO SAR) with Non-Collinear Arrays. Sensors 2017, 17, 598.
19. Sun, C.; Wang, B.; Fang, Y.; Song, Z.; Wang, S. Multichannel and Wide-Angle SAR Imaging Based on Compressed Sensing. Sensors 2017, 17, 295.
20. Hu, X.; Tong, N.; Zhang, Y.; Hu, G. Multiple-input–multiple-output radar super-resolution three-dimensional imaging based on a dimension-reduction compressive sensing. IET Radar Sonar Navig. 2016, 10, 757–764.
21. He, X.; Tong, N.; Hu, X. Superresolution radar imaging based on fast inverse-free sparse Bayesian learning for multiple measurement vectors. J. Appl. Remote Sens. 2018, 12, 015013.
22. Al-Shoukairi, M.; Schniter, P.; Rao, B.D. A GAMP-based low complexity sparse Bayesian learning algorithm. IEEE Trans. Signal Process. 2018, 66, 294–308.
23. Tang, V.H.; Bouzerdoum, A.; Phung, S.L.; Tivive, F.H.C. A sparse Bayesian learning approach for through-wall radar imaging of stationary targets. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 2485–2501.
24. Xu, G.; Yang, L.; Bi, G.; Xing, M. Enhanced ISAR imaging and motion estimation with parametric and dynamic sparse Bayesian learning. IEEE Trans. Comput. Imaging 2017, 3, 940–952.
25. Majewski, R.M.; Carrara, R.; Goodman, R. Spotlight Synthetic Aperture Radar: Signal Processing Algorithms; Artech House: London, UK, 1995.
Figure 1. Imaging geometry of circular synthetic aperture radar (CSAR) with sparse linear sensor array.
Figure 2. Data corresponding to the same pixel (g, l) are extracted and rearranged in a vector.
Figure 3. Graphical representation of Equation (13). The blank squares indicate zeros, while the colored ones indicate non-zero values.
Figure 4. 3D imaging flowchart.
Figure 5. Imaging scene and 3D imaging results. Dotted lines link the points in the same plane. (a) Original imaging scene; (b) imaging result using sparse recovery with sparse sensor array; (c) imaging result using MF with sparse sensor array; (d) imaging result using MF with dense sensor array.
Figure 6. Height profile of the range-azimuth cell (0,0).
Figure 7. Experimental scenario. (a) Diagram of imaging scene; (b) photo of the terahertz radar system; (c) photo of targets.
Figure 8. 2D imaging slices at heights (a) z = −0.02 m; (b) z = 0 m; (c) z = 0.02 m. The yellow circles mark the locations of the balls in each slice, while the regions in red squares are noise generated by the crabsticks.
Table 1. Main system parameters.

Parameter                                  Value
Carrier frequency                          340 GHz
Bandwidth                                  28 GHz
Pulse duration                             1 ms
Radar radius                               5 m
Number of equivalent receiving antennas    9
Aperture length in height direction        0.2 m
