Article

Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm

1 School of Software, Xidian University, Xi’an 710071, China
2 National Laboratory of Radar Signal Processing, Collaborative Innovation Center of Information Sensing and Understanding, Xidian University, Xi’an 710071, China
3 Beijing Institute of Radio Measurement, The Second Academy of China Aerospace Science and Industry Corporation (CASIC), Beijing 100854, China
* Authors to whom correspondence should be addressed.
Sensors 2017, 17(11), 2454; https://doi.org/10.3390/s17112454
Submission received: 20 August 2017 / Revised: 12 October 2017 / Accepted: 24 October 2017 / Published: 26 October 2017

Abstract

Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, the robustness of image domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), declines when strong motion errors are involved in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part need to be extended, which inevitably degrades efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA disposes of the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse resolution image via the back-projection integral. Then, the sub-aperture images are directly fused together in the azimuth wavenumber domain to obtain a full resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By discarding the image domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed method.

1. Introduction

Atmospheric turbulence disturbs the ideal trajectory of the aircraft during the whole flight, which causes not only serious blurring but also geometric distortion of synthetic aperture radar (SAR) [1,2,3] imagery. Therefore, motion compensation (MOCO) [4,5,6] is an essential processing procedure for airborne SAR imaging. For the efficiency and accuracy of MOCO, a high-precision inertial navigation system (INS) is commonly mounted on the platform to record the real-time velocity and position information, so the MOCO accuracy relies prominently on the MOCO strategy. Effectively compensating the residual motion error is still a problem worth studying, especially for millimeter-wave band SAR imagery.
As analyzed in [7], the main difficulty of MOCO stems from the space-variance of the motion error. The conventional two-step MOCO method [6] was proposed to compensate the range-variant motion error, and it is widely applied by embedding MOCO into SAR algorithms [8]. However, the two-step MOCO method neglects the residual azimuth-variant motion errors, so the focusing performance degrades for SAR imagery with a wide beam and high resolution, especially when the atmospheric turbulence is severe. To solve this problem, several effective algorithms [9,10,11,12,13,14,15] have been developed in the literature, which compensate for the residual azimuth-variant motion error in different ways. The sub-aperture topography- and aperture-dependent algorithm (SATA) [10,11] and the precise topography- and aperture-dependent motion compensation algorithm (PTA) [13] are typical examples. Sliding sub-aperture processing is an essential tool in both azimuth-variant MOCO strategies: the residual azimuth-variant motion error is calculated relative to the center of each sub-aperture, and a set of spatial filters is then established to remove the residual motion errors. In SATA, the time-varying residual error within an azimuth-time sub-aperture is approximated as a constant, and the residual motion errors are corrected in the Doppler domain under a Doppler-to-angle map. On the other hand, PTA corrects the residual spatially variant motion errors in the image domain at the price of some efficiency loss. In PTA, the coarse-focused image is divided into overlapping sub-blocks, and the residual motion error relative to every block center is compensated for by a post-filtering strategy. Because of its high accuracy, PTA is one of the most widely used algorithms for azimuth-variant MOCO in real airborne SAR imagery. In general, PTA performs well in normal conditions, i.e., when the motion errors are not very severe. However, a significant problem arises in the case of strong motion errors, such as SAR imaging on unmanned aerial vehicle (UAV) [16,17] platforms or under serious atmospheric turbulence. In these cases, PTA evidently exposes its shortcomings. The image domain post-filtering of PTA needs to segment the image into sub-blocks, and because this strategy compensates the azimuth-variant motion error block by block, it confronts three main problems when dealing with severe motion errors. Firstly, the energy diffusion of point targets is more severe in the image domain, so the sub-block size needs to be extended to ensure that the whole diffused energy of each target is included in at least one of the sub-blocks. Otherwise, the target cannot be fully refocused, since each sub-block holds only part of the defocused energy, and ghost shadows appear after block stitching. Secondly, image domain post-filtering assumes that the azimuth-variant motion errors within one azimuth block are the same as those of the block center point; this hypothesis does not hold for strong motion errors, so the robustness of PTA evidently decreases. A solution is to extend the overlapping part between adjacent sub-blocks. Thirdly, with the significant extension of sub-block size and overlapping ratio between neighboring sub-blocks, the post-filtering strategy of PTA faces an increasing calculation burden, which becomes approximately as complex as the conventional back-projection algorithm.
Inevitably, PTA has to strike a balance between reduced image quality and increased calculation. The motivation behind the current study is the pressing need for an imaging method with both high precision and high efficiency for practical applications with strong motion errors.
Aiming to solve the PTA problems mentioned above, a frequency domain fast back-projection algorithm (FDFBPA) is proposed in this paper. Instead of post-filtering sub-blocks in the image domain, FDFBPA compensates the residual azimuth-variant motion errors by precisely calculating the azimuth matched filtering (AMF) [15] function and using a fast back-projection process in the azimuth wavenumber domain. FDFBPA can be thought of as an extension of our previous work [18,19] to achieve effective spatially variant MOCO. In FDFBPA, a precise AMF function with motion errors is derived. The spectrum is uniformly partitioned in the azimuth wavenumber domain and each sub-aperture is back-projected to obtain a set of coarse resolution images. Then, the sub-images are fused in the azimuth wavenumber domain in order to achieve a full resolution image. Moreover, by introducing a linear Doppler approximation in the AMF, the sub-aperture back-projection integral is implemented by the fast chirp-Z transform (CZT) [20,21], yielding a promising efficiency enhancement for the algorithm. Compared with PTA, FDFBPA promises fully focused images with high efficiency and robustness, and is suitable for real airborne SAR imagery.
This paper is organized as follows: Section 2 gives the geometry model, derives the precise expression of the signal with residual azimuth-variant motion error in the azimuth wavenumber domain, and illustrates the shortcoming of post-filtering algorithms; Section 3 illustrates the principle of FDFBPA in detail; Section 4 shows a flowchart of the proposed FDFBPA and analyzes the computational burden; Section 5 gives extensive experimental results with both simulated and real Ka-band airborne SAR data; conclusions are drawn in the last section.

2. Signal Model and Conventional Post-Filtering Algorithms

The real SAR imaging geometry is given in Figure 1. The geometry is defined in a rectangular coordinate system OXYZ, where O is the origin of coordinates and X, Y and Z indicate the along-track direction, cross-track direction and height direction, respectively. In the ideal case, the radar platform travels along a straight line with a constant velocity V, shown as the solid line along the track. Because of atmospheric turbulence, the real trajectory is a curve, shown as the dotted line across the track. During the data acquisition process, the radar beam illuminates the ground with a squint angle φ. Point C denotes the scene center with along-track coordinate x_0, and the distance between O and C is denoted r. Symbol P stands for a target located on the scene center line OC, which is parallel to the trajectory. The distance between target P and scene center C is given by x. The echo from P is expressed as [22]:
s(\tau, X) = \varepsilon_p \,\mathrm{rect}\!\left(\frac{\tau - \Delta t}{T_p}\right) \mathrm{rect}\!\left(\frac{X - x_0 - x}{L}\right) \exp\!\left[-j2\pi f_c \Delta t + j\pi\alpha(\tau - \Delta t)^2\right]  (1)
where ε_p corresponds to the complex-valued scattering amplitude of the point target, τ denotes the range fast time, T_p denotes the pulse duration, L denotes the synthetic aperture length, f_c is the center frequency, α is the signal chirp rate, Δt = 2R_n/c stands for the round-trip propagation time between target P and the radar, and c denotes the speed of light. Symbol rect(·) denotes the rectangular window function and X represents the along-track position of the platform. Ignoring the effects of motion error, the slant range R_n is expressed as:
R_n(X, x, r) = \sqrt{(r\cos\varphi)^2 + (X - x_0 - x)^2}  (2)
Actually, the real slant range deviates from the ideal R_n because of non-negligible motion errors, and is shown as R̃_n in Figure 1. The deviation-blurred slant range R̃_n contains four main components: the ideal R_n, the range- and azimuth-invariant motion error Δr, the range-variant motion error Δr_b and the azimuth-variant motion error Δr_ε. In order to compensate for motion errors during the imaging process, the two-step MOCO [6] method is widely used. It achieves range-dependent MOCO on the raw data, where the first step is bulk MOCO to compensate Δr, and the second step is range-variant MOCO to compensate Δr_b. However, the remaining azimuth-variant motion error is a considerable factor, especially for SAR systems with a wide aperture in azimuth. The azimuth-variant motion error is not difficult to understand. In Figure 1, R_n and R̃_n are the ideal and real slant ranges from the aircraft to point P, while R′_n and R̃′_n are the ideal and real slant ranges from the aircraft to the scene center C. The two-step MOCO method compensates the whole scene with the motion error of the scene center, ΔR′ = R̃′_n − R′_n. However, with respect to point P, the actual motion error is ΔR = R̃_n − R_n. It is obvious that ΔR ≠ ΔR′ because of the different projection directions of the aircraft deviation, which is the cause of azimuth-variant motion errors. The azimuth-variant motion error Δr_ε is calculated by Δr_ε = ΔR − ΔR′.
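To make the origin of Δr_ε concrete, the following minimal numeric sketch projects one hypothetical antenna deviation onto the directions of the scene center C and of a displaced point P; the difference of the two slant-range errors is the azimuth-variant residual left after bulk MOCO. All geometry numbers are invented example values, not parameters of the experiments reported later.

```python
# Illustrative sketch (not the authors' code): the same platform deviation produces
# different slant-range errors for the scene center C and a displaced point P,
# leaving the azimuth-variant residual dr_eps after bulk MOCO.
import numpy as np

r = 5000.0                                             # slant range to scene center (m)
antenna_ideal = np.array([0.0, 0.0, 3000.0])           # ideal antenna position (x, y, z)
deviation = np.array([0.0, 0.6, -0.4])                 # hypothetical trajectory deviation (m)
antenna_real = antenna_ideal + deviation

ground = np.sqrt(r**2 - antenna_ideal[2]**2)           # ground range of the scene center line
C = np.array([0.0, ground, 0.0])                       # scene center
P = np.array([150.0, ground, 0.0])                     # target displaced 150 m in azimuth

def rng(a, t):                                         # Euclidean slant range
    return np.linalg.norm(a - t)

dR_center = rng(antenna_real, C) - rng(antenna_ideal, C)   # error removed by bulk MOCO
dR_target = rng(antenna_real, P) - rng(antenna_ideal, P)   # true error for point P
dr_eps = dR_target - dR_center                              # residual azimuth-variant error
print(f"dR(center) = {dR_center:.4f} m, dR(P) = {dR_target:.4f} m, residual = {dr_eps:.5f} m")
```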
Ignoring the envelope terms of Equation (1), after processing by range cell migration correction (RCMC) [3] and two-step MOCO, the range-compressed signal is expressed as [14]:
s_t(X, x, r) = \exp\{-jK_{rc}[R_n(X, x, r) + \Delta r_\varepsilon(X, x, r)]\}  (3)
where K_rc = 4π/λ, λ denotes the wavelength, and Δr_ε is the residual azimuth-variant motion error. The detailed derivation procedures of RCMC, two-step MOCO and range compression from (1) to (3) are omitted here; they are illustrated in [3]. The azimuth wavenumber domain signal is obtained through an azimuth Fourier transform of (3), which is given by:
S_f(K_x, x, r) = \int s_t(X, x, r)\exp(-jK_x X)\,dX = \int \exp\{-jK_{rc}[R_n(X, x, r) + \Delta r_\varepsilon(X, x, r)] - jK_x X\}\,dX  (4)
where K_x denotes the azimuth wavenumber, with −K_a/2 ≤ K_x ≤ K_a/2, and K_a denotes the extent of the azimuth wavenumber spectrum.
To simplify the derivation, the principle of the stationary phase (POSP) [3] is used to approximately solve the integration in Equation (4). During the calculation of the stationary phase point of (4), it should be noted that, because the ideal relationship between Doppler frequency and the instantaneous radar look angle is corrupted by the residual azimuth-variant motion errors, the azimuth wavenumber spectrum is also distorted. In order to acquire the precise time–frequency relationship, the residual azimuth-variant motion errors need to be taken into consideration. Therefore, the precise stationary phase point X* is solved from the following equation [15]:
\frac{\partial(R_n + \Delta r_\varepsilon)}{\partial X} + \frac{K_x}{K_{rc}} = 0  (5)
For the purpose of deducing an explicit expression of X*, R_n in (2) is expanded into a fourth-order polynomial as follows:
R_n(X, x, r) \approx r - \frac{x_0}{r}(X - x) + \frac{\cos^2\varphi}{2r}(X - x)^2 + \frac{\cos^2\varphi\sin\varphi}{2r^2}(X - x)^3 - \frac{\cos^2\varphi(1 - 4\sin^2\varphi)}{8r^3}(X - x)^4  (6)
Due to the fact that low-order components usually dominate the residual motion error function, the azimuth-variant motion error Δ r ε is expressed by the Taylor expansion as follows:
\Delta r_\varepsilon(X, x) \approx a_0 + a_1(X - x) + a_2(X - x)^2 + a_3(X - x)^3 + a_4(X - x)^4  (7)
where a_0–a_4 represent the polynomial fitting parameters. It is important to mention that Δr_ε is related to the range r, and that a_0–a_4 are also range dependent; the range variable r is omitted here in order to simplify the expression. According to previous research in [15], the method of series reversion (MSR) [23] can be utilized to obtain a precise expression of the stationary phase point X*, which is given by:
X^* = p_1 y + p_2 y^2 + p_3 y^3 + x  (8)
where
p_1 = \frac{\cos^2\varphi}{r} + 2a_2  (9)

p_2 = \frac{3\cos^2\varphi\sin\varphi}{2r^2} + 3a_3  (10)

p_3 = \frac{\cos^2\varphi(1 - 4\sin^2\varphi)}{2r^3} + 4a_4  (11)

y = -\frac{K_x}{K_{rc}} - a_1 + \frac{x_0}{r}  (12)
The stationary phase point X * takes azimuth-variant motion errors into consideration, and thus, a precise expression in the azimuth wavenumber domain is obtained as follows:
S_f(K_x, x, r) = \exp\{-jK_{rc}[R_n(X^*, x, r) + \Delta r_\varepsilon(X^*)] - jK_x X^*\}  (13)
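The coefficients above come from the method of series reversion; as an independent cross-check, the stationary phase condition (5) can also be solved numerically for a given K_x. The sketch below does this with a generic root finder; the wavelength, geometry and fitting coefficients a_0–a_4 are invented example values, not data from this paper.

```python
# Hypothetical numerical cross-check of the stationary phase point X*: instead of
# the closed-form series-reversion result in (8)-(12), Eq. (5) is solved directly
# by root finding for one azimuth wavenumber K_x.
import numpy as np
from scipy.optimize import brentq

lam = 0.0086                      # wavelength (m), roughly Ka-band
K_rc = 4 * np.pi / lam
r, x0, x = 5000.0, 0.0, 30.0      # scene-center range, its azimuth coordinate, target offset
phi = np.arcsin(x0 / r)           # squint angle implied by x0 (broadside here)
a = np.array([0.02, 1e-4, 3e-7, 0.0, 0.0])   # made-up a0..a4 of the residual error fit

def R_n(X):                       # ideal slant range, Eq. (2)
    return np.hypot(r * np.cos(phi), X - x0 - x)

def dr_eps(X):                    # residual azimuth-variant error, Eq. (7)
    return np.polyval(a[::-1], X - x)

def dphase_dX(X, K_x):            # left-hand side of Eq. (5), via a numerical derivative
    h = 1e-3
    dR = (R_n(X + h) + dr_eps(X + h) - R_n(X - h) - dr_eps(X - h)) / (2 * h)
    return dR + K_x / K_rc

K_x = 20.0                        # one azimuth wavenumber sample (rad/m)
X_star = brentq(dphase_dX, x - 500.0, x + 500.0, args=(K_x,))
print(f"stationary phase point X* = {X_star:.3f} m for K_x = {K_x} rad/m")
```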
According to the analyses shown above, one can note that the azimuth-shift invariance of the azimuth wavenumber spectrum is destroyed in the presence of the residual azimuth-variant phase error. The conventional azimuth matched filtering function would fail to focus the image completely in this case. As developed in [13], PTA, which compensates for the residual azimuth-variant phase error using a post-filtering strategy, is an effective approach to deal with this problem. Its focusing flow is commonly divided into two stages: a coarse focusing stage processed by conventional azimuth matched filtering, and an image refocusing stage achieved by sliding window compensation. However, in order to ensure the effectiveness of the post-filtering strategy, the window length and overlapping range must be carefully selected. To illustrate these constraints, explanatory drawings are shown in Figure 2. Figure 2a gives the refocusing condition on the window length with respect to a single point. Points A, B and C are three points defocused by the residual motion errors, the energy diverging width is denoted by W_E, and L_W denotes the window length. It is clearly shown that point A is able to be fully refocused because its blurred, spread energy is entirely contained within the block. However, this does not hold for points B and C, whose energy is truncated at the boundaries of neighboring blocks. Besides residual blur, the truncation would also lead to ghosts in the image. Based on this analysis, we give the sub-image length constraint for PTA as follows:
L_W \geq W_E  (14)
For the purpose of analyzing the relationship between the window length and the overlapping range, Figure 2b illustrates a failure case of the sliding parameters for a point array, where L_S denotes the sliding range and L_O = L_W − L_S denotes the overlapped length. In this case, the overlapped length between adjacent blocks is not long enough: points A and C both satisfy the refocusing condition, while point B is not fully contained in either of the adjacent windows, so the refocused result of point B would be broken. In order to address this issue, the size of a block should be extended accordingly, which gives the successful case of sliding parameters for the point array in Figure 2c. In Figure 2c, the overlapped range between the adjacent blocks increases, indicating that point B can be focused successfully. In general, the relationship between block size and overlapping length is regulated as follows:
L_S \leq L_W - W_E  (15)
It is important to mention that Equations (14) and (15) represent the limiting conditions on block size and overlapping parameters. A significant problem arises in the case of strong motion errors, such as SAR imaging on UAV platforms or under serious atmospheric turbulence, where the energy of a scatterer is seriously spread in the azimuth direction. In these cases, PTA evidently exposes its shortcoming: the size of the sub-blocks in PTA post-filtering has to be extended dramatically, while the overlapping range between neighboring blocks needs to be raised in order to satisfy the refocusing condition, which causes a serious computational load. Therefore, PTA has to strike a balance between imaging quality reduction and an inevitable increase in calculations. The motivation behind the current study is the pressing need for an imaging method with both high precision and high efficiency for practical applications with strong motion errors.
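The two constraints (14) and (15) are easy to test programmatically; the following minimal helper, with arbitrary example numbers, simply evaluates them for a candidate window length and sliding step.

```python
# Minimal sketch of the PTA sliding-window constraints (14) and (15): given an
# energy-spreading width W_E, check whether a chosen window length L_W and sliding
# step L_S allow every point to be fully contained in at least one block.
# Numbers are arbitrary examples in azimuth samples.
def pta_window_ok(L_W: int, L_S: int, W_E: int) -> bool:
    """Return True if the choice satisfies L_W >= W_E and L_S <= L_W - W_E."""
    return L_W >= W_E and L_S <= L_W - W_E

print(pta_window_ok(L_W=64, L_S=16, W_E=32))   # True: 64 >= 32 and 16 <= 32
print(pta_window_ok(L_W=64, L_S=48, W_E=32))   # False: step too long, points like B get split
```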

3. Frequency Domain Fast Back-Projection Algorithm (FDFBPA)

As illustrated in Section 2, PTA removes the residual azimuth-variant motion errors by the sliding window post-filtering strategy. However, data segmentation in the image domain is not a wise choice when the residual motion error is severe: image domain blocking faces the dilemma of balancing imaging quality and efficiency, which makes this strategy inapplicable in practical situations. In this section, we provide a method for solving this problem.
A. Precise Frequency Domain Back-Projection Algorithm (FDBPA)
In order to ensure compensation precision, a point-to-point strategy is preferred over the block-to-block post-filtering compensation method. FDBPA is more robust in dealing with strong motion errors, and can be thought of as a precise point-to-point imaging method. This method is based on the precise wavenumber spectrum expression deduced in Equation (13), and each point in the imaging grid is well focused by adopting a precise back-projection integral. We briefly introduce the principle of FDBPA here. Before the imaging process, a full resolution imaging grid is constructed for FDBPA. Consider one point P with coordinate (x_p, r); for P, the echo in the azimuth wavenumber domain is expressed as S_f(K_x, x_p, r), as shown in (13), so the precise AMF phase Φ_m(K_x, x_p, r) is calculated by:
\Phi_m(K_x, x_p, r) = K_{rc}[R_n(X^*, x_p, r) + \Delta r_\varepsilon(X^*)] + K_x X^*  (16)
Coherent accumulation of point P is realized by the back-projection integration in the wavenumber domain as follows:
S(x_p, r) = \int_{-\Delta K_x/2}^{\Delta K_x/2} S_f(K_x, x_p, r)\exp[j\Phi_m(K_x, x_p, r)]\,dK_x  (17)
where ΔK_x denotes the azimuth wavenumber spectrum width. For the other grid points, the corresponding AMF must be calculated so that the back-projection accumulation can be carried out point by point. The FDBPA process avoids image domain post-filtering, and provides high robustness even with serious motion errors. However, since FDBPA achieves coherent accumulation in a point-by-point manner, its efficiency is low. In terms of this issue, several studies [24,25] have discussed how to increase the computational efficiency of the back-projection integral without loss of focusing quality.
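As an illustration of the point-by-point accumulation in (17), the following schematic sketch processes a single range bin. It assumes a user-supplied function implementing the precise AMF phase of (16) and ignores range-dependent details; it is a simplified view, not the processing code used for the experiments below.

```python
# Schematic sketch of the per-pixel frequency domain back-projection in (17).
import numpy as np

def fdbpa_line(s_t, X_axis, x_grid, phi_m):
    """s_t: range-compressed azimuth signal for one range bin (complex array over X_axis).
    x_grid: azimuth positions of the imaging grid.
    phi_m(K_x, x_p): vectorized precise AMF phase of Eq. (16) for grid point x_p.
    Returns the focused azimuth line, one sample per grid point."""
    N = len(X_axis)
    dX = X_axis[1] - X_axis[0]
    S_f = np.fft.fftshift(np.fft.fft(s_t))                     # azimuth wavenumber spectrum, Eq. (4)
    K_x = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dX)) # wavenumber axis (rad/m)
    image = np.empty(len(x_grid), dtype=complex)
    for i, x_p in enumerate(x_grid):                           # point-by-point accumulation
        image[i] = np.sum(S_f * np.exp(1j * phi_m(K_x, x_p)))
    return image
```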
B. Acceleration to FDBPA Process
According to our previous work in [18], the back-projection integral can be accelerated by sub-aperture coarse imaging and coherent spectrum combination, and we find that this sub-aperture processing strategy is also applicable here. In this subsection, we investigate a further acceleration of the precise FDBPA process, named FDFBPA. Below, we introduce the theory of the acceleration method.
After RCMC, two-step MOCO and range compression of the raw data, the signal expression is as shown in Equation (3); after an azimuth Fourier transform, the signal is transformed to the azimuth wavenumber domain as shown in Equation (4). The procedure above is the same as in FDBPA. The difference is that the signal is then uniformly partitioned in the azimuth wavenumber domain based on the sub-aperture strategy. In each sub-aperture procedure, a uniform coarse resolution imaging grid is constructed with coordinate (x_sub, r), where −x_a/2 ≤ x_sub ≤ x_a/2 and x_a denotes the scene extent in azimuth. Assuming the total number of sub-apertures is U, for the uth sub-aperture the azimuth wavenumber spectrum center is K_x^u and the wavenumber spectrum length is ΔK_xa. For a target P at (x_p, r), the sub-aperture coherent accumulation operation is given by [18]:
S_u(x_{sub}, r) = \int_{K_x^u - \Delta K_{xa}/2}^{K_x^u + \Delta K_{xa}/2} S_f[X^*(x_p), x_p, r]\exp\{j\Phi_m[X^*(x_{sub}), x_{sub}, r]\}\,dK_x \approx \int_{K_x^u - \Delta K_{xa}/2}^{K_x^u + \Delta K_{xa}/2} \exp\{jK_x[X^*(x_{sub}) - X^*(x_p)]\}\,dK_x \approx \int_{K_x^u - \Delta K_{xa}/2}^{K_x^u + \Delta K_{xa}/2} \exp[jK_x(x_{sub} - x_p)]\,dK_x  (18)
where S_u(x_sub, r) denotes the coarse imaging result of the uth sub-aperture. It is clear from Equation (18) that the coarse resolution image needs to be calculated point by point with the sub-aperture back-projection integral, so sub-aperture processing reduces the computational burden only to some extent. Observing the AMF phase expression in Equation (16), Φ_m(K_x, x_p, r) is a function of the azimuth wavenumber K_x. In each sub-aperture integral, since the wavenumber spectrum length ΔK_xa is short, Φ_m(K_x, x_p, r) can be approximated as a linear function of K_x. The approximate expression of Equation (16) is given by:
\Phi_m(K_x, x_p, r) \approx a_u(x_p)(K_x - K_x^u) + \Phi_m(K_x^u, x_p, r)  (19)
where the notation u still denotes the uth sub-aperture and a_u denotes the slope of the AMF phase within the uth sub-aperture. The sub-aperture back-projection integration in Equation (18) is then transformed as:
S_u(x_{sub}, r) = \int_{K_x^u - \Delta K_{xa}/2}^{K_x^u + \Delta K_{xa}/2} S_f(K_x, x_p, r)\exp[ja_u(x_{sub}, r)K_x]\exp[j\Phi_m(K_x^u, x_{sub}, r)]\,dK_x  (20)
One can note that the AMF phase slope a_u varies slowly along the azimuth coordinate x_sub because of the presence of the azimuth-variant motion error. Therefore, a_u can be linearly approximated with respect to the variable x_sub, and Equation (20) can be written as follows:
S_u(x_{sub}, r) = S_u^l(x_{sub}, r)\exp[j\Phi_m(K_x^u, x_{sub}, r)]  (21)
where S_u^l(x_sub, r) is the sub-aperture back-projection integral given by:
S_u^l(x_{sub}, r) = \int_{K_x^u - \Delta K_{xa}/2}^{K_x^u + \Delta K_{xa}/2} S_f(K_x, x_p, r)\exp[jb_u(r)x_{sub}K_x]\exp[jb_{u0}(r)K_x]\,dK_x  (22)
where b_u is the linear coefficient of a_u with respect to x_sub, and b_u0 is a constant term. The integral in Equation (22) can be regarded as a scaled Fourier transform with scale factor b_u and initial phase b_u0, so the integral operation in Equation (22) can be substituted by the chirp-Z transform, which enhances the computational efficiency of the sub-aperture back-projection integral. The scale factor can be straightforwardly obtained by differentiating Φ_m(K_x, x_sub, r) twice:
b_u(r) = \frac{\partial}{\partial x_{sub}}\left[\frac{\partial \Phi_m(K_x, x_{sub}, r)}{\partial K_x}\right]  (23)
where ∂ denotes the partial differential operator. The chirp-Z transform can then be introduced in each sub-aperture integral to obtain a set of coarse resolution images. The residual azimuth-variant motion error for each point coordinate is completely compensated for in these sub-aperture back-projection integrals. Substituting Equation (22) into (21), a series of sub-aperture coarse resolution images are obtained.
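The sketch below illustrates how a scaled Fourier transform of the form in Equation (22) can be evaluated with a chirp-Z transform and checked against the direct summation. The spectrum, the scale factor b_u and the constant b_u0 are invented example values, and the availability of scipy.signal.czt (SciPy 1.8 or later) is assumed.

```python
# Exploratory sketch: evaluating a scaled Fourier transform of the Eq. (22) type
# with the chirp-Z transform, and verifying it against a direct matrix evaluation.
import numpy as np
from scipy.signal import czt

rng_gen = np.random.default_rng(0)
N, M = 256, 256                                  # wavenumber samples, image-grid samples
K0, dK = -30.0, 0.25                             # sub-aperture start and step in K_x (rad/m)
K = K0 + dK * np.arange(N)
S = rng_gen.standard_normal(N) + 1j * rng_gen.standard_normal(N)   # stand-in sub-aperture spectrum

b, b0 = 0.95, 1.3                                # example scale factor b_u and constant b_u0
xs, dx = -60.0, 0.5                              # coarse grid start and spacing (m)
x = xs + dx * np.arange(M)

# Direct evaluation of S_u^l(x_k) = sum_n S[n] * exp(j*b0*K[n]) * exp(j*b*x_k*K[n])
direct = (S * np.exp(1j * b0 * K)) @ np.exp(1j * b * np.outer(K, x))

# Same result via CZT: fold the start offsets into the input and into an output phase ramp
y = S * np.exp(1j * b0 * K) * np.exp(1j * b * xs * dK * np.arange(N))
inner = czt(y, m=M, w=np.exp(1j * b * dx * dK), a=1.0 + 0j)
fast = np.exp(1j * b * xs * K0) * np.exp(1j * b * dx * K0 * np.arange(M)) * inner

print("max |direct - czt| =", np.max(np.abs(direct - fast)))   # should be near machine precision
```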
The next question is how to combine these coarse resolution images into a full resolution image. An effective method is to fuse the sub-images in the azimuth wavenumber domain. The uth sub-image S_u(x_sub, r) is transformed back to the azimuth wavenumber domain by:
S_u^f(K_x, r) = \int_{-x_a/2}^{x_a/2} S_u(x_{sub}, r)\exp(-jK_x x_{sub})\,dx_{sub}  (24)
Substituting the expression in (18) into (24), the azimuth wavenumber spectrum of the sub-image is expressed as:
S_u^f(K_x, r) = \mathrm{rect}\!\left(\frac{K_x - K_x^u}{\Delta K_{xa}}\right)\exp(-jK_x x_p)  (25)
It is shown in Equation (25) that the center of the azimuth wavenumber spectrum of the uth sub-image is K_x^u. One can obtain the full spectrum by sequentially stitching the sub-aperture azimuth wavenumber spectra. With the full wavenumber spectrum, the fine resolution image is then obtained by an inverse Fourier transform of the full azimuth wavenumber spectrum:
S(X, r) = \int_{-\Delta K_x/2}^{\Delta K_x/2}\left[\sum_{u=1}^{U} S_u^f(K_x, r)\right]\exp(jK_x X)\,dK_x  (26)
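A schematic sketch of this wavenumber-domain fusion is given below; practical details such as spectrum windowing and grid alignment, which a real processor would need, are deliberately omitted here.

```python
# Schematic sketch of the fusion in (24)-(26): each coarse sub-image is transformed
# back to the azimuth wavenumber domain, the U sub-spectra are stitched side by side,
# and a single inverse FFT recovers the full-resolution azimuth profile.
import numpy as np

def stitch_subimages(sub_images):
    """sub_images: list of U complex arrays, each a coarse azimuth image of length N_a
    produced from consecutive wavenumber blocks (lowest block first).
    Returns the fused full-resolution azimuth image of length U * N_a."""
    sub_spectra = [np.fft.fftshift(np.fft.fft(img)) for img in sub_images]   # Eq. (24)
    full_spectrum = np.concatenate(sub_spectra)                              # Eq. (25) stitching
    return np.fft.ifft(np.fft.ifftshift(full_spectrum))                      # Eq. (26)
```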
For clarity, the FDFBPA procedure is represented in Figure 3. It is clear that FDFBPA is designed to remove the residual azimuth-variant motion error directly through the azimuth wavenumber sub-aperture back-projection integral strategy, and the integral operations in the coarse resolution imaging process are substituted by a series of CZT operations, yielding both enhanced robustness and efficiency.

4. Algorithm Description and Analysis

A. Algorithm Flow Description
According to the theoretical analysis in Section 3, we develop a complete SAR imaging flowchart with FDFBPA, given in Figure 4. The whole procedure is divided into two main stages: the regular processing stage and the FDFBPA imaging stage. Some key steps are described as follows (a schematic skeleton of the whole flow is sketched after the list):
(a)
Range compression. This step is achieved by range-matched filtering. After range compression, one-dimensional imaging is completed.
(b)
Coarse MOCO. This step is achieved by two-step MOCO. MOCO I is a bulk compensation which compensates the range- and azimuth-invariant motion errors, and MOCO II compensates the residual range-variant motion error. The residual azimuth-variant motion errors may still be significant enough to induce distinct azimuth blurring. According to the principle of two-step MOCO, MOCO I is processed before RCMC and MOCO II after RCMC. Two-step MOCO can also be replaced by some improved MOCO strategies to obtain a better performance in range cell migration correction [26,27].
(c)
RCMC. This step is the core of regular processing stage, which is used to correct the range curve in the data. The most commonly applied RCMC schemes include range-Doppler algorithm (RDA), chirp scaling algorithm (CSA) and Omega-k algorithm.
(d)
Azimuth blocking in the wavenumber domain. After range compression, RCMC and two-step MOCO, we obtain the azimuth wavenumber domain data by applying an azimuth fast Fourier transform (FFT). The proposed azimuth wavenumber sub-aperture processing strategy can then be introduced. The data are partitioned uniformly in the azimuth wavenumber domain, so that each azimuth sub-block can be processed in the next procedure.
(e)
Precise azimuth matched filtering function calculation. In this step, we first build a coarse imaging grid and calculate a series of precise wavenumber spectra relative to each point in the coarse imaging grid by (13), where the coefficients a_0–a_4 are obtained by fourth-order polynomial fitting. The precise AMF function is then the conjugate of the calculated precise wavenumber spectrum, with the precise AMF phase expression as shown in Equation (16).
(f)
Scaling factor calculation and azimuth CZT. This step is the core of the sub-aperture processing, by which the coarse resolution imaging is achieved. Based on the AMF phase expression, we calculate a scaling factor by Equation (23), then CZT is performed to get a coarse resolution image in each azimuth sub-block.
(g)
Spectrum stitching. This step aims at fusing the coarse resolution images into full resolution, which is achieved by azimuth wavenumber spectrum stitching. Specifically, the coarse resolution images are transformed into the wavenumber domain by the azimuth FFT in Equation (24), then a full-resolution and well-focused image is obtained by Equation (26).
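The skeleton below only makes the ordering of steps (a)–(g) explicit; every helper function in it is a named placeholder rather than an implementation from this paper.

```python
# High-level skeleton of the processing flow in Figure 4 (steps a-g).
import numpy as np

def not_implemented(name):
    def stub(*args, **kwargs):
        raise NotImplementedError(f"{name}: stage not implemented in this sketch")
    return stub

range_compress     = not_implemented("range_compress")       # (a) range matched filtering
bulk_moco          = not_implemented("bulk_moco")            # (b) MOCO I, invariant errors
rcmc               = not_implemented("rcmc")                 # (c) RDA/CSA/Omega-k RCMC
range_variant_moco = not_implemented("range_variant_moco")   # (b) MOCO II, range-variant errors
precise_amf_phase  = not_implemented("precise_amf_phase")    # (e) AMF phase from Eqs. (13)/(16)
subaperture_czt    = not_implemented("subaperture_czt")      # (f) scaled transform via CZT
stitch_spectra     = not_implemented("stitch_spectra")       # (g) fusion, Eqs. (24)-(26)

def fdfbpa_pipeline(raw_data, ins_track, n_subapertures):
    data = range_compress(raw_data)
    data = bulk_moco(data, ins_track)
    data = rcmc(data)
    data = range_variant_moco(data, ins_track)
    spectrum = np.fft.fft(data, axis=-1)                          # (d) azimuth FFT
    blocks = np.array_split(spectrum, n_subapertures, axis=-1)    # (d) wavenumber blocking
    coarse = [subaperture_czt(b, precise_amf_phase(u, ins_track)) # (e)+(f) per block
              for u, b in enumerate(blocks)]
    return stitch_spectra(coarse)                                 # (g) full-resolution image
```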
B. Computation Analysis
In this section, we consider the calculation burden of the FDFBPA imaging stage, especially the number of FFT and inverse FFT (IFFT) operations required. As shown in Figure 4, the total azimuth point number of the data is N. We define the coarse resolution imaging grid point number as N_a, so the compensation step is N/N_a. For convenience of analysis, the azimuth sub-aperture length is also defined as N_a, so the number of azimuth sub-apertures is N/N_a. According to the FDFBPA imaging stage of Figure 4, three kinds of operations need to be counted: CZT, FFT and IFFT. There are N/N_a N_a-point CZTs, N/N_a N_a-point FFTs and one N-point IFFT, where one N_a-point CZT operation includes two N_a-point FFTs and one N_a-point IFFT. It is convenient to account for the computational burden by counting floating-point operations (FLOPs). One N_a-point FFT/IFFT operation contains 5N_a log_2(N_a) FLOPs, so the total FLOP count is given by:
C = \frac{N}{N_a}(2 + 1)\cdot 5N_a\log_2(N_a) + \frac{N}{N_a}\cdot 5N_a\log_2(N_a) + 5N\log_2(N) = 20N\log_2(N_a) + 5N\log_2(N)  (27)
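As a quick numeric check of Equation (27), the following snippet evaluates the FLOP count for example sizes; the numbers are arbitrary and serve only to indicate the order of magnitude.

```python
# Worked evaluation of the FLOP estimate in Eq. (27).
import math

def fdfbpa_flops(N: int, N_a: int) -> float:
    """20*N*log2(N_a) + 5*N*log2(N): N/N_a CZTs and FFTs plus one N-point IFFT."""
    return 20 * N * math.log2(N_a) + 5 * N * math.log2(N)

print(f"{fdfbpa_flops(N=16384, N_a=16):.3e} FLOPs")   # about 2.5e6 FLOPs for one azimuth line
```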
C. Constraint Condition Analysis
According to the computation analysis above, it is clear that the calculation burden is reduced as the azimuth sub-aperture length N_a increases, but N_a cannot be increased without limit in pursuit of high computational efficiency. As illustrated in Section 3, the core of FDFBPA is based on a linear approximation of the AMF phase Φ_m within each sub-aperture, so the azimuth sub-aperture length N_a is constrained by the validity of this approximation. In most cases, the phase error is required to be less than π/16 to ensure algorithm stability, so the restriction on N_a is given by:
\max\left[\left|\Phi_m\!\left(K_x^u + \frac{N_a}{2N}K_a\right) - \Phi_m(K_x^u) - a_u\frac{N_a}{2N}K_a\right|,\ \left|\Phi_m\!\left(K_x^u - \frac{N_a}{2N}K_a\right) - \Phi_m(K_x^u) + a_u\frac{N_a}{2N}K_a\right|\right] \leq \frac{\pi}{16}  (28)
where K_x^u = \frac{nN_a}{N}K_a and n is an integer with n \in \left\{-\frac{N}{2N_a}+1, \ldots, \frac{N}{2N_a}-1\right\}.
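The constraint (28) can be evaluated numerically for a candidate N_a once the AMF phase is known. The sketch below does so with an invented quadratic phase, used purely to exercise the check; it is not the phase of any dataset in this paper.

```python
# Sketch of the sub-aperture length check in Eq. (28): compare the AMF phase against
# its linear approximation at the edges of each wavenumber block and verify that the
# worst deviation stays below pi/16.
import numpy as np

def max_linearization_error(phi_m, K_centers, half_width):
    """Largest |phi_m(edge) - phi_m(center) -/+ slope*half_width| over all blocks."""
    worst = 0.0
    for Kc in K_centers:
        slope = (phi_m(Kc + 1e-4) - phi_m(Kc - 1e-4)) / 2e-4      # a_u at the block center
        for s in (+1.0, -1.0):
            err = abs(phi_m(Kc + s * half_width) - phi_m(Kc) - slope * s * half_width)
            worst = max(worst, err)
    return worst

K_a, N, N_a = 120.0, 8192, 64                    # spectrum extent (rad/m), data and block sizes
phi_m = lambda Kx: 40.0 * Kx**2 / K_a**2         # hypothetical AMF phase with mild curvature
centers = (np.arange(-N // (2 * N_a) + 1, N // (2 * N_a)) * N_a / N) * K_a
err = max_linearization_error(phi_m, centers, half_width=N_a * K_a / (2 * N))
print(f"worst-case phase error = {err:.4f} rad (limit {np.pi/16:.4f} rad)")
```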

5. Simulated and Real Data Experiments

A. Experiments with Simulated Data
In order to validate the theory and analysis illustrated in the previous sections, we describe an experiment performed with simulated Ka-band SAR data in this subsection. The main SAR system parameters are shown in Table 1. In this experiment, two points are simulated at different squint angles with trajectory deviations, which cause azimuth-variant motion errors significant enough to blur the azimuth impulse response curve. The trajectory deviations are extracted from real-measured INS data shown in Figure 5. The data contain 8192 pulses in the azimuth direction. For the purpose of comparing the compensation performance, the FDFBPA, PTA and two-step MOCO approaches are implemented to focus the points. Because the azimuth variance of the motion error is serious, the sub-aperture length is set to 16 points with 25% overlap. The azimuth impulse response curve comparisons of FDFBPA, PTA and two-step MOCO are shown in Figure 6, where Figure 6a is at 0 degrees of squint angle and Figure 6b is at 40 degrees. It is shown that the azimuth-variant motion error causes serious defocusing in the azimuth direction, so the two-step MOCO result is blurred in the figures. PTA also fails to refocus the points because of the shortcomings of the post-filtering strategy: data blocking in the image domain seriously disrupts the focusing performance. In comparison, FDFBPA is able to refocus the points well. In order to quantitatively evaluate the focusing improvement of the proposed algorithm compared with the other algorithms, three quantitative metrics are introduced to measure the point impulse responses: the peak sidelobe ratio (PSLR), the integrated sidelobe ratio (ISLR) and the impulse response width (IRW). The statistical results are shown in Table 2. The PSLR and ISLR of PTA are both large, because the necessary extension of the block length and overlapping part is absent. Furthermore, the inevitable split of energy in the image domain causes the emergence of multiple peaks. In contrast, FDFBPA applies the blocking strategy in the frequency domain, so it performs in a robust manner in the face of strong motion errors.
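The three metrics quoted above can be computed from a measured 1-D impulse response in several ways; the following sketch uses one common convention (mainlobe bounded by the first nulls, IRW measured at −3 dB) and is not the measurement code behind Tables 2–4.

```python
# Rough sketch of PSLR, ISLR and IRW for a 1-D azimuth impulse response.
import numpy as np

def pslr_islr_irw(resp, sample_spacing):
    """resp: complex or real impulse response; sample_spacing in metres."""
    p = np.abs(resp) ** 2
    k = int(np.argmax(p))
    left = k
    while left > 0 and p[left - 1] < p[left]:
        left -= 1                                    # first null left of the peak
    right = k
    while right < len(p) - 1 and p[right + 1] < p[right]:
        right += 1                                   # first null right of the peak
    main = p[left:right + 1]
    side = np.concatenate([p[:left], p[right + 1:]])
    pslr = 10 * np.log10(side.max() / p[k])          # peak sidelobe ratio (dB)
    islr = 10 * np.log10(side.sum() / main.sum())    # integrated sidelobe ratio (dB)
    irw = sample_spacing * np.count_nonzero(p >= 0.5 * p[k])   # crude -3 dB width (m)
    return pslr, islr, irw

# Example with an ideal sinc response sampled at 0.05 m spacing:
x = np.arange(-128, 128) * 0.05
print(pslr_islr_irw(np.sinc(x / 0.15), 0.05))        # PSLR near -13.3 dB for an unweighted sinc
```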
B. Experiments with Real-Measured Data
In this subsection, two sets of comparison experiments are provided based on the processing of measured data recorded by an experimental airborne Ka-band SAR system. The first experiment is for the broadside operating mode. Some main SAR system parameters are shown in Table 1, where the squint angle is less than two degrees with a resolution of 0.15 m in both range and azimuth. The instantaneous position and motion parameters of the platform are measured by a high-accuracy INS equipped on the platform; the azimuth-variant motion error during the data collection is severe enough to cause defocusing of the image. A set of imaging results processed by FDFBPA, 25% overlapped PTA and two-step MOCO is shown in Figure 7. In these images, two typical areas with obvious artificial structures are marked by yellow rectangles, which are amplified in Figure 8a,b for comparison. The sub-block length for processing is 16 points. It is clear that ghost shadows appear in the PTA results because the sub-aperture length is too short to meet the refocusing conditions in Equations (14) and (15). Furthermore, the defocusing of the two-step MOCO result is also significant because the residual azimuth-variant motion error remains.
In order to check the azimuth point impulse response improvement of the FDFBPA algorithm, two isolated point-like targets named point A and point B are extracted from Figure 7 (marked by yellow circles) for azimuth point impulse response function comparison, shown in Figure 9a,b respectively. The sidelobes of PTA and two-step MOCO are obviously higher than those of FDFBPA; the high sidelobes cause image ghosting and raise the noise floor of the SAR image. The quantitative analysis results of the azimuth point spreading response functions of Figure 9 are listed in Table 3. It is evident that serious distortion and smearing occur in both the PTA and two-step MOCO results, while only the FDFBPA result provides well-focused performance. From the comparison of the local scene images and point target impulse responses, one can conclude that FDFBPA achieves significant improvements in focusing data with strong motion errors. With the point-to-point correction of the azimuth-variant motion error phase terms, FDFBPA performs in a more robust manner than PTA with its image domain post-filtering strategy.
In the second experimental set, the radar works at a high squint angle of about 40 degrees with a resolution of 0.15 m in both range and azimuth; some main system parameters are shown in Table 1. The range and azimuth coupling is first removed by range walk correction (RWC) processing, so the processed data can be regarded as obtained in broadside mode. The precise azimuth wavenumber spectrum needs to be recalculated considering the influence of RWC, which has been worked out previously in [28] and will not be discussed here. According to the precise azimuth wavenumber spectrum, a set of imaging results processed by FDFBPA, 25% overlapped PTA and two-step MOCO is shown in Figure 10, with a sub-block length of 16 points. Two typical areas marked by yellow rectangles are amplified for comparison and shown in Figure 11a,b. Furthermore, two isolated point-like targets named point A and point B are extracted from Figure 10 (marked by yellow circles) for azimuth point impulse response function comparison, shown in Figure 12a,b respectively. Statistical indicators including PSLR, ISLR and IRW are used to numerically measure the impulse response function performance (Table 4). Two-step MOCO can effectively remove the range-variant motion errors, but cannot correct the azimuth-variant phase terms, which are more significant in highly squinted imaging modes; therefore, the two-step result is severely blurred by the error phases. PTA partly compensates for the defocusing by image domain post-filtering; however, the sub-block length and overlapped part are not long enough to meet the refocusing condition. As a result, the PTA results suffer from ghost shadows that decrease the imaging performance. FDFBPA performs the SAR imaging process with a point-to-point strategy in the sub-aperture focusing stage, which provides significant improvements in precision and robustness when correcting strong motion errors.
For the purpose of testing the calculation improvement of FDFBPA compared with PTA, we record the calculation time of the two algorithms under different sliding steps. The sliding step here means the interval between the centers of adjacent sub-blocks for PTA, and also the interval of the coarse resolution grid for FDFBPA. According to the analysis in Section 3, with a lengthening sliding step the block number of FDFBPA increases while the block number of PTA decreases correspondingly. In order to bring the experiment closer to the actual situation, the sub-block length of PTA is 64 points with the overlapped part depending on the sliding step, and the range block length is 16 points without overlap. The computer platform is installed with a Windows 7 64-bit operating system, an [email protected] CPU, 32 GB memory and Matlab R2015a. A block of 3072 × 16384 (range × azimuth) points of SAR data is used for the test. The computation time comparison results are shown in Table 5. It is shown that, with the help of the CZT operation and without sub-aperture overlapping, FDFBPA achieves a much higher operational efficiency than PTA under short sliding steps. It is worth explaining that the computation time of PTA decreases with increasing sliding step because the number of sub-blocks is reduced. In contrast, the computation time of FDFBPA grows slowly with increasing sliding step, because CZT is used for fast sub-aperture imaging and the computation time mainly depends on the sub-aperture block number. However, we cannot infinitely shorten the interval of the coarse resolution grid for FDFBPA to pursue a higher computational speed, because the sub-aperture length would be correspondingly extended so that the phase linear approximation condition in Equation (28) would be violated.

6. Conclusions

Focusing on precise azimuth-variant MOCO for airborne SAR, a frequency domain fast back-projection algorithm named FDFBPA is proposed in this paper to deal with strong azimuth-variant motion errors. Based on the analysis of image domain post-filtering strategies such as PTA, it is known that PTA has to strike a balance between reduced imaging quality and increased calculations. FDFBPA is designed with both high precision and high efficiency for imaging with strong motion errors. FDFBPA disposes of the azimuth-variant motion errors by precise azimuth wavenumber spectrum calculation. Moreover, with the utilization of the wavenumber domain sub-aperture processing strategy and the CZT operation, the efficiency of the algorithm is further improved. Simulated and real-measured data experiments show that the proposed FDFBPA is more robust than PTA for imaging with strong motion errors, and the efficiency of FDFBPA in processing real measured data is also verified.

Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under Grant 61303031, Grant 61771372 and Grant 61771367 and in part by the National Outstanding Youth Science Fund Project under Grant 61525105.

Author Contributions

M.Z. and L.Z. conceived and designed the experiments; G.W. performed the experiments; G.W. and L.Z. analyzed the data; L.Z. contributed experiment data and platform; M.Z. wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Carrara, W.; Goodman, R.; Majewski, R. Spotlight Synthetic Aperture Radar: Signal Processing Algorithm; Artech House: Boston, MA, USA, 1995.
2. Soumekh, M. Synthetic Aperture Radar Signal Processing with MATLAB Algorithms; Wiley: New York, NY, USA, 1999.
3. Cumming, I.; Wong, F. Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation; Artech House: Norwood, MA, USA, 2005.
4. González-Partida, J.; Almorox-González, P.; Burgos-García, M.; Dorta-Naranjo, B. SAR system for UAV operation with motion error compensation beyond the resolution cell. Sensors 2008, 8, 3384–3405.
5. Reigber, A.; Alivizatos, E.; Potsis, A.; Moreira, A. Extended wavenumber-domain synthetic aperture radar focusing with integrated motion compensation. IET Radar Sonar Navig. 2006, 153, 301–310.
6. Moreira, A.; Yonghong, H. Airborne SAR processing of highly squinted data using a chirp scaling approach with integrated motion compensation. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1029–1040.
7. Fornaro, G.; Franceschetti, G.; Perna, S. On center-beam approximation in SAR motion compensation. IEEE Geosci. Remote Sens. Lett. 2006, 3, 276–280.
8. Moreira, A.; Mittermayer, J.; Scheiber, R. Extended chirp scaling algorithm for air- and spaceborne SAR data processing in stripmap and scanSAR imaging modes. IEEE Trans. Geosci. Remote Sens. 1996, 34, 1123–1136.
9. Potsis, A.; Reigber, A.; Mittermayer, J.; Moreira, A.; Uzunoglou, N. Sub-aperture algorithm for motion compensation improvement in wide-beam SAR data processing. Electron. Lett. 2001, 37, 1405–1407.
10. Prats, P.; Reigber, A.; Mallorqui, J. Topography-dependent motion compensation for repeat-pass interferometric SAR systems. IEEE Geosci. Remote Sens. Lett. 2005, 2, 206–210.
11. Prats, P.; Karlus, A.; Macedo, C.; Reigber, A.; Scheiber, R.; Mallorqui, J. Comparison of topography- and aperture-dependent motion compensation algorithms for airborne SAR. IEEE Geosci. Remote Sens. Lett. 2007, 4, 349–353.
12. Zhang, L.; Wang, G.; Qiao, Z.; Wang, H. Azimuth motion compensation with improved subaperture algorithm for airborne SAR imaging. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 184–193.
13. Macedo, K.; Scheiber, R. Precise topography- and aperture-dependent motion compensation for airborne SAR. IEEE Geosci. Remote Sens. Lett. 2005, 2, 172–176.
14. Perna, S.; Zamparelli, V.; Pauciullo, A.; Fornaro, G. Azimuth-to-frequency mapping in airborne SAR data corrupted by uncompensated motion errors. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1493–1497.
15. Wang, G.; Zhang, L.; Li, J.; Hu, Q. Precise aperture-dependent motion compensation for high-resolution synthetic aperture radar imaging. IET Radar Sonar Navig. 2017, 11, 204–211.
16. Nitti, D.; Bovenga, F.; Chiaradia, M.; Greco, M.; Pinelli, G. Feasibility of using synthetic aperture radar to aid UAV navigation. Sensors 2015, 15, 18334–18359.
17. Aguasca, A.; Acevo-Herrera, R.; Broquetas, A.; Mallorqui, J.; Fabregas, X. ARBRES: Light-weight CW/FM SAR sensors for small UAVs. Sensors 2013, 13, 3204–3216.
18. Zhang, L.; Li, H.; Qiao, Z.; Xu, Z. A fast BP algorithm with wavenumber spectrum fusion for high-resolution spotlight SAR imaging. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1460–1464.
19. Zhang, L.; Li, H.; Qiao, Z.; Xing, M.; Bao, Z. Integrating autofocus techniques with fast factorized back-projection for high-resolution spotlight SAR imaging. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1394–1397.
20. Lanari, R. A new method for the compensation of the SAR range cell migration based on the chirp Z-transform. IEEE Trans. Geosci. Remote Sens. 1995, 33, 1296–1299.
21. Tang, Y.; Xing, M.; Bao, Z. The polar format imaging algorithm based on double Chirp-Z transforms. IEEE Geosci. Remote Sens. Lett. 2008, 5, 610–614.
22. Zhang, L.; Xing, M.; Qiao, Z. Wavenumber-domain autofocusing for highly squinted UAV SAR imagery. IEEE Sens. J. 2012, 12, 1574–1588.
23. Neo, Y.; Wong, F.; Cumming, I. A two-dimensional spectrum for bistatic SAR processing using series reversion. IEEE Geosci. Remote Sens. Lett. 2007, 4, 93–96.
24. Yegulalp, A. Fast backprojection algorithm for synthetic aperture radar. In Proceedings of the Radar Conference, Waltham, MA, USA, 22 April 1999; pp. 60–65.
25. Ulander, L.; Hellsten, H.; Stenstrom, G. Synthetic aperture radar processing using fast factorized back-projection. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 760–776.
26. Meng, D.; Hu, D.; Ding, C. Precise focusing of airborne SAR data with wide aperture large trajectory deviations: A chirp modulation back-projection approach. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2510–2519.
27. Yang, M.; Zhu, D.; Song, W. Comparison of two-step and one-step motion compensation algorithms for airborne synthetic aperture radar. Electron. Lett. 2015, 51, 1108–1110.
28. Zhang, L.; Wang, G.; Qiao, Z.; Wang, H.; Sun, L. Two-stage focusing algorithm for highly squinted synthetic aperture radar imaging. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5547–5562.
Figure 1. Real SAR imaging geometry.
Figure 2. Illustration of sliding parameters for the post-filtering strategy: (a) Window length restriction for a single point; (b) A failure case of sliding parameters for a point array; (c) A successful case of sliding parameters for a point array.
Figure 3. Schematic diagram of the frequency domain fast back-projection algorithm (FDFBPA).
Figure 4. Flowchart of the measured data processing based on FDFBPA.
Figure 5. Trajectory deviations for simulation.
Figure 6. Azimuth impulse response curve comparison of FDFBPA, 25% overlapped PTA and Two-step MOCO: (a) 0 degrees of squint angle; (b) 40 degrees of squint angle.
Figure 7. Imaging results processed with different algorithms: (a) FDFBPA; (b) PTA; (c) Two-step MOCO.
Figure 8. Comparison of three algorithms in Scene 1 and Scene 2: (a) Comparison of Scene 1 (left to right: FDFBPA, PTA and Two-step MOCO); (b) Comparison of Scene 2 (left to right: FDFBPA, PTA and Two-step MOCO).
Figure 9. Azimuth pulse response curve comparison of FDFBPA, PTA and Two-step MOCO: (a) Scatter Point A; (b) Scatter Point B.
Figure 10. Imaging results processed with different algorithms: (a) FDFBPA; (b) PTA; (c) Two-step MOCO.
Figure 11. Comparison of three algorithms in Scene 1 and Scene 2: (a) Comparison of Scene 1 (left to right: FDFBPA, PTA and Two-step MOCO); (b) Comparison of Scene 2 (left to right: FDFBPA, PTA and Two-step MOCO).
Figure 12. Azimuth pulse response curve comparison of FDFBPA, PTA and Two-step MOCO: (a) Scatter Point A; (b) Scatter Point B.
Table 1. SAR System Parameters.

Parameter | Value
Carrier frequency | 35 GHz
Bandwidth | 900 MHz
Center slant range | 5000 m
Coherent processing interval | 1.5 s
Ideal velocity | 70 m/s
Pulse repetition frequency | 5000 Hz
Table 2. Comparison of Quantification Statistics Results.

Approach | 0 Degrees of Squint Angle: PSLR (dB) / ISLR (dB) / IRW (m) | 40 Degrees of Squint Angle: PSLR (dB) / ISLR (dB) / IRW (m)
FDFBPA | −12.4113 / −10.5214 / 0.1313 | −12.4936 / −10.5719 / 0.1312
PTA | −4.4790 / −0.8512 / 0.1451 | −2.3360 / −2.7938 / 0.1489
Two-step | −0.4803 / −0.9381 / 0.2199 | −1.8899 / −3.3940 / 0.2000
Table 3. Comparison of Quantification Statistics Results.

Approach | Point A: PSLR (dB) / ISLR (dB) / IRW (m) | Point B: PSLR (dB) / ISLR (dB) / IRW (m)
FDFBPA | −10.9869 / −8.5942 / 0.1968 | −9.3550 / −6.1651 / 0.2531
PTA | −8.5014 / −5.6829 / 0.1968 | −5.1406 / −2.9050 / 0.2531
Two-step | −4.8080 / −4.3006 / 0.3937 | −4.1109 / −2.8496 / 0.2812
Table 4. Comparison of Quantification Statistics Results.

Approach | Point A: PSLR (dB) / ISLR (dB) / IRW (m) | Point B: PSLR (dB) / ISLR (dB) / IRW (m)
FDFBPA | −9.9872 / −7.4124 / 0.2152 | −7.8962 / −7.1813 / 0.2583
PTA | −0.5440 / 3.9724 / 0.2439 | −4.0268 / −0.9475 / 0.2583
Two-step | −3.0850 / −2.3533 / 0.2870 | −2.7590 / −3.9529 / 0.3214
Table 5. Computation time comparison of FDFBPA and PTA under different sliding steps.

Approach | Sliding Step 8 Points | Sliding Step 16 Points | Sliding Step 32 Points
FDFBPA | 159.58 s | 192.75 s | 243.67 s
PTA | 1427.24 s | 725.41 s | 423.05 s
