Article

Modulus Stretch-Based Circular SAR Imaging with Contour Thinning

1
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2
The Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 610054, China
3
School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
4
School of Science, Southwest University of Science and Technology, Mianyang 621010, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(13), 2728; https://doi.org/10.3390/app9132728
Submission received: 12 June 2019 / Revised: 30 June 2019 / Accepted: 2 July 2019 / Published: 5 July 2019
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

This paper presents a modulus stretch-based circular Synthetic Aperture Radar (SAR) imaging method. The method improves the traditional backprojection algorithm for circular SAR imaging by introducing a modulus stretch transformation function into the imaging process. By applying the modulus stretch transformation to the intermediate results, the target contour in the final image becomes thinner and clearer. A thinner and clearer contour increases the recognizability of the target and provides a basis for subsequent target recognition. The proposed method is demonstrated on line-target imaging simulations and on the Gotcha dataset.

1. Introduction

Synthetic Aperture Radar (SAR) is a type of radar capable of performing high-resolution microwave imaging of target areas under all-weather, day-and-night conditions. Research on SAR covers imaging, target detection, time-frequency analysis and other aspects, among which imaging is one of the most studied fields [1,2,3,4]. The development of SAR has always aimed at improving image resolution. Wide-angle SAR (WSAR) refers to SAR that spans a wide range of azimuth angles during data acquisition to obtain higher azimuth resolution and more azimuth scattering information. Circular SAR (CSAR) is a special case of wide-angle SAR: the radar platform moves 360° in a circle centered on the observation scene, and the beam always illuminates the same ground scene to achieve all-round observation.
The imaging mode of circular SAR was first proposed by Soumekh in 1996. Soumekh also presented the time-domain model of the CSAR echo signal and a CSAR imaging algorithm based on wavefront reconstruction [5]. Subsequently, many researchers have studied the imaging theory of circular SAR. The Air Force Research Laboratory (AFRL) and other organizations have conducted several experimental tests of WSAR and CSAR and released several test datasets and challenge problems [6,7,8,9,10,11,12,13]. Zhang et al. presented an approximation method using linear spotlight SAR to simulate CSAR [14,15]. Lin, Hong and Teng et al. have carried out a series of studies on the three-dimensional imaging, imaging accuracy and imaging algorithms of CSAR [16,17,18,19,20]. Ponce et al. used the Experimental airborne SAR (E-SAR) system to conduct L-band circular SAR experiments and adopted a fast factorized backprojection algorithm for imaging [21]. Liao et al. presented a modified Omega-K algorithm to focus accurately; the accuracy can be controlled by keeping enough terms in the two series expansions, so that a well-focused image is achieved with a proper range approximation [22,23]. Li et al. investigated an expedited backprojection algorithm for CSAR imaging; without time-consuming 2D interpolation, it decreases the loss of spectrum while significantly improving computational efficiency compared with direct backprojection [24]. Farhadi et al. proposed a distributed compressed sensing algorithm for circular SAR imaging, which improves resolution, reduces the side-lobe effect in the full-aperture 3D image, and meanwhile reduces computation [25]. In recent years, research on circular SAR imaging has mainly focused on deep learning, compressed sensing, 3D imaging and other aspects [26,27,28,29,30,31,32,33,34], and many excellent research results and algorithmic theories have emerged.
Starting from the traditional backprojection imaging algorithm, this paper re-examines the imaging algorithm in order to improve the recognizability of the target. The idea of combining imaging and target detection has been studied extensively in the field of infrared target detection [35,36,37,38]. This paper presents an imaging method that is more beneficial to target recognition. The traditional backprojection algorithm has high imaging accuracy and a simple implementation, and has been widely used in the radar imaging field. However, in a real scene, a target that has a clear contour under visible-light imaging may become blurred in SAR imaging. The target may have a continuous reflection surface, producing a continuous strong signal response in the reflected echo detected by the radar. In the imaging process, the originally distinct contours become connected and cannot be distinguished, which is not conducive to classification and recognition. This paper proposes a contour thinning imaging algorithm based on modulus stretch. Building on the principle of backprojection imaging, the modulus stretch transform function is introduced to suppress the low-modulus areas of the intermediate image. The processed final image has clearer and thinner contour edges than that obtained from the traditional backprojection algorithm, which is more conducive to target classification and recognition.

2. Analysis of Imaging Algorithms

2.1. Backprojection Algorithm

According to the principle of SAR detection, the radar transmits a linear frequency-modulated wave to the target and receives the echo through a matched filter. The radar echo data are a function of the receiving frequency f and the slow time τ, denoted by S(f, τ). Let r = (x, y, z) denote the target coordinates in the detection scene, and let ΔR(r, τ) denote the difference between the distance from the sensor to the origin and the distance from the sensor to the target at coordinate r at time τ. The output of the matched filter is the scattering response of the target at coordinate r in the scene, denoted by I(r), and is given by [15]
$$I(\mathbf{r}) = \iint S(f,\tau)\,\exp\!\left(+j\,\frac{4\pi f\,\Delta R(\mathbf{r},\tau)}{c}\right) df\, d\tau, \qquad (1)$$
where c denotes the constant speed of light. Since Equation (1) is computed point by point, which is too slow, it is rewritten to exploit the Inverse Fast Fourier Transform (IFFT), yielding the backprojection algorithm. Rewriting Equation (1) in discrete form gives [11]:
$$I(\mathbf{r}) = \frac{1}{N_p K}\sum_{n=1}^{N_p}\sum_{k=1}^{K} S(f_k,\tau_n)\,\exp\!\left(+j\,\frac{4\pi f_k\,\Delta R(\mathbf{r},\tau_n)}{c}\right), \qquad (2)$$
where Np denotes the number of sampling points of the sensor on the circular orbit, and K denotes the number of frequency samples of the echo at each circular orbit position. First consider the inner summation, which can be regarded as the IFFT of S(f_k, τ_n). After transformation, the inner summation can be rewritten as [11]:
$$s(m,\tau_n) = N_{fft}\,\mathrm{FFTshift}\{\mathrm{IFFT}[S(f_k,\tau_n)]\}\,\exp\!\left(+j\,\frac{4\pi f_1\,\Delta R(m,\tau_n)}{c}\right), \qquad (3)$$
where Nfft denotes the length of the IFFT. To put m = 1 at the center of the range profile, the function FFTshift(•) is applied to the output of the IFFT. Since m is not a coordinate value in the real scene, interpolation at the real coordinate values is necessary to approximate the IFFT sequence corresponding to the real coordinates. The interpolated expression is denoted by [11]
$$s_{int}(\mathbf{r},\tau_n) = \mathrm{Interp}[s(m,\tau_n)], \qquad (4)$$
where r represents the true coordinates in the scene. Interpolating all the coordinates in the scene, the overall imaging result of the scene at time τ n is obtained, S i n t ( τ n ) , given by
$$S_{int}(\tau_n) = \begin{bmatrix} s_{int}(\mathbf{r}_{1,1},\tau_n) & s_{int}(\mathbf{r}_{1,2},\tau_n) & \cdots & s_{int}(\mathbf{r}_{1,L},\tau_n) \\ s_{int}(\mathbf{r}_{2,1},\tau_n) & s_{int}(\mathbf{r}_{2,2},\tau_n) & \cdots & s_{int}(\mathbf{r}_{2,L},\tau_n) \\ \vdots & \vdots & \ddots & \vdots \\ s_{int}(\mathbf{r}_{L,1},\tau_n) & s_{int}(\mathbf{r}_{L,2},\tau_n) & \cdots & s_{int}(\mathbf{r}_{L,L},\tau_n) \end{bmatrix}, \qquad (5)$$
where L represents the side length of the final image in pixels. Finally, all the data at the slow times are superimposed to obtain the coherent backprojection imaging result, represented by Ia, as follows [11]:
$$I_a = \sum_{n=1}^{N_p} S_{int}(\tau_n) \qquad (6)$$
Since the echo data are complex, the final Ia is also a complex matrix, so the modulus value is used to represent the grayscale in the image. Let abs(•) represent the modulus of each element in the matrix; the grayscale matrix of the image is then denoted by
$$I_m = \mathrm{abs}(I_a) = \begin{bmatrix} |I_a(1,1)| & |I_a(1,2)| & \cdots & |I_a(1,L)| \\ |I_a(2,1)| & |I_a(2,2)| & \cdots & |I_a(2,L)| \\ \vdots & \vdots & \ddots & \vdots \\ |I_a(L,1)| & |I_a(L,2)| & \cdots & |I_a(L,L)| \end{bmatrix} = \begin{bmatrix} \left|\sum_{n=1}^{N_p} s_{int}(\mathbf{r}_{1,1},\tau_n)\right| & \cdots & \left|\sum_{n=1}^{N_p} s_{int}(\mathbf{r}_{1,L},\tau_n)\right| \\ \vdots & \ddots & \vdots \\ \left|\sum_{n=1}^{N_p} s_{int}(\mathbf{r}_{L,1},\tau_n)\right| & \cdots & \left|\sum_{n=1}^{N_p} s_{int}(\mathbf{r}_{L,L},\tau_n)\right| \end{bmatrix} \qquad (7)$$
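As an illustration, the IFFT-based backprojection of Equations (2)–(7) can be sketched in a few lines of NumPy. This is a minimal sketch under simplifying assumptions (uniform frequency sampling, a precomputed differential-range map `dR`, and our own variable names), not the authors' implementation:

```python
import numpy as np

def backprojection(S, freqs, dR, c=3e8, Nfft=256):
    """Minimal backprojection sketch (illustrative, not the authors' code).

    S     : (K, Np) complex echo samples S(f_k, tau_n)
    freqs : (K,) uniformly spaced sampled frequencies f_k
    dR    : (P, Np) differential range dR(r, tau_n) for P pixels
    Returns the complex image I_a of shape (P,).
    """
    K, Np = S.shape
    df = freqs[1] - freqs[0]
    f1 = freqs[0]
    # Range bins covered by an Nfft-point IFFT, centered by fftshift.
    m = np.arange(Nfft) - Nfft // 2
    r_bins = c * m / (2 * df * Nfft)
    Ia = np.zeros(dR.shape[0], dtype=complex)
    for n in range(Np):
        # Inner sum of Eq. (2) as an IFFT-based range profile, Eq. (3).
        profile = Nfft * np.fft.fftshift(np.fft.ifft(S[:, n], Nfft))
        # Interpolate the profile at each pixel's range, Eq. (4).
        s_int = np.interp(dR[:, n], r_bins, profile.real) \
              + 1j * np.interp(dR[:, n], r_bins, profile.imag)
        # Phase compensation and coherent accumulation, Eq. (6).
        Ia += s_int * np.exp(1j * 4 * np.pi * f1 * dR[:, n] / c)
    return Ia
```

The grayscale image of Equation (7) is then `np.abs(Ia)`.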

2.2. Contour Thinning Analysis

Consider the s(m, τ_n) obtained from Equation (3). When τ_n is fixed, it is essentially a one-dimensional function determined by the slant range difference ΔR(m, τ_n). At a given time τ_n, the same ΔR(m, τ_n) corresponds to the same s(m, τ_n), and hence to the same interpolated value s_int(r, τ_n). Therefore, the imaging result of the whole scene depends only on ΔR(m, τ_n). Imaging the modulus of S_int(τ_n) yields a series of bright and dark stripes perpendicular to the radar azimuth: locations at the same distance from the radar have the same brightness. The angle of the stripes changes with τ_n, and the scattering properties of the target produce different echo data as the angle changes, as shown in Figure 1.
The final imaging result of the scene is the superposition of this series of striped images. It can be intuitively concluded that the wider the sub-image stripes at each τ n time, the larger the area of the final imaging target after the superposition. If the imaging result is not for the purpose of restoring the true scattering properties of the target, but for the purpose of clearer and thinner contours of the target, then another idea can be considered. Consider introducing a transformation function ψ ( · ) into Equation (6). The purpose of the transformation function is to make the contour in the image clearer and thinner, but to avoid changing the overall shape of the original contour in the image. In order to satisfy this requirement, ψ ( · ) should be a monotone function. The transformed image is denoted as
$$I_a' = \sum_{n=1}^{N_p} \psi\big(S_{int}(\tau_n)\big) \qquad (8)$$
In order to evaluate the degree of contour thinning after the transformation, appropriate evaluation indexes should be defined. Since a clearer and thinner contour implies a smaller target area, the target area can serve as one evaluation factor. However, if both the area and the perimeter decrease, the result is a reduction of the overall region rather than a thinning of the contour width, which fails to achieve contour thinning. Therefore, the perimeter of the region should also be taken into account. Based on these two points, a simple and intuitive evaluation index D(I) of the contour thinning degree is defined: the perimeter, in pixels, of all target areas in a binary image divided by their area in pixels.
$$D(I) = \frac{\text{pixel perimeter of target}}{\text{pixel area of target}} \qquad (9)$$
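As a concrete illustration, the index of Equation (9) can be computed on a binary image as follows; counting the perimeter as the number of exposed pixel edges in the 4-neighbourhood is our assumption, since the paper only defines the ratio itself:

```python
import numpy as np

def thinning_degree(binary_img):
    """Contour thinning index D(I): target perimeter / target area.

    binary_img : 2D boolean array, True for target pixels.
    Perimeter is counted as the number of pixel edges where a target
    pixel borders a background pixel (4-neighbourhood) -- an assumed
    counting rule for this sketch.
    """
    img = np.asarray(binary_img, dtype=bool)
    area = img.sum()
    if area == 0:
        return 0.0
    padded = np.pad(img, 1)  # zero border so edge pixels count as exposed
    perimeter = sum(
        (padded[1:-1, 1:-1] & ~np.roll(padded, shift, axis)[1:-1, 1:-1]).sum()
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]
    )
    return perimeter / area
```

For example, a filled 4×4 square gives D = 16/16 = 1.0, while hollowing it out to a one-pixel-wide frame raises D to 24/12 = 2.0, matching the intuition that a thinner contour scores higher.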
After introducing the evaluation index of contour thinning degree, the optimization problem can be described as follows:
$$\underset{\psi(\cdot)}{\arg\max}\; D(I_m'), \quad \text{s.t.}\ \psi(\cdot)\ \text{is a monotone function} \qquad (10)$$
Equation (10) is a compound-function optimization problem that cannot be solved directly. It may not have an optimal solution, but a suboptimal solution can be constructed under given conditions. The problem description is therefore modified to
$$\text{Find an appropriate}\ \psi(\cdot), \quad \text{s.t.}\ \begin{cases} D(I_m') > D(I_m) \\ \psi(\cdot)\ \text{is a monotone function} \end{cases} \qquad (11)$$
Intuitively, there are two ways to increase the contour thinning degree: increase the perimeter or reduce the area. The perimeter can be increased by adding new area, or by eroding the existing area to make the edges more complicated; however, neither obviously makes the contour thinner. Therefore, thinning can only be achieved by reducing the area. If the stripes of the sub-aperture images can be narrowed, the target area of the final superimposed image can be reduced.
Reducing the area can be achieved by stretching the modulus of the sub-aperture image, that is, enhancing the region with high modulus and greatly reducing the region with low modulus. This appears on the image as a contrast stretch transformation of the image, leaving only the highlights and greatly reducing the lower brightness. The usual contrast stretch transformation includes gamma transformation, piecewise linear transformation, histogram equalization and so on.
Through the above analysis, ψ ( · ) is constructed as the modulus stretch function, which enhances the high modulus and weakens the low modulus.

2.3. Contour Thinning Algorithm

The process of contour thinning algorithm is as follows:
  • The Nfft-point IFFT is performed on the echo data S(f_k, τ_n) received at each slow time τ_n to obtain an Nfft-point inverse-transform sequence, which is then circularly shifted with the FFTshift(•) function [11].
    $$s_0(m,\tau_n) = \mathrm{FFTshift}\{\mathrm{IFFT}[S(f_k,\tau_n)]\} \qquad (12)$$
  • The real coordinates corresponding to each pixel of the image are calculated, and from them the ΔR(r, τ_n) of each pixel is obtained [11].
    $$\begin{cases} \Delta R(\mathbf{r},\tau_n) = d_{a0}(\mathbf{r},\tau_n) - d_a(\mathbf{r},\tau_n) \\ d_{a0}(\mathbf{r},\tau_n) = \sqrt{(x_a(\tau_n)-x)^2 + (y_a(\tau_n)-y)^2 + (z_a(\tau_n)-z)^2} \\ d_a(\mathbf{r},\tau_n) = \sqrt{x_a^2(\tau_n) + y_a^2(\tau_n) + z_a^2(\tau_n)} \end{cases}, \qquad (13)$$
    where d_{a0}(r, τ_n) denotes the distance from the antenna to the target at coordinate r at time τ_n, and d_a(r, τ_n) denotes the distance from the antenna to the origin of the ground scene.
  • Linear interpolation is performed on s_0(m, τ_n) to obtain estimates of the corresponding IFFT values at all ΔR(r, τ_n) points, and the sub-image data at each time τ_n are obtained by multiplying the data by the compensation term [11].
    $$s_{int}(\mathbf{r},\tau_n) = \mathrm{Interp}[s_0(m,\tau_n)]\,\exp\!\left(+j\,\frac{4\pi f_0\,\Delta R(\mathbf{r},\tau_n)}{c}\right), \qquad (14)$$
    $$S_{int}(\tau_n) = \{s_{int}(\mathbf{r}_{x,y},\tau_n)\}\big|_{x,y\in[1,L]}, \qquad (15)$$
    where L denotes the side length, in pixels, of the imaged scene.
  • Let θ denote the synthetic aperture angle. The sub-images of all τ_n within each synthetic aperture θ_i are superimposed, the modulus of each sub-aperture image is then stretched, and finally all sub-aperture images are superimposed to obtain the final imaging result.
    $$I_a = \sum_{\theta_i} \psi\!\left(\sum_{\tau_n \in \theta_i} S_{int}(\tau_n)\right) \qquad (16)$$
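The aperture grouping and modulus-stretched accumulation of Equation (16) can be sketched as a small helper. The names `sub_images`, `aperture_len`, and `psi` are our illustrative assumptions, not the authors' code:

```python
import numpy as np

def stretch_accumulate(sub_images, aperture_len, psi):
    """Eq. (16) sketch: group per-pulse sub-images into synthetic
    apertures, modulus-stretch each aperture sum with psi, then
    superimpose all apertures.

    sub_images   : iterable of complex 2D arrays S_int(tau_n)
    aperture_len : number of slow-time samples per aperture theta_i
    psi          : elementwise modulus-stretch function
    """
    sub_images = list(sub_images)
    Ia = np.zeros_like(sub_images[0], dtype=complex)
    for i in range(0, len(sub_images), aperture_len):
        # Inner sum over tau_n in theta_i, then the stretch psi(.).
        aperture_sum = np.sum(sub_images[i:i + aperture_len], axis=0)
        Ia += psi(aperture_sum)
    return Ia
```

With the identity stretch `psi = lambda x: x`, this reduces to the plain coherent sum of Equation (6), which makes the role of ψ in Equation (16) explicit.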
Among the usual contrast stretch transformations, histogram equalization raises the brightness of dark areas when low-brightness pixels are in the majority, which conflicts with our requirement to significantly suppress the low-modulus part. Gamma transformation and piecewise linear transformation, on the other hand, can be parameterized so that highlights are preserved while dark areas are significantly reduced. Therefore, this paper mainly studies the effect of modulus stretch on image contour thinning using gamma transformation and piecewise linear transformation.
Gamma transformation in image processing operates on real numbers, while radar data are complex, so it is slightly modified to perform a modulus stretch. The gamma transformation based on modulus stretch is as follows:
$$\psi_1(x) = |x|^{\gamma}\, x \qquad (17)$$
Similarly, the piecewise linear transformation of complex numbers is given in Equation (18), where T is the threshold value, k_1 is the enhancement coefficient, and k_2 is the suppression coefficient.
$$\psi_2(x) = \begin{cases} k_1 x, & |x| \ge T \\ k_2 x, & |x| < T \end{cases} \qquad (18)$$
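Both stretch functions can be written directly in NumPy. This is a hedged sketch of Equations (17) and (18); in particular, applying the threshold T relative to the data's own scale is our assumption:

```python
import numpy as np

def gamma_stretch(x, gamma):
    """Eq. (17): gamma-based modulus stretch of complex data."""
    return np.abs(x) ** gamma * x

def piecewise_stretch(x, T, k1, k2):
    """Eq. (18): piecewise linear modulus stretch.
    Values with modulus >= T are amplified by k1, the rest are
    suppressed by k2. Applying T to moduli normalized to [0, 1]
    is an assumption of this sketch."""
    return np.where(np.abs(x) >= T, k1 * x, k2 * x)
```

For example, with T = 0.5, k1 = 1.2 and k2 = 0.1, a sample of modulus 1.0 is boosted to 1.2 while a sample of modulus 0.2 is suppressed to 0.02, which is exactly the "keep the highlights, crush the dark areas" behavior described above.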

3. Experimental Results

3.1. Simulated Data Imaging

The simulation of CSAR data generation and imaging was carried out on the MATLAB platform. The simulated radar had a carrier frequency of 10 GHz, a bandwidth of 600 MHz, a slant distance of 10 km from the ground scene origin, an elevation angle of 30 degrees, 128 frequency samples, an azimuth step of 0.1 degrees, and one full rotation sampled at 3600 positions on the circumference. The side length of the imaging area was 20 m, and the image size was 200 × 200 pixels. All targets were assumed to be planar targets with zero height.

3.1.1. Single Line Target Imaging

The target was a line consisting of dense points at an interval of 0.1 m between x = −5 m and x = 5 m, with y = 0. Figure 2a shows the target location map. Figure 2b shows the imaging result of the traditional backprojection algorithm with a synthetic aperture of 5°. Figure 2c is the full-aperture imaging result of the traditional backprojection algorithm.
Figure 3 shows the imaging results of adding the gamma transformation as the modulus stretch to the backprojection algorithm. Figure 3a–d shows the gamma-transformed imaging results with a synthetic aperture of 5°. Figure 3e–h shows the results of full-aperture coherent imaging after gamma transformation of the images at each time τ_n. The gamma transformation parameters of Figure 3a–d and Figure 3e–h were 1.2, 2, 3.5 and 5, respectively.
Figure 4 is the imaging result of adding the piecewise linear transformation to the modulus stretch based on the backprojection algorithm. Figure 4a–d shows the results of piecewise linear transformed imaging with a synthetic aperture of 5°. Figure 4e–h shows the results of coherent imaging with a full aperture after piecewise linear transformation of images at each time τ n .
It can be seen from Figure 3 and Figure 4 that the results of synthetic aperture imaging are better than those of full-aperture imaging, and that piecewise linear transformation outperforms gamma transformation in both full-aperture and 5° synthetic aperture imaging. As the gamma transformation parameter and the piecewise linear transformation threshold increase, the contour of the imaging results tends to extend outward along the contour line; the higher the parameter value, the more obvious the extension. This helps to strengthen contours, enhance features, and improve recognition accuracy, but at the cost of increasing imaging noise and degrading imaging accuracy. Since the noise of the gamma transformation was too large, it was no longer used in the subsequent simulations; piecewise linear transformation was adopted instead.

3.1.2. Double-Line Target Imaging

The target was two lines consisting of dense points at an interval of 0.1 m between x = −5 m and x = 5 m. The y coordinates of the two lines were ±0.2, ±0.15 and ±0.1 m, that is, the distances between the two lines were 0.4, 0.3 and 0.2 m, as shown in Figure 5.
Figure 6a–c shows the imaging results of the traditional backprojection algorithm with a synthetic aperture of 5° when the two lines are 0.4, 0.3 and 0.2 m apart. Figure 6d–f shows the imaging results of Figure 6a–c after piecewise linear transformation based on the 5° synthetic aperture. The traditional backprojection algorithm can distinguish the two lines when they are 0.4 m apart; at 0.3 m the two lines become difficult to distinguish, and at 0.2 m they are completely indistinguishable. In contrast, the imaging results after piecewise linear transformation can distinguish the two lines in all cases.
Figure 7a–c shows the imaging results of the traditional backprojection algorithm with the full synthetic aperture when the two lines are 0.4, 0.3 and 0.2 m apart. Figure 7d–f shows the corresponding full-aperture imaging results after piecewise linear transformation at each slow time τ_n. As can be seen from Figure 7, the two lines can be distinguished in the piecewise-linear-transformed results in all three cases, while the traditional backprojection algorithm cannot distinguish the two lines when the distance decreases to 0.2 m.
The above simulation results show that imaging after piecewise linear transformation outperforms the traditional backprojection algorithm and can distinguish very close targets that are merged together in traditional backprojection imaging.

3.2. Vehicle Imaging in Gotcha Dataset

The Target Discrimination Research subset of the Gotcha dataset, published by the Air Force Research Laboratory from circular flight observations over a parking lot, was used for the imaging test. The airborne synthetic aperture radar collected data over a 5 km diameter area on 31 altitude orbits, and phase history data of 56 targets were extracted from the large dataset. The targets include 33 civilian vehicles (with repeated models), 22 reflectors, and an open area. The circular synthetic aperture radar provides 360 degrees of azimuth around each target. A Chevrolet Impala LT in the dataset, with data label fcarA1 and orbit number 214, was selected for the modulus stretch imaging. The results are shown in the following figures.
Figure 8a shows the imaging result of the traditional backprojection algorithm with a synthetic aperture of 5°. Figure 8e is the full-aperture imaging result of the traditional backprojection algorithm. Figure 8b–d shows the imaging results after piecewise linear transformation based on the 5° synthetic aperture. Figure 8f–h shows the full-aperture imaging results after piecewise linear transformation at each slow time τ_n. The parameter k_1 of the piecewise linear transformation was fixed at 1.2, k_2 was fixed at 0.1, and T was 0.5, 0.8 and 0.9, in order.
It can be clearly seen from the figure that the vehicle image obtained by the traditional backprojection algorithm is blurred and does not represent the contour structure and details of the vehicle well. The imaging results after piecewise linear transformation significantly improve the contour information of the vehicle image and can resolve the fine double-line contour on one side of the vehicle, two close contours that the traditional backprojection algorithm cannot separate. These fine structures may play an active role in target recognition. At the same time, note that full-aperture imaging not only thins the contour but also loses part of the contour details, which degrades imaging accuracy. The piecewise linear transformation based on a synthetic aperture, however, thins the contour while losing less of the original contour, so its imaging result is better.
Figure 9 shows images of four different vehicle models after modulus stretch by piecewise linear transformation. Figure 9a–d shows the imaging results of the traditional backprojection algorithm for four vehicle models: Chevrolet Impala LT, Mitsubishi Galant ES, Toyota Highlander and Chevrolet HHR LT. Figure 9e–h corresponds to Figure 9a–d after the modulus is stretched using the piecewise linear transformation. The parameters of the piecewise linear transformation were k_1 = 1.2, k_2 = 0.1 and T = 0.9. It can be seen that modulus stretching greatly strengthens the contour features of all vehicle models and improves their recognizability. Target recognition can subsequently be carried out by detecting features such as the aspect ratio of the target contour, the relative position of the internal and external contours, and the contour angle.
In order to obtain the optimal threshold parameter of the piecewise linear transformation, different thresholds T were used to image different vehicles, and the degree of contour thinning was calculated. The curves are shown in Figure 10. As can be seen from the figure, when the threshold T is between 0.9 and 0.96, the degree of contour thinning reaches a maximum and then decreases rapidly as T increases further. The experiment shows that with k_1 = 1.2 and k_2 = 0.1, for most of the vehicle data in the Gotcha dataset the maximum degree of contour thinning is obtained at a threshold T of about 0.94.
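A threshold sweep of this kind can be mimicked with a small search that, for each candidate T, stretches the image, binarizes it, and keeps the T maximizing a perimeter/area index. The binarization rule (thresholding at the mean stretched modulus) and the normalization of moduli to [0, 1] are our assumptions for this sketch, not the paper's exact procedure:

```python
import numpy as np

def best_threshold(image, thresholds, k1=1.2, k2=0.1):
    """Sweep the piecewise-linear threshold T and keep the one that
    maximizes a perimeter/area thinning index (illustrative sketch
    mirroring the Figure 10 experiment)."""
    def thinning(binary):
        # D(I): exposed pixel edges (4-neighbourhood) / target area.
        area = binary.sum()
        if area == 0:
            return 0.0
        padded = np.pad(binary, 1)
        perim = sum(
            (padded[1:-1, 1:-1] & ~np.roll(padded, s, a)[1:-1, 1:-1]).sum()
            for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
        return perim / area

    mod = np.abs(image) / np.abs(image).max()  # normalized modulus (assumption)
    best_T, best_D = thresholds[0], -1.0
    for T in thresholds:
        stretched = np.where(mod >= T, k1, k2) * mod   # Eq. (18) on moduli
        binary = stretched > stretched.mean()          # assumed binarization
        D = thinning(binary)
        if D > best_D:
            best_T, best_D = T, D
    return best_T, best_D
```

On real data, `thresholds` would span the range explored in Figure 10 (for example 0.5 to 0.96), and the returned T plays the role of the reported optimum near 0.94.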
Data for 150 vehicles were randomly selected from the Gotcha dataset and imaged individually by the traditional backprojection algorithm and by the modulus stretch with piecewise linear transformation. The degree of contour thinning was then calculated, as shown in Figure 11.
It can be seen from the figure that the degree of contour thinning after the modulus is stretched by piecewise linear transformation is generally higher than that of the traditional backprojection algorithm. After modulus stretch, the degree of contour thinning increases by 134% on average, which greatly enhances the recognizability of the target.
We have made a preliminary study of target classification based on contour thinning imaging with piecewise linear transformation. Preliminary experimental results show that high classification accuracy can be achieved by using modulus stretch imaging combined with an appropriate classification method.

4. Conclusions

This paper presents a contour thinning imaging method. Based on the backprojection algorithm, the method performs a modulus stretch on each sub-aperture image to enlarge the target's perimeter-to-area ratio and make the target contour thinner and clearer. Experimental results on the Gotcha dataset show that the contour of the image after modulus stretch with piecewise linear transformation is thinner and clearer than the original, with the degree of contour thinning 134% higher on average. Based on this method, a target classification algorithm using contour thinning imaging will be studied further.

Author Contributions

Conceptualization, R.H.; Formal analysis, R.H. and K.Z.; Funding acquisition, Z.P.; Investigation, R.H. and K.Z.; Methodology, R.H.; Project administration, Z.P.; Software, R.H.; Supervision, Z.P.; Visualization, R.H.; Writing—original draft, R.H.; Writing—review and editing, Z.P.

Funding

This work is supported by National Natural Science Foundation of China (61571096, 61775030), the Key Laboratory Fund of Beam Control, Chinese Academy of Sciences (2017LBC003) and Sichuan Science and Technology Program (2019YJ0167).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Peng, Z.; Wang, H.; Zhang, G.; Yang, S. Spotlight SAR images restoration based on tomography model. In Proceedings of the 2009 Asia-Pacific Conference on Synthetic Aperture Radar, APSAR 2009, Xian, China, 26–30 October 2009; pp. 1060–1063.
  2. Peng, Z.; Zhang, J.; Meng, F.; Dai, J. Time-Frequency Analysis of SAR Image Based on Generalized S-Transform. In Proceedings of the 2009 International Conference on Measuring Technology and Mechatronics Automation (ICMTMA2009), Zhangjiajie, China, 11–12 April 2009; Volume 1, pp. 556–559.
  3. Peng, Z.; Liu, S.; Tian, G.; Chen, Z.; Tao, T. Bridge detection and recognition in remote sensing SAR images using pulse coupled neural networks. In Proceedings of the 7th International Symposium on Neural Networks, ISNN 2010, Shanghai, China, 6–9 June 2010; Volume 67, pp. 311–320.
  4. Tao, T.; Peng, Z.; Yang, C.; Wei, F.; Liu, L. Targets detection in SAR image used coherence analysis based on S-transform. In Proceedings of the International Conference on Electric and Electronics, EEIC 2011, Nanchang, China, 20–22 June 2011; Volume 98, pp. 1–9.
  5. Soumekh, M. Reconnaissance with slant plane circular SAR imaging. IEEE Trans. Image Process. 1996, 5, 1252–1265.
  6. Cantalloube, H.M.J.; Koeniguer, E.C.; Oriot, H. High resolution SAR imaging along circular trajectories. In Proceedings of the International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 850–853.
  7. Casteel, J.C.H.; Gorham, L.A.; Minardi, M.J.; Scarborough, S.M.; Naidu, K.D.; Majumder, U.K. A challenge problem for 2D/3D imaging of targets from a volumetric data set in an urban environment. In Algorithms for Synthetic Aperture Radar Imagery XIV; International Society for Optics and Photonics: Orlando, FL, USA, 2007.
  8. Ertin, E.; Austin, C.D.; Sharma, S.; Moses, R.L.; Potter, L.C. GOTCHA experience report: Three-dimensional SAR imaging with complete circular apertures. In Proceedings of the Defense and Security Symposium, Orlando, FL, USA, 10–11 April 2007.
  9. Tan, W.X.; Wang, Y.P.; Wen, H.; Wu, Y.R.; Li, N.J.; Hu, C.F.; Zhang, L.X. Circular SAR experiment for human body imaging. In Proceedings of the 1st Asian and Pacific Conference on Synthetic Aperture Radar, Huangshan, China, 5–9 November 2007; pp. 90–93.
  10. Dungan, K.E.; Potter, L.C.; Blackaby, J.; Nehrbass, J. Discrimination of civilian vehicles using wide-angle SAR. In Proceedings of the SPIE Defense and Security Symposium, Orlando, FL, USA, 17–18 March 2008.
  11. Gorham, L.A.; Moore, L.J. SAR image formation toolbox for MATLAB. In Proceedings of the SPIE Defense, Security, and Sensing, Orlando, FL, USA, 8–9 April 2010.
  12. Dungan, K.E.; Ash, J.N.; Nehrbass, J.W.; Parker, J.T.; Gorham, L.A.; Scarborough, S.M. Wide angle SAR data for target discrimination research. In Proceedings of the SPIE Defense, Security, and Sensing, Baltimore, MD, USA, 25–26 April 2012.
  13. Gianelli, C.D.; Xu, L. Focusing, imaging, and ATR for the Gotcha 2008 wide angle SAR collection. In Proceedings of the SPIE Defense, Security, and Sensing, Baltimore, MD, USA, 1–2 May 2013.
  14. Zhang, X.K.; Zhang, Y.H.; Jiang, J.S. Circular SAR imaging approximated by spotlight processing. In Proceedings of the 7th International Symposium on Antennas, Propagation and EM Theory, Guilin, China, 26–29 October 2006; pp. 1–4.
  15. Zhang, X.K. Study on Imaging Mechanism and Algorithm of High-Resolution Circular Synthetic Aperture Radar. Ph.D. Thesis, Chinese Academy of Sciences, Beijing, China, 2007.
  16. Lin, Y.; Hong, W.; Tan, W.X.; Wang, Y.P.; Wu, Y.R. Interferometric Circular SAR Method for Three-Dimensional Imaging. IEEE Geosci. Remote Sens. Lett. 2011, 8, 1026–1030.
  17. Lin, Y.; Hong, W.; Tan, W.X.; Wu, Y.R. Extension of Range Migration Algorithm to Squint Circular SAR Imaging. IEEE Geosci. Remote Sens. Lett. 2011, 8, 651–655.
  18. Lin, Y.; Hong, W.; Tan, W.X.; Wang, Y.P.; Xiang, M.S. Airborne Circular SAR Imaging: Results at P-Band. In Proceedings of the International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 5594–5597.
  19. Hong, W. Progress in Circular SAR Imaging Technique. J. Radars 2012, 1, 124–135.
  20. Teng, F.; Hong, W.; Lin, Y. Aspect Entropy Extraction Using Circular SAR Data and Scattering Anisotropy Analysis. Sensors 2019, 19, 346.
  21. Ponce, O.; Prats, P.; Rodriguez-Cassola, M.; Scheiber, R.; Reigber, A. Processing of Circular SAR Trajectories with Fast Factorized Back-Projection. In Proceedings of the International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; pp. 3692–3695.
  22. Liao, Y.; Zhou, S.; Xing, M.D.; Bao, Z. An Imaging Algorithm for Airborne Circular Scanning SAR Based on the Method of Series Reversion. J. Electron. Inf. Technol. 2012, 34, 2587–2593.
  23. Liao, Y.; Xing, M.D.; Zhang, L.; Bao, Z. A novel modified Omega-K algorithm for circular trajectory scanning SAR imaging using series reversion. EURASIP J. Adv. Signal Process. 2013, 2013, 64.
  24. Li, H.L.; Zhang, L.; Xing, M.D.; Bao, Z. Expediting back-projection algorithm for circular SAR imaging. Electron. Lett. 2015, 51, 785–786.
  25. Farhadi, M.; Jie, C. Distributed compressive sensing for multi-baseline circular SAR image formation. In Proceedings of the International Conference on Imaging Systems and Techniques, IST 2017, Beijing, China, 18–20 October 2017.
  26. Izumi, Y.; Demirci, S.; Bin, B.; Mohd, Z.; Watanabe, T.; Sumantyo, J.T.S. Analysis of Dual- and Full-Circular Polarimetric SAR Modes for Rice Phenology Monitoring: An Experimental Investigation through Ground-Based Measurements. Appl. Sci. 2017, 7, 368.
  27. Zhao, Y.; Lin, Y.; Hong, W.; Yu, L.J. Adaptive imaging of anisotropic target based on circular-SAR. Electron. Lett. 2016, 52, 1406–1407.
  28. Chen, L.; An, D.; Huang, X.; Zhou, Z. A 3D Reconstruction Strategy of Vehicle Outline Based on Single-Pass Single-Polarization CSAR Data. IEEE Trans. Image Process. 2017, 26, 5545–5554.
  29. Xie, H.; Shi, S.; An, D.; Wang, G.; Wang, G.; Xiao, H.; Huang, X.; Zhou, Z.; Xie, C.; Wang, F.; et al. Fast Factorized Backprojection Algorithm for One-Stationary Bistatic Spotlight Circular SAR Image Formation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1494–1510. [Google Scholar] [CrossRef]
  30. Sun, C.; Wang, B.; Fang, Y.; Song, Z.; Wang, S. Multichannel and Wide-Angle SAR Imaging Based on Compressed Sensing. Sensors 2017, 17, 295. [Google Scholar] [CrossRef] [PubMed]
  31. Liu, T.; Pi, Y.; Yang, X. Wide-Angle CSAR Imaging Based on the Adaptive Subaperture Partition Method in the Terahertz Band. IEEE Trans. Terahertz Sci. Technol. 2018, 8, 165–173. [Google Scholar] [CrossRef]
  32. Wu, B.; Gao, Y.; Ghasr, M.T.; Zoughi, R. Resolution-Based Analysis for Optimizing Subaperture Measurements in Circular SAR Imaging. IEEE Trans. Instrum. Meas. 2018, 67, 2804–2811. [Google Scholar] [CrossRef]
  33. Hao, J.; Li, J.; Pi, Y. Three-Dimensional Imaging of Terahertz Circular SAR with Sparse Linear Array. Sensors 2018, 18, 2477. [Google Scholar] [CrossRef] [PubMed]
  34. Wei, Z.; Zhang, B.; Wu, Y. Accurate Wide Angle SAR Imaging Based on LS-CS-Residual. Sensors 2019, 19, 490. [Google Scholar] [CrossRef]
  35. Wang, X.; Peng, Z.; Zhang, P.; He, Y. Infrared Small Target Detection via Nonnegativity-Constrained Variational Mode Decomposition. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1700–1704. [Google Scholar] [CrossRef]
  36. Liu, X.; Chen, Y.; Peng, Z.; Wu, J.; Wang, Z. Infrared Image Super-Resolution Reconstruction Based on Quaternion Fractional Order Total Variation with Lp Quasinorm. Appl. Sci. 2018, 8, 1864. [Google Scholar] [CrossRef]
  37. Zhang, T.; Wu, H.; Liu, Y.; Peng, L.; Yang, C.; Peng, Z. Infrared small target detection based on non-convex optimization with Lp-norm constraint. Remote Sens. 2019, 11, 559. [Google Scholar] [CrossRef]
  38. Peng, L.; Zhang, T.; Liu, Y.; Li, M.; Peng, Z. Infrared Dim Target Detection Using Shearlet’s Kurtosis Maximization under Non-Uniform Background. Symmetry 2019, 11, 723. [Google Scholar] [CrossRef]
Figure 1. Single azimuth stripe image: (a) azimuth = 0°; (b) azimuth = 45°; (c) azimuth = 100°.
Figure 2. Single line target imaging: (a) the target location map; (b) imaging with the traditional backprojection algorithm over a 5° synthetic aperture; (c) full-aperture imaging with the traditional backprojection algorithm.
Figure 3. Single line target imaging based on gamma transformation: (a,e) γ = 1.2; (b,f) γ = 2; (c,g) γ = 3.5; (d,h) γ = 5.
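The gamma transformation in Figure 3 is the standard power-law mapping s = r^γ applied to the normalized image modulus, where γ > 1 suppresses weak (sidelobe) responses relative to strong scatterers. A minimal sketch follows; the assumption that the modulus is pre-normalized to [0, 1] is ours, not stated in the caption:

```python
import numpy as np

def gamma_stretch(modulus, gamma):
    """Power-law (gamma) stretch of a normalized image modulus.

    Assumes `modulus` is already normalized to [0, 1]. With gamma > 1,
    mid-level values are pushed toward zero while the peak stays at 1,
    which is what thins bright contours in the imaging result.
    """
    m = np.asarray(modulus, dtype=float)
    return m ** gamma

# Larger gamma compresses weak responses more aggressively:
row = np.array([0.1, 0.5, 1.0])
mild = gamma_stretch(row, 1.2)
strong = gamma_stretch(row, 5.0)
```

With γ = 5 (Figure 3d,h), a 0.5-level sidelobe drops to about 0.03, while γ = 1.2 barely changes it, which matches the trend of progressively thinner contours across the panels.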
Figure 4. Single line target imaging based on piecewise linear transformation: (a,e) T = 0.5, k1 = 1.5, k2 = 0.5; (b,f) T = 0.8, k1 = 1.2, k2 = 0.1; (c,g) T = 0.9, k1 = 1.2, k2 = 0.1; (d,h) T = 0.95, k1 = 1.2, k2 = 0.1.
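The piecewise linear transformation in Figures 4, 6 and 7 is parameterized by a threshold T and two slopes k1, k2. The captions give only these parameters, so the functional form below is an assumption: slope k1 below the threshold, slope k2 above it, continuous at T, with the output clipped back to [0, 1]:

```python
import numpy as np

def piecewise_linear_stretch(modulus, T, k1, k2):
    """Piecewise linear stretch of a normalized image modulus (assumed form).

    Values below threshold T are scaled by slope k1; values above T
    continue from k1*T with slope k2, keeping the mapping continuous.
    With a high T and a small k2 (e.g. T = 0.95, k2 = 0.1), near-peak
    responses are flattened, narrowing the bright contour.
    """
    m = np.asarray(modulus, dtype=float)
    out = np.where(m < T, k1 * m, k1 * T + k2 * (m - T))
    return np.clip(out, 0.0, 1.0)

row = np.array([0.2, 0.5, 1.0])
stretched = piecewise_linear_stretch(row, T=0.95, k1=1.2, k2=0.1)
```

Under this sketch, the Figure 4d,h setting (T = 0.95, k1 = 1.2, k2 = 0.1) leaves sub-threshold values almost unchanged while saturating everything above T, consistent with the thinnest contours appearing at the largest threshold.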
Figure 5. Double-line target location map. The distance between the two lines is (a) 0.4; (b) 0.3; (c) 0.2.
Figure 6. Double-line target imaging with a synthetic aperture of 5°. The piecewise linear transformation parameters are T = 0.95, k1 = 1.2, k2 = 0.1: (a,d) the distance between the two lines is 0.4; (b,e) the distance between the two lines is 0.3; (c,f) the distance between the two lines is 0.2.
Figure 7. Double-line target imaging with full synthetic aperture. The piecewise linear transformation parameters are T = 0.95, k1 = 1.2, k2 = 0.1: (a,d) the distance between the two lines is 0.4; (b,e) the distance between the two lines is 0.3; (c,f) the distance between the two lines is 0.2.
Figure 8. Imaging of vehicle targets in the Gotcha dataset by piecewise linear transformation with different thresholds: (a,e) No piecewise linear transform; (b,f) T = 0.5; (c,g) T = 0.8; (d,h) T = 0.9.
Figure 9. Imaging results of different models of vehicle targets in the Gotcha dataset by piecewise linear transformation: (a,e) Chevrolet Impala LT; (b,f) Mitsubishi Galant ES; (c,g) Toyota Highlander; (d,h) Chevrolet HHR LT.
Figure 10. The degree of contour thinning varies with threshold T on different models: (a) Chevrolet Impala LT; (b) Mitsubishi Galant ES; (c) Toyota Highlander; (d) Chevrolet HHR LT.
Figure 11. Comparison of contour thinning degree between traditional backprojection imaging and modulus stretch imaging: (a) 5° aperture imaging; (b) 10° aperture imaging; (c) Full aperture imaging.
Hu, R.; Peng, Z.; Zheng, K. Modulus Stretch-Based Circular SAR Imaging with Contour Thinning. Appl. Sci. 2019, 9, 2728. https://doi.org/10.3390/app9132728