Article

A Three-Dimensional Enhanced Imaging Method on Human Body for Ultra-Wideband Multiple-Input Multiple-Output Radar

College of Electronic Science, National University of Defense Technology, Changsha 410073, China
*
Author to whom correspondence should be addressed.
Electronics 2018, 7(7), 101; https://doi.org/10.3390/electronics7070101
Submission received: 4 June 2018 / Revised: 20 June 2018 / Accepted: 22 June 2018 / Published: 25 June 2018

Abstract

High-resolution three-dimensional (3D) images can be acquired by planar Multiple-Input Multiple-Output (MIMO) array radar, making subsequent tasks such as detection and tracking easier. However, to keep the system portable and reduce cost, the MIMO radar array adopts a sparse layout with a limited number of antennas, which limits the imaging performance of the system. In this paper, the 3D back-projection imaging algorithm is verified with experimental human-body data from a planar MIMO array and an enhanced radar imaging method is proposed. The Lucy-Richardson (LR) algorithm, a deconvolution method normally used for optical images, is applied to radar images. Since the LR algorithm amplifies the noise level in a noise-contaminated system, a regularization method based on the Total Variation constraint is further incorporated into the LR algorithm to suppress its ill-posed characteristics. Compared with several similar methods, the proposed method shows a higher image Signal-to-Noise Ratio, a faster rate of convergence, a higher structural similarity and a smaller relative error. It also reduces the loss of image information after enhancement and improves radar image quality, yielding weaker grating lobes and clearer human limbs. The proposed method overcomes the disadvantages mentioned above and is verified by simulation experiment and real data measurement.

1. Introduction

Radar imaging technology is an important field in the development of radar techniques [1,2]. It works in all-weather, all-day conditions with high resolution and anti-interference capability, whereas optical imaging can be seriously affected by weather, visible light and other environmental disturbances. High-resolution ultra-wideband radars and radar sensor networks have become new tools for target detection, indoor localization, positioning applications and imaging [3,4,5,6,7,8]. More information can be obtained as radar imaging gradually expands from two-dimensional (2-D) (range-azimuth) to three-dimensional (3-D) (range-azimuth-height) scenes [9]. Radar imaging can recognize the human body and various other targets in free space without visible light, so it plays a significant role in many applications, such as military operations, anti-terrorism, security [8] and disaster relief.
Multiple-Input Multiple-Output (MIMO) radar imaging is widely applied in scenes where real-time data acquisition is needed, as it has a high data acquisition rate [10,11]. A multi-antenna structure is adopted at both the transmitting and receiving terminals of a MIMO radar to obtain far more observation channels than the number of real transmitting and receiving array elements [12,13]. Because it gathers information through parallel multi-channels, MIMO radar can collect real-time information with different amplitudes, time delays and phases. A reasonable arrangement of the MIMO radar antennas can reduce the complexity of imaging processing and save the cost of hardware implementation [14]. Radar imaging algorithms reconstruct the image of targets. At present, widely applied imaging algorithms include Range Doppler [15], Chirp Scaling [16], ωK [17] and Back Projection (BP) [18]. Another direction for radar imaging is a hybrid of Kirchhoff migration and Stolt's frequency-wavenumber (F-K) migration [19]. In a sparse MIMO array, the inter-element spacing is larger than a wavelength, so grating lobes inevitably occur. The Coherence Factor (CF) weighted method [20] can suppress grating lobes, though its suppression performance is unsatisfactory when the target energy is weak. From the perspective of signal processing, the output of a MIMO radar imaging system is a low-pass-filtered version of the original image due to the low-pass property of the antenna system; hence, information beyond the cut-off frequency is lost. From the perspective of image processing, the output of an imaging system is the convolution of the original image and the system function. This convolution process is referred to as image degradation.
Radar imaging resolution is limited by the Rayleigh criterion [21]. Super-resolution methods such as Multiple Signal Classification [22] and Capon [23] can be used to increase image resolution; they estimate high-resolution details of the original signal and are applied to target location and direction finding, but their complexity is high. Compressed Sensing (CS) [8] offers potentially higher resolution than classical imaging methods. However, because CS radar imaging treats the 2-D radar image as a one-dimensional signal, the sensing matrix becomes very large, consuming much memory and making the matrix-vector multiplications in recovery algorithms very time-consuming [24]. A recently proposed enhanced imaging method based on the wavelet transform has a simple process, but the edges of texture areas have no self-similarity; an enhanced imaging method based on neural networks performs well but suffers from a huge computational cost, a long training process and a lack of real-time capability [25,26].
Enhanced imaging methods based on deconvolution have the merits of an understandable principle, an easy derivation process, low computational complexity and little information loss during processing. Deconvolution is mainly implemented by two categories of methods. The first category is linear methods: classic inverse filtering can enhance an image but is only applicable in noise-free scenes [27]; Wiener filtering needs to assume that the image satisfies a generalized stationary process [28]; Kalman filtering has been applied in [29] but has a low adaptive degree and a high computational cost when local features are complex; the pseudo-inverse algorithm based on singular value decomposition has been reported in [30] but is very unstable when noise is involved. The second category, nonlinear methods, mainly includes the Lucy-Richardson (LR) algorithm [31,32] and regularization algorithms [33,34,35,36,37,38,39]. The former is based on Bayesian analysis but can amplify the noise level, while the latter [33] suppresses the ill-posedness of deconvolution by introducing constraint conditions. Nonlinear iterative techniques have become increasingly accepted as tools for image enhancement and often perform better than linear methods. In many cases, images need to be modeled by a Poisson random field, such as astronomical images, confocal microscope images, CT images and PET images; these images are the result of many photons following a Poisson random process over a certain time. The LR algorithm fully considers the statistical fluctuation of the signal [40] and has the ability of frequency-spectrum extrapolation [31,32]. In Boutet de Monvel et al. [41,42], the authors use an LR algorithm with maximum entropy regularization (LRMER), clearly explained in [41], to improve the limits of deconvolution. The LR algorithm performs well in radar image enhancement [43].
The regularization method is a mathematical approach that transforms an ill-posed problem into a well-posed one. The Tikhonov regularization [44] and the Tikhonov-Miller (TM) regularization [45,46] restrain noise efficiently during image enhancement but can result in over-smoothing. An improved image deconvolution enhancement model has been proposed, but it blurs details and edges since it cannot distinguish noise in radar images.
The TV regularization method overcomes the noise amplification brought by the LR algorithm and ensures the existence, uniqueness and continuity of the solution [46]. In this paper, the object of our research is the human body, and a clearer imaging result (in which the human body parts are recognizable) can be obtained by a suitable deconvolution method. We present the deconvolution method we use, the LR algorithm; since it does not converge to a noise-free solution, we regularize it using a functional derived from the Total Variation (TV) [46]. The modified method obtains a high-resolution radar image while maintaining the edges of the radar image and curbing the ill-posed tendency of deconvolution to amplify noise. In other words, it transforms an ill-posed problem into a well-posed one, from which a good enhanced image is obtained.
The rest of the paper is organized as follows. Section 2 presents the echo signal model, the antenna array model and the imaging algorithm. Section 3 presents the image enhancement method, the TV regularization method based on the LR algorithm and the Point Spread Function (PSF) model; several evaluation criteria for the algorithms are also proposed and their rationale analyzed. Section 4 presents comparisons between the proposed algorithm and several other algorithms using a simulation experiment and a real data measurement, respectively. Finally, conclusions are drawn in Section 5.

2. MIMO Radar Imaging Model

In this section, we mainly describe the model of the MIMO radar imaging system. The working process of a MIMO radar system is shown in Figure 1a. Since signals can be transmitted from multiple antennas and observed from different angles at the receiving terminal, MIMO radar achieves a strong capability for processing redundant multichannel data at the receiver. Owing to these characteristics, MIMO radar can effectively improve the measurement of target parameters [13].
The stepped-frequency signal, widely used in radar imaging, is considered in this paper. Compared with a pulse signal, the stepped-frequency signal can more easily achieve a high transmission power and a wide bandwidth. Compared with the chirp signal, it relaxes the limit on the minimum detection distance and reduces the complexity of the receiver. The stepped-frequency signal considered here consists of $I$ frequency points with an impulse width of $t_\tau$ and a frequency interval of $\Delta f$. Suppose the initial frequency corresponding to the first impulse is $f_0$; then the transmitted impulse string constitutes an ultra-wideband signal with a frequency scope of $f_0 \sim f_0 + (I-1)\Delta f$ and a bandwidth of $B = I\Delta f$. $T_p$ denotes the pulse repetition period, $T_\tau$ denotes the pulse repetition interval and $f_i$ denotes the $i$-th frequency point, where $f_i = f_0 + i\Delta f$. Figure 1b shows the diagram of the stepped-frequency signal, from which the transmitted signal can be written as:

$$p(t) = \sum_{i=0}^{I-1} \exp\big(j2\pi(f_0 + i\Delta f)t\big)\,\mathrm{rect}\!\left(\frac{t - t_\tau/2 - iT_p}{t_\tau}\right)$$

where $\mathrm{rect}(\cdot)$ denotes the rectangular window function.
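As an illustration, the stepped-frequency pulse train above can be synthesized numerically. This is a minimal sketch: the carrier, step size, pulse widths and sample rate below are made-up values, not the paper's system parameters.

```python
import numpy as np

def stepped_frequency_signal(f0, delta_f, n_freq, t_tau, Tp, fs):
    """Synthesize a stepped-frequency pulse train p(t).

    Each of the n_freq pulses carries one tone f_i = f0 + i*delta_f,
    gated by a rectangular window of width t_tau and repeated every Tp.
    """
    t = np.arange(0, n_freq * Tp, 1.0 / fs)
    p = np.zeros_like(t, dtype=complex)
    for i in range(n_freq):
        fi = f0 + i * delta_f
        # rect((t - t_tau/2 - i*Tp) / t_tau): pulse i occupies [i*Tp, i*Tp + t_tau)
        gate = (t >= i * Tp) & (t < i * Tp + t_tau)
        p[gate] += np.exp(1j * 2 * np.pi * fi * t[gate])
    return t, p

# Example: 8 frequency steps of 0.5 MHz starting at 1 MHz (illustrative values)
t, p = stepped_frequency_signal(f0=1e6, delta_f=0.5e6, n_freq=8,
                                t_tau=2e-6, Tp=4e-6, fs=50e6)
```

Because the gates of successive pulses are disjoint ($t_\tau < T_p$), each sample carries at most one tone of unit amplitude.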
In this paper, a BP algorithm based on the stepped-frequency signal is used for MIMO radar imaging. The BP algorithm, a precise imaging algorithm based on time-domain processing, is derived from computed tomography [47]. It first meshes the imaging area, calculates the distance from each pixel to the scattering source and thereby obtains the propagation time delay. The radar echo corresponding to each scattering source is then selected according to this delay and coherently superposed. When a pixel lies at the real location of a scattering source, the superposition yields a large value; when it does not, a small value is obtained. The final image results once the focusing process has been completed for all pixels in the imaging region. The BP algorithm shows good robustness and places no special limit on the system configuration [48]; furthermore, it produces a precise imaging result without approximation and is suitable for arbitrary transmitting and receiving antenna arrays.
Figure 1a shows the working process of the MIMO radar imaging system. Let us use $\Gamma_{t,m},\ m = 1, 2, \ldots, M_T$ to represent the coordinates of the transmitting array elements and $\Gamma_{r,n},\ n = 1, 2, \ldots, N_R$ to represent the coordinates of the receiving array elements. The array transmits a stepped-frequency signal $p(t)$ and there is a scattering source $P$ at $\Gamma'$ with a reflectivity of $A(\Gamma')$. The propagation delay $\tau_{m,n}$ can be denoted as:

$$\tau_{m,n} = \frac{d(\Gamma', \Gamma_{t,m}) + d(\Gamma', \Gamma_{r,n})}{c}$$

where $d(\cdot,\cdot)$ denotes the distance between two coordinates. Without considering the attenuation of the signal during propagation, the signal transmitted by the $m$-th transmitting array element and received by the $n$-th receiving array element can be written as:

$$S_{m,n}(t) = A(\Gamma')\,p(t - \tau_{m,n})$$

The delay of a pixel $\Gamma$ relative to $\Gamma_{t,m}$ and $\Gamma_{r,n}$ in the imaging scene can be written as:

$$\tau'_{m,n} = \frac{d(\Gamma, \Gamma_{t,m}) + d(\Gamma, \Gamma_{r,n})}{c}$$

The imaging result of single-input single-output (SISO) radar obtained by the $m$-th transmitting array element and the $n$-th receiving array element can be written as:

$$G_{m,n}(\Gamma) = A(\Gamma')\,p(\tau'_{m,n} - \tau_{m,n})$$

When $\tau'_{m,n} = \tau_{m,n}$, the pixel point $\Gamma$ is exactly at the scattering source $\Gamma'$. The coherent superposition of the SISO radar images obtained by the $M_T$ transmitting array elements and $N_R$ receiving array elements gives the MIMO radar imaging result at $P$:

$$G(\Gamma) = \sum_{m=1}^{M_T}\sum_{n=1}^{N_R} A(\Gamma')\,p(\tau'_{m,n} - \tau_{m,n})$$

The BP imaging result, which is the coherent superposition of the images at all frequency points of the stepped-frequency signal, can be written as:

$$G(\Gamma) = \sum_{i=0}^{I-1}\sum_{m=1}^{M_T}\sum_{n=1}^{N_R} A(\Gamma')\exp\big[j2\pi(f_0 + i\Delta f)(\tau'_{m,n} - \tau_{m,n})\big]\,\mathrm{rect}\!\left(\frac{\tau'_{m,n} - \tau_{m,n} - t_\tau/2 - iT_p}{t_\tau}\right)$$
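The back-projection summation above can be sketched numerically. The array geometry, frequencies and target position below are illustrative assumptions, not the paper's configuration; the sketch simulates a single point scatterer in the frequency domain and back-projects along a line of candidate pixels.

```python
import numpy as np

c = 3e8
freqs = 2e9 + np.arange(16) * 50e6                        # stepped frequencies f_i
tx = np.array([[x, 0.0, 0.0] for x in np.linspace(-0.4, 0.4, 4)])  # Γ_t,m
rx = np.array([[x, 0.1, 0.0] for x in np.linspace(-0.4, 0.4, 4)])  # Γ_r,n
target = np.array([0.0, 0.0, 2.0])                        # scattering source Γ'

def delay(p, q, r):
    # τ = (d(p, r) + d(q, r)) / c
    return (np.linalg.norm(p - r) + np.linalg.norm(q - r)) / c

# Simulated echo: one complex sample per (m, n, i)
echo = np.array([[[np.exp(-1j * 2 * np.pi * f * delay(t_, r_, target))
                   for f in freqs] for r_ in rx] for t_ in tx])

# Back-project onto a line of candidate pixels along z
zs = np.linspace(1.0, 3.0, 41)
image = np.zeros(len(zs))
for k, z in enumerate(zs):
    pix = np.array([0.0, 0.0, z])
    acc = 0.0 + 0.0j
    for m, t_ in enumerate(tx):
        for n, r_ in enumerate(rx):
            tau_p = delay(t_, r_, pix)                    # pixel delay τ'_{m,n}
            acc += np.sum(echo[m, n] * np.exp(1j * 2 * np.pi * freqs * tau_p))
    image[k] = np.abs(acc)
```

The image magnitude peaks where the pixel delay $\tau'_{m,n}$ matches the true delay $\tau_{m,n}$, i.e., at the target position $z = 2$ m.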
The radar image can be obtained through the process above. In a radar system, however, due to several reasons such as the degradation caused by the antenna aperture diffraction, the influence of noise and clutter in surroundings and the limitation of hardware, the imaging performance is degraded severely. Therefore, image enhancement is needed.

3. Enhanced Imaging Method

Image enhancement must follow the imaging degradation model. In a real radar imaging system, the imaging result is the convolution of the original image and the system function, so the enhancement process is the deconvolution of the degraded image [49]. The imaging degradation model can be written as:

$$g = f \ast H + n$$

where $\ast$ denotes the convolution operation, $f$ denotes the original image, $g$ denotes the degraded image, $H$ denotes the imaging system function and $n$ denotes the noise.
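A minimal numerical sketch of this degradation model, using a stand-in separable kernel as $H$ (the real system function is the radar PSF described in Section 3.1) and illustrative image sizes:

```python
import numpy as np
from scipy.signal import convolve2d

# Degradation model g = f * H + n on a toy scene
rng = np.random.default_rng(0)
f = np.zeros((32, 32))
f[16, 16] = 1.0                                  # ideal point scene
H = np.outer(np.hanning(5), np.hanning(5))       # stand-in PSF (not the paper's)
H /= H.sum()                                     # normalize to unit gain
n = 0.01 * rng.standard_normal(f.shape)          # additive noise
g = convolve2d(f, H, mode="same") + n            # degraded image
```

The point scene is smeared into the kernel shape and overlaid with noise, which is exactly what the deconvolution methods below attempt to undo.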
In this section, the imaging degradation model is introduced and then the Point Spread Function (PSF) model is briefly described. Finally, the LR algorithm is introduced and a modified algorithm is proposed.

3.1. PSF Model

The PSF is the response of the imaging system to an ideal point target and reflects the basic properties of the imaging system [50]; the system function $H$ is the PSF. The azimuth features of the PSF provide an accurate measure of the beamforming, the main lobe of the PSF measures the resolution and the position and amplitude of the grating lobes of the PSF determine the fuzzy areas in the imaging result. The imaging performance of the MIMO radar system can therefore be evaluated with its PSF.
Suppose $P$ denotes an ideal point target at $\Gamma'$ and let $p(t)$ represent the transmitted stepped-frequency signal, whose frequency is discretized to obtain the stepped-frequency waveform. Suppose the spectrum of $p(t)$ spans $[f_0, f_{I-1}]$; then the approximate form of the stepped-frequency signal can be written as:

$$p(t) = \sum_{i=0}^{I-1} P(f_i)\exp(j2\pi f_i t)$$

where $P(f_i)$ denotes the weighting function of the stepped-frequency signal; to simplify the analysis, $P(f_i)$ is set to 1. The two-way Green's function of the $m$-th transmitting array element and the $n$-th receiving array element at the $i$-th frequency point can be written as:

$$\mathrm{Green}(\Gamma_{t,m}, \Gamma_{r,n}, \Gamma', f_i) = \left[\frac{\exp\big(jk_i|\Gamma_{t,m} - \Gamma'|\big)}{4\pi|\Gamma_{t,m} - \Gamma'|}\right]\left[\frac{\exp\big(jk_i|\Gamma_{r,n} - \Gamma'|\big)}{4\pi|\Gamma_{r,n} - \Gamma'|}\right] = \frac{\exp\big[jk_i\big(|\Gamma_{t,m} - \Gamma'| + |\Gamma_{r,n} - \Gamma'|\big)\big]}{16\pi^2|\Gamma_{t,m} - \Gamma'||\Gamma_{r,n} - \Gamma'|}$$

where $k_i = 2\pi/\lambda_i$ denotes the wavenumber and $\lambda_i = c/f_i$. When the imaging system scans a point $r$ in the target region, according to Equations (10) and (7), the PSF can be written as:

$$\mathrm{PSF}(r, \Gamma') = \left|\sum_{m=0}^{M_T-1}\sum_{n=0}^{N_R-1}\sum_{i=0}^{I-1} \mathrm{Green}(\Gamma_{t,m}, \Gamma_{r,n}, \Gamma', f_i)\,\mathrm{Green}^{-1}(\Gamma_{t,m}, \Gamma_{r,n}, r, f_i)\right| = \left|\sum_{m=0}^{M_T-1}\sum_{n=0}^{N_R-1}\sum_{i=0}^{I-1}\frac{|\Gamma_{t,m} - r||\Gamma_{r,n} - r|}{|\Gamma_{t,m} - \Gamma'||\Gamma_{r,n} - \Gamma'|}\exp\big[jk_i\big(|\Gamma_{t,m} - \Gamma'| + |\Gamma_{r,n} - \Gamma'| - |\Gamma_{t,m} - r| - |\Gamma_{r,n} - r|\big)\big]\right|$$
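The PSF expression above can be evaluated directly for a toy array. All geometry and frequency values here are illustrative assumptions, not the paper's system; the point is only that the magnitude is maximal when the scan point $r$ coincides with the target $\Gamma'$.

```python
import numpy as np

c = 3e8
freqs = 2e9 + np.arange(8) * 100e6
tx = np.array([[x, 0.0, 0.0] for x in np.linspace(-0.3, 0.3, 3)])   # Γ_t,m
rx = np.array([[x, 0.05, 0.0] for x in np.linspace(-0.3, 0.3, 3)])  # Γ_r,n
target = np.array([0.0, 0.0, 2.0])                                  # Γ'

def psf(r):
    """|Σ_m Σ_n Σ_i Green(Γ_t,m, Γ_r,n, Γ', f_i) · Green⁻¹(Γ_t,m, Γ_r,n, r, f_i)|"""
    acc = 0.0 + 0.0j
    for t_ in tx:
        for r_ in rx:
            dt_t, dr_t = np.linalg.norm(t_ - target), np.linalg.norm(r_ - target)
            dt_p, dr_p = np.linalg.norm(t_ - r), np.linalg.norm(r_ - r)
            for f in freqs:
                k = 2 * np.pi * f / c
                amp = (dt_p * dr_p) / (dt_t * dr_t)      # magnitude ratio
                acc += amp * np.exp(1j * k * (dt_t + dr_t - dt_p - dr_p))
    return np.abs(acc)
```

At $r = \Gamma'$ every summand equals 1 and the terms add coherently; away from the target the phases decohere and the magnitude drops.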

3.2. TV Regularization Method Based on LR Algorithm

The LR algorithm is an image enhancement technique based on Bayesian iteration, which adds prior knowledge of the image as a constraint. The degraded image is assumed to obey a Poisson distribution and maximum-likelihood estimation is used to perform the iterative operation according to the Bayesian formula [51].
The iteration expression of the LR algorithm can be written as:

$$f^{k+1} = f^k \cdot \left[H^T \ast \frac{g}{H \ast f^k}\right]$$

where $\cdot$ denotes point-wise multiplication, $f$ denotes the solution of the enhancement iteration and $f^k \to f$ when $k \to \infty$. When noise $n$ exists, the iterative expression becomes:

$$f^{k+1} = f^k \cdot \left[H^T \ast \frac{g + n}{H \ast f^k}\right]$$

in which $f^k$ no longer converges to a noise-free solution as $k \to \infty$; instead, the iteration amplifies the noise level.
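A minimal 2-D sketch of the LR iteration above, using a stand-in blur kernel; the flat initialization and the `eps` guard against division by zero are implementation choices, not part of the derivation.

```python
import numpy as np
from scipy.signal import convolve2d

def lucy_richardson(g, H, n_iter=20, eps=1e-12):
    """Plain LR iteration: f^{k+1} = f^k · [H^T ⊛ (g / (H ⊛ f^k))]."""
    f = np.full_like(g, g.mean())            # flat nonnegative start
    Ht = H[::-1, ::-1]                       # flipped kernel acts as H^T
    for _ in range(n_iter):
        est = convolve2d(f, H, mode="same")  # H ⊛ f^k
        ratio = g / np.maximum(est, eps)     # g / (H ⊛ f^k)
        f = f * convolve2d(ratio, Ht, mode="same")
    return f

# Degrade a noise-free point scene, then deconvolve it back
truth = np.zeros((32, 32)); truth[16, 16] = 1.0
H = np.outer(np.hanning(7), np.hanning(7)); H /= H.sum()
g = convolve2d(truth, H, mode="same")
f_hat = lucy_richardson(g, H, n_iter=50)
```

On noise-free data the iterate progressively re-concentrates the blurred point; with noisy data the same update amplifies the noise, which motivates the regularization below.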
We can use this algorithm to enhance radar image. The LR algorithm is not just an inversion of the degraded image but the optimal solution of a step-by-step iteration. To prevent the amplification of noise during iterations, we introduce a regularization term as a constraint condition into the LR algorithm. When processing in the image domain, the TV regularization method can maintain the boundary better and suppress the noise amplification at the same time [52]. In this paper, the regularization processing is done during the iteration expression of the LR algorithm and rational constraint condition is constructed by priori knowledge of the degraded image. The following part is the core of the proposed method.
The cost function of TV regularization can be written as:

$$E = E_f + E_{reg} = H^T \ast \left[1 - \frac{g}{H \ast f}\right] + \gamma\,\Phi(f)$$

where $E_f = H^T \ast [1 - g/(H \ast f)]$ is the fidelity term that reflects how well the enhanced image approximates the degraded image, $E_{reg} = \gamma\,\Phi(f)$ is the regularization term, $\Phi(f) = \|\nabla f\|$ is the regularization function, $\nabla$ denotes the gradient of an image and $\gamma$ denotes the regularization coefficient that adjusts the proportion between the fidelity and regularization terms. Suppose the $k$-th iteration has been carried out; when the TV regularization is applied to the LR algorithm, the iteration expression becomes:

$$f^{k+1} = \frac{\left[H^T \ast \dfrac{g}{f^k \ast H}\right] \cdot f^k}{1 - \gamma\,\mathrm{div}\!\left(\dfrac{\nabla f^k}{\|\nabla f^k\|}\right)}$$

where $\|\cdot\|$ denotes the 2-norm. When noise $n$ exists, the iterative expression can be expressed as:

$$f^{k+1} = \frac{\left[H^T \ast \dfrac{g + n}{f^k \ast H}\right] \cdot f^k}{1 - \gamma\,\mathrm{div}\!\left(\dfrac{\nabla f^k}{\|\nabla f^k\|}\right)}$$

The curvature term in the denominator is:

$$\mathrm{div}\!\left(\frac{\nabla f}{\|\nabla f\|}\right) = \frac{f_{xx}f_y^2 - 2f_x f_y f_{xy} + f_{yy}f_x^2}{\big(f_x^2 + f_y^2\big)^{3/2}}$$

where $\nabla f$ denotes the gradient of $f$, $\mathrm{div}(\cdot)$ denotes the divergence operator, $f_x, f_y$ denote the first-order differences and $f_{xx}, f_{yy}, f_{xy}$ denote the second-order differences. We name the proposed method the LRTV algorithm; Table 1 shows the details of the LRTV algorithm.
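The LRTV iteration can be sketched as below. The discrete divergence scheme (forward/backward differences with wrap-around), the kernel and the parameter values are illustrative choices; Table 1 in the paper defines the authoritative procedure.

```python
import numpy as np
from scipy.signal import convolve2d

def div_of_normalized_grad(f, eps=1e-8):
    """div(∇f / |∇f|) via forward differences and their adjoint."""
    fx = np.roll(f, -1, axis=1) - f          # forward difference along x
    fy = np.roll(f, -1, axis=0) - f          # forward difference along y
    norm = np.sqrt(fx**2 + fy**2) + eps      # |∇f|, guarded against 0
    px, py = fx / norm, fy / norm
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def lrtv(g, H, gamma=0.002, n_iter=30, eps=1e-12):
    """LRTV sketch: f^{k+1} = [H^T ⊛ (g/(f^k ⊛ H))] · f^k / (1 − γ·div(∇f^k/‖∇f^k‖))."""
    f = np.full_like(g, max(g.mean(), eps))
    Ht = H[::-1, ::-1]
    for _ in range(n_iter):
        est = convolve2d(f, H, mode="same")
        lr = convolve2d(g / np.maximum(est, eps), Ht, mode="same")
        f = f * lr / (1.0 - gamma * div_of_normalized_grad(f))
    return f

# Same toy scene as before: blurred point, then LRTV deconvolution
truth = np.zeros((32, 32)); truth[16, 16] = 1.0
H = np.outer(np.hanning(7), np.hanning(7)); H /= H.sum()
g = convolve2d(truth, H, mode="same")
f_hat = lrtv(g, H)
```

Since the normalized gradient components are bounded by 1, the divergence lies in $[-4, 4]$, so a small $\gamma$ keeps the denominator positive and the update stable.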

3.3. Mechanism of Algorithm Evaluation

We can visually judge an algorithm from the enhanced imaging result and the degraded image. In this section, we discuss the signal-to-noise ratio (SNR) of the image, the termination condition of the iteration, the mutual information, the structural similarity (SSIM) and the relative error (RE) between the enhanced and degraded images [53]. We use the image SNR to show how the enhanced imaging results change with the LRTV algorithm and several similar algorithms. We also draw a curve of the iteration termination condition, based on the difference between the latest enhancement result and the previous one, to illustrate the convergence efficiency. The mutual information illustrates how the LRTV algorithm reduces the loss of image information relative to the other algorithms and the SSIM gives a quantitative measure, for each algorithm, of the similarity between the enhanced and degraded images. The RE demonstrates the precision of the similar methods at different noise levels.
Let $g$ denote the degraded image, $\bar{g}$ the mean of $g$ and $f$ the enhanced image. The SNR of an image is defined as follows:

$$\mathrm{image\_SNR} = 20\log_{10}\frac{\|g - \bar{g}\|_2}{\|g - f\|_2}$$

The termination condition of the iteration is defined as follows:

$$\mathrm{Stop\_iteration} = \frac{\|g - f\|_2}{\|g\|_2}$$

The RE is defined as follows:

$$RE = \frac{\|g - f\|_2}{\|f\|_2}$$
We use the mutual information of images to evaluate the algorithms with an exact numerical value. Mutual information represents the correlation between two variables; the mutual information $I(X, Y)$ between $X$ and $Y$ can be written as:

$$I(X, Y) = H_u(X) + H_u(Y) - H_u(X, Y)$$

where $H_u(X)$ denotes the entropy of $X$, $H_u(Y)$ denotes the entropy of $Y$ and $H_u(X, Y)$ denotes the joint information entropy. In information theory, the more similar the two variables, the greater the value of mutual information. The value of an image's SSIM lies between −1 and 1, where 1 means fully identical to the degraded image; the closer the SSIM is to 1, the more similar the two images are. The $SSIM(X, Y)$ between $X$ and $Y$ can be written as:

$$SSIM(X, Y) = \frac{(2\mu_X\mu_Y)(2\sigma_{XY})}{(\mu_X^2 + \mu_Y^2)(\sigma_X^2 + \sigma_Y^2)}$$

where $\mu_X$ denotes the mean of $X$, $\mu_Y$ the mean of $Y$, $\sigma_{XY}$ the covariance between $X$ and $Y$, $\sigma_X$ and $\sigma_Y$ the standard deviations of $X$ and $Y$ and $\sigma_X^2$ and $\sigma_Y^2$ their variances. A larger SSIM indicates that the algorithm better retains the structural information of the target in the imaging scene.
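The four evaluation criteria can be sketched as follows. The histogram-based entropy estimate in the mutual-information routine (including the bin count) is an illustrative implementation choice not specified in the text, and this SSIM is the constant-free form used above (the standard SSIM adds small stabilizing constants).

```python
import numpy as np

def image_snr(g, f):
    """image_SNR = 20·log10(‖g − ḡ‖₂ / ‖g − f‖₂)."""
    return 20 * np.log10(np.linalg.norm(g - g.mean()) / np.linalg.norm(g - f))

def relative_error(g, f):
    """RE = ‖g − f‖₂ / ‖f‖₂."""
    return np.linalg.norm(g - f) / np.linalg.norm(f)

def ssim_simple(X, Y):
    """Constant-free SSIM: (2μXμY)(2σXY) / ((μX²+μY²)(σX²+σY²))."""
    mx, my = X.mean(), Y.mean()
    sxy = np.mean((X - mx) * (Y - my))
    return (2 * mx * my) * (2 * sxy) / ((mx**2 + my**2) * (X.var() + Y.var()))

def mutual_information(X, Y, bins=32):
    """I(X,Y) = Hu(X) + Hu(Y) − Hu(X,Y), estimated from joint histograms."""
    hxy, _, _ = np.histogram2d(X.ravel(), Y.ravel(), bins=bins)
    pxy = hxy / hxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    ent = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return ent(px) + ent(py) - ent(pxy.ravel())
```

As expected, an image compared with itself gives SSIM 1, RE 0 and a larger mutual information than two unrelated images.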

4. Experiment

In this section, we carry out a simulation experiment and a real data measurement to evaluate the method proposed in this paper. In the simulation, the radar echo is obtained by computing the Radar Cross Section (RCS) of target. In the real data measurement, the radar echo is acquired by the laboratory’s MIMO radar system. In addition, four quantitative metrics are used.

4.1. Simulation Experiment

The RCS characterizes a target's ability to scatter electromagnetic waves. We calculate the target's RCS as the amplitude of the stepped-frequency signal and perform BP imaging after the radar echo is obtained. To represent the scattering strength and calculate the RCS of each part of the human body, we established an ellipsoid model as follows:

$$\left(\frac{\Theta_x - \Theta_{xc}}{l_a}\right)^2 + \left(\frac{\Theta_y - \Theta_{yc}}{l_b}\right)^2 + \left(\frac{\Theta_z - \Theta_{zc}}{l_c}\right)^2 = 1$$

where $(\Theta_{xc}, \Theta_{yc}, \Theta_{zc})$ denotes the center coordinates of a human body joint and $(l_a, l_b, l_c)$ denotes the semi-axial lengths along the $\Theta_x$-, $\Theta_y$- and $\Theta_z$-axes. To simplify the ellipsoid model, let us set $l_a = l_b$. The approximate RCS of each part of the human body is given in [54]. Take the human torso as an example: the semi-axial length $l_c$ of the trunk ellipsoid along the $\Theta_z$-axis is obtained from the root node coordinate and the thoracic node coordinate, while the semi-axial length $l_a$ along the $\Theta_x$- and $\Theta_y$-axes is obtained from the equation of the ellipsoid volume. The ellipsoid parameters of all other human limbs are obtained similarly. The RCS of each body-part ellipsoid can be written as:

$$\sigma = \frac{4\pi l_a^4 l_c^2\big[(1 + \cos\theta_\beta\cos\theta)\cos(\phi - \phi_\beta) + \sin\theta_\beta\sin\theta\big]^2}{\big[l_a^2\big(\sin^2\theta + 2\sin\theta\sin\theta_\beta\cos(\phi - \phi_\beta) + \sin^2\theta_\beta\big) + l_c^2(\cos\theta_\beta + \cos\theta)^2\big]^2}$$

where $\theta_\beta$ and $\phi_\beta$ denote the incident angles and $\theta$ and $\phi$ denote the reflection angles. The diagram of the scattering model of the ellipsoid is shown in Figure 2.
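The ellipsoid RCS formula can be evaluated directly. As a sanity check, for axial monostatic incidence it reduces to $\sigma = \pi l_a^4 / l_c^2$, which for a sphere ($l_a = l_c = a$) gives the classical $\sigma = \pi a^2$; the function below is a straightforward transcription with all angles in radians.

```python
import numpy as np

def ellipsoid_rcs(la, lc, theta_b, phi_b, theta, phi):
    """Bistatic RCS of an ellipsoid of revolution (l_a = l_b).

    (theta_b, phi_b): incident angles; (theta, phi): reflection angles,
    both in radians.
    """
    num = 4 * np.pi * la**4 * lc**2 * (
        (1 + np.cos(theta_b) * np.cos(theta)) * np.cos(phi - phi_b)
        + np.sin(theta_b) * np.sin(theta))**2
    den = (la**2 * (np.sin(theta)**2
                    + 2 * np.sin(theta) * np.sin(theta_b) * np.cos(phi - phi_b)
                    + np.sin(theta_b)**2)
           + lc**2 * (np.cos(theta_b) + np.cos(theta))**2)**2
    return num / den
```

For the simulation, this function would be evaluated once per body-part ellipsoid for each transmit/receive geometry to weight the corresponding echo amplitude.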
We use the Kinect to obtain the coordinates of the key joints of the human target. The Kinect is a non-contact optical device with a natural user interface that allows interaction without keyboard or mouse and can capture the 3-D coordinates of the skeleton structure. In this paper, it is used to obtain, in real time, the 3-D coordinates of 25 joints of the human target in three postures: hands prolapse, shoulder abduction and hands held. In the experiment, the Kinect is placed 1.5 m in front of the human target, who is 1.75 m tall. Figure 3 shows the experimental scene, the 3-D coordinates obtained by the Kinect and the human ellipsoid model we established. Figure 4 shows the antenna array of the MIMO radar system; the spacing between array elements is 0.08 m. The system uses 10 transmitting antennas and 10 receiving antennas to generate 100 radar echo channels.
In this paper, the antenna array is designed to ensure the coherence of the observable channels and the consistency of the scattering property of each T/R channel pair. As shown in Figure 4b, the virtual array generated by this antenna layout approximately forms a dense area array, improving the spatial sampling capability while reducing the number of physical array elements [55].
Table 2 shows the radar system parameters. The simulation experiment and the real data measurement are carried out with the same parameters. We use the Wiener filtering, the Tikhonov algorithm, the TM algorithm, the LR algorithm, the LRMER algorithm, the LRTV algorithm and the CF weighted method to perform image enhancement. Additionally, Gaussian white noise is added to the signal such that the SNR of the signal equals 10 dB, and the regularization coefficient $\gamma$ is 0.2. The azimuth-height projections of the 3-D imaging results are analyzed.
Figure 5 shows the azimuth-height projections of the 3-D imaging results as heat maps, where red refers to highly reflected power and dark blue represents no reflection. Figure 5a,b show that the BP imaging result matches the human ellipsoid model. There are many grating lobes because the target extension widens the main lobe of each trajectory in BP imaging and raises the grating lobe level. Without changing the hardware, enhanced imaging can be used to control the grating lobes. Figure 5c shows that the Wiener filtering amplifies noise; the grating lobes are undesirable because they produce artifacts and contaminate the image. Figure 5d,e show that the Tikhonov algorithm and the TM algorithm both improve the recognizability of the imaging results but lose some human body parts in the radar images. Figure 5f,g show that the LR algorithm and the LRMER algorithm both suppress part of the grating lobes but cannot preserve the edges of the radar images and distort the human body. The LRTV algorithm achieves a much better recognition degree in Figure 5h,i: it not only makes the contour of the simulated human target clear but also distinguishes each part of the human body. Moreover, the denoising ability of the LRTV algorithm is better than that of the LR algorithm. Figure 5j shows the enhanced imaging result of the CF weighted method, in which weak target energy is annihilated.
Figure 6 shows the image SNR graphs and the termination curves. The blue line represents the LR algorithm, the black line the LRMER algorithm and the red line the LRTV algorithm. It can be concluded from Figure 6a–c that the image SNR is higher with the LRTV algorithm than with the LR and LRMER algorithms. Figure 6d–f illustrate that the LRTV algorithm stops after fewer iterations, that is, it converges faster and is more efficient than the others.
In Table 3, $Hu_{BP}$, $Hu_{WIENER}$, $Hu_{Tikhonov}$, $Hu_{TM}$, $Hu_{LR}$, $Hu_{LRMER}$ and $Hu_{LRTV}$ denote the image information entropy of the BP imaging result and of the enhanced imaging results obtained with the Wiener filtering, the Tikhonov algorithm, the TM algorithm, the LR algorithm, the LRMER algorithm and the LRTV algorithm, respectively; $I_{WIENER}$, $I_{Tikhonov}$, $I_{TM}$, $I_{LR}$, $I_{LRMER}$ and $I_{LRTV}$ denote the mutual information between each corresponding enhanced imaging result and the degraded image.
The quantitative analysis of the similar enhanced imaging algorithms in Table 3 shows that the entropy of the enhanced imaging result with the LRTV algorithm is lower than with the LRMER algorithm, the LR algorithm, the TM algorithm, the Tikhonov algorithm, the Wiener filtering and the BP imaging. The mutual information of the enhanced imaging result with the LRTV algorithm is higher than that of the other algorithms. These data illustrate that the proposed algorithm produces a result more similar to the degraded image and maintains more information during the enhancement procedure, that is, less information is lost. These results indicate that the LRTV algorithm outperforms the other similar algorithms.
In Table 4, $SSIM_{WIENER}$, $SSIM_{Tikhonov}$, $SSIM_{TM}$, $SSIM_{LR}$, $SSIM_{LRMER}$ and $SSIM_{LRTV}$ denote the SSIM between the degraded image and the enhanced imaging result obtained with the Wiener filtering, the Tikhonov algorithm, the TM algorithm, the LR algorithm, the LRMER algorithm and the LRTV algorithm, respectively.
Table 4 compares the SSIM values of the algorithms to appraise their performance. The SSIM of the enhanced imaging result with the LRTV algorithm is higher than with the other similar algorithms, which illustrates that the LRTV algorithm better retains the structural information of the target in the imaging scene, that is, a more complete imaging result is obtained.
Furthermore, another advantage of the proposed method over the similar methods is that the LRTV algorithm improves the precision of the image under different noise levels. Figure 7 shows the RE performance of the methods on the azimuth-height projections of the 3-D imaging results for "Hands prolapse" at different noise levels. The simulation results illustrate that the Wiener filtering, the Tikhonov algorithm, the TM algorithm, the LR algorithm and the LRMER algorithm have higher relative errors at all noise levels, while the LRTV algorithm is much more precise than these traditional algorithms.

4.2. Real Data Measurement Experiment

We carried out the real data measurement with a real MIMO radar. Figure 8 shows the MIMO radar system, which consists of a stepped-frequency transceiver, the antenna system and a signal processor. The radar transmits an ultra-wideband stepped-frequency continuous-wave signal; the other parameters are consistent with the simulation experiment.
The experiment is conducted in free space and noise is present. The human target is 1.75 m tall and stands 1.5 m in front of the radar system, posing in three postures, as shown in Figure 9.
We used the Wiener filtering, the Tikhonov algorithm, the TM algorithm, the LR algorithm, the LRTER algorithm, the LRTV algorithm and the CF weighted method to enhance the BP imaging results, and analyzed the azimuth-height slices of the 3-D imaging results.
Figure 10a,b shows the imaging result of the real MIMO radar system. Because the RCS of each part of the human body differs, the human body cannot be fully displayed (thighs, shanks, feet, etc.) in this experiment, but a basic outline and the main parts can be observed clearly. Figure 10c shows the enhanced imaging result of the Wiener filtering; this method produces artifacts and contaminates the image. Figure 10d,e show the enhanced imaging results of the Tikhonov algorithm and the TM algorithm, respectively. The results of these two algorithms have a low identification degree because they cannot remove the noise in the radar images well and the image edges are not smooth enough. Figure 10f,g,i show the enhanced imaging results of the LR algorithm, the LRTER algorithm and the LRTV algorithm, respectively. Comparing the three results shows that the LRTV algorithm better distinguishes the arms from the trunk and gives a clearer human contour. Figure 10h shows the result of the CF weighted method, which removes the grating lobes but annihilates some details. In conclusion, the proposed algorithm improves the imaging performance well compared to the other similar algorithms.
Figure 11 shows the curves of the image SNR and of the termination condition in the real data measurement. From Figure 11a–c, the image SNR of the result enhanced by the LRTV algorithm is higher than that of the other algorithms. Figure 11d–f illustrate that the curve of the LRTV algorithm stops decreasing after fewer iterations, that is, the LRTV algorithm converges faster, which is consistent with the simulation. Figure 12 shows the RE performance of the methods on the azimuth-height slices of the 3-D imaging results for "Hands prolapse" at different noise levels. The real data measurements illustrate that the Wiener filtering, the Tikhonov algorithm, the TM algorithm, the LR algorithm and the LRTER algorithm have higher relative errors at every noise level, while the LRTV algorithm is considerably more precise. This leads to a more stable solution of the deconvolution.
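The termination curves in Figure 11d–f track how much each iterate changes from one step to the next. A generic stopping rule of this kind can be sketched as follows; the tolerance and the toy update are illustrative, not the paper's exact criterion:

```python
import numpy as np

def run_until_converged(step, f0, tol=1e-4, max_iter=200):
    """Iterate f <- step(f) until the relative change between successive
    iterates drops below tol -- a generic termination condition."""
    f = np.asarray(f0, dtype=float)
    for k in range(1, max_iter + 1):
        f_next = step(f)
        if np.linalg.norm(f_next - f) <= tol * max(np.linalg.norm(f), 1e-12):
            return f_next, k
        f = f_next
    return f, max_iter

# toy contraction with fixed point 2.0, standing in for one enhancement update
f_final, n_iter = run_until_converged(lambda f: 0.5 * f + 1.0, np.array([10.0]))
```

A faster-contracting update reaches the tolerance in fewer iterations, which is what the earlier flattening of the LRTV termination curve indicates.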
Table 5 shows the entropy of the images produced by the enhanced imaging algorithms; the entropy of the result enhanced by the LRTV algorithm is lower than that of the other algorithms and of the BP imaging result. In addition, the mutual information is computed between the degraded image and the results enhanced by the Wiener filtering, the Tikhonov algorithm, the TM algorithm, the LR algorithm, the LRTER algorithm and the LRTV algorithm, respectively. The mutual information of the LRTV result is the highest, that is, the result of the LRTV algorithm is the most correlated with the BP imaging result. Consequently, the LRTV algorithm performs better.
Table 6 shows the SSIM of the images produced by the different algorithms. The SSIM of the result enhanced by the LRTV algorithm is higher than that of the other algorithms, which illustrates that the LRTV algorithm better retains the structural information of the target.

4.3. The Proposed Algorithm in Complicated Scenario

Radar imaging and enhanced imaging are performed on two human targets to verify the applicability of the proposed algorithm. All parameters of the simulation experiment remain unchanged. The simulation scene is set as follows: the MIMO radar is placed 1.5 m in front of the human targets, which are 1.60 m and 1.75 m tall; in the first case the two targets stand close to one another and in the second case they stand far from each other.
First, the radar imaging result with two human targets is obtained by the BP algorithm. Then, we enhance the radar images with only the LR algorithm and the LRTV algorithm to verify the applicability of the proposed algorithm.
Figure 13 shows that both the LR algorithm and the proposed algorithm can enhance the BP imaging result when the two human targets stand close together. Specifically, the proposed algorithm not only obtains a clearer human contour than the LR algorithm but also suppresses noise well. As a result, the proposed enhanced imaging algorithm for a single human target also applies to multiple people. Note that the BP imaging result exhibits azimuth ambiguity when the two human targets stand far from each other: owing to the limited angular resolution of the MIMO radar system, the imaging result is distorted and azimuth ambiguity appears.
The simulation experiment shows that the proposed algorithm can enhance the imaging result in a complicated scenario. Therefore, we believe that the proposed algorithm for enhancing radar images has good performance in real applications.

5. Conclusions

Radar imaging methods have attracted much attention in recent years. However, identification and imaging performance are limited in many applications. Many factors affect the imaging results and the later analysis of the target, such as the quality of the radar imaging algorithm, the arrangement of the antenna array, the hardware parameters and inevitable external conditions. In this paper, an enhanced imaging method based on the LR algorithm and TV regularization is proposed. Compared to similar conventional algorithms, the LRTV algorithm offers better visual identification because it effectively reduces the influence of the grating lobes and suppresses noise amplification. Meanwhile, the LRTV algorithm increases the image SNR more, converges faster, and achieves a higher SSIM (retaining the structural information of the target well) and higher accuracy than the other similar methods. In addition, the experimental results show that the LRTV algorithm loses less information after the enhanced imaging processing and better recovers complete human contours and clear human limbs. Finally, simulation experiments verify that the proposed algorithm can also enhance the imaging results in a complicated scenario.

Author Contributions

Each author contributed extensively to the preparation of this manuscript. D.Z.: Literature search, figures, study design, data collection, data analysis, writing; T.J.: Study design, data analysis; Y.D.: Study design, data collection; Y.S.: Study design, data collection; X.S.: Data analysis, writing. All authors participated in the discussion of the proposal and contributed to the analysis of the results.

Funding

This work was funded by the National Natural Science Foundation of China, grant numbers 61271441 and 61372161.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. (a) Diagram of Multiple-Input Multiple-Output (MIMO) radar system and geometric schematic diagram of back projection (BP) imaging; (b) Diagram of the stepped-frequency signal.
Figure 2. Scattering model of the ellipsoid.
Figure 3. (a) Experiment scene; (b) The 3-D coordinates obtained by Kinect; (c) The human ellipsoid model.
Figure 4. (a) Antenna array; (b) Virtual array elements.
Figure 5. (a) BP imaging result; (b) Registration figure for BP imaging; (c) Enhanced imaging result with Wiener filtering; (d) Enhanced imaging result with Tikhonov regularization; (e) Enhanced imaging result with Tikhonov-Miller (TM) regularization; (f) Enhanced imaging result with LR algorithm; (g) Enhanced imaging result with LRTER algorithm; (h) Enhanced imaging result with Coherence Factor (CF) weighted method; (i) Enhanced imaging result with LRTV algorithm; (j) Registration figure for LRTV algorithm.
Figure 6. (a) Image signal-to-noise ratio (SNR) curve of LR, LRTER and LRTV when “hands prolapse”; (b) Image SNR curve of LR, LRTER and LRTV when “shoulder abduction”; (c) Image SNR curve of LR, LRTER and LRTV when “hands held”; (d) Termination condition of iteration of LR, LRTER and LRTV when “hands prolapse”; (e) Termination condition of iteration of LR, LRTER and LRTV when “shoulder abduction”; (f) Termination condition of LR, LRTER and LRTV when “hands held”.
Figure 7. Relative error (RE) curve comparison under different noise levels.
Figure 8. (a) Structure of MIMO radar; (b) Stepped-frequency transceiver.
Figure 9. The scene of real data measurement.
Figure 10. (a) BP imaging result; (b) Registration figure for BP imaging; (c) Enhanced imaging result with Wiener filtering; (d) Enhanced imaging result with Tikhonov regularization; (e) Enhanced imaging result with TM regularization; (f) Enhanced imaging result with LR algorithm; (g) Enhanced imaging result with LRTER algorithm; (h) Enhanced imaging result with CF weighted method; (i) Enhanced imaging result with LRTV algorithm; (j) Registration figure for LRTV algorithm.
Figure 11. (a) Image SNR curve of LR, LRTER and LRTV when “hands prolapse”; (b) Image SNR curve of LR, LRTER and LRTV when “shoulder abduction”; (c) Image SNR curve of LR, LRTER and LRTV when “hands held”; (d) Termination condition of iteration of LR, LRTER and LRTV when “hands prolapse”; (e) Termination condition of iteration of LR, LRTER and LRTV when “shoulder abduction”; (f) Termination condition of LR, LRTER and LRTV when “hands held”.
Figure 12. RE curve comparison under different noise levels.
Figure 13. The enhanced imaging methods are used in complicated scenarios.
Table 1. The detailed implementation steps of the LRTV algorithm.

LRTV Algorithm
Input:  g — degraded image and iterative initial value
        H — PSF
        n — white Gaussian noise
        k — number of iterations
        γ — TV regularization coefficient
Output: f — solution of the enhanced iterative equation
Begin
Step 1.  Plug g and H into the degradation model of the imaging system.
Step 2.  Plug g = Hf + n into the probability density function of the Poisson distribution.
Step 3.  Take the logarithm of both sides of the result of Step 2.
Step 4.  Let E_o(f) denote the result of Step 3.
Step 5.  Let f_ML denote the maximum-likelihood estimate of f under E_o(f); find the extremum from f_ML = arg min E_o(f).
Step 6.  Compute the partial derivative of E_o(f) with respect to f to obtain the extremum.
Step 7.  Let E_f be the fidelity term, equal to the iterative expression obtained in Step 6.
Step 8.  Calculate the image curvature of the iterative initial value.
Step 9.  Construct the regularized cost function.
Step 10. Add the regularization term to the iterative expression of Step 7 to obtain the iterative expression
         f^(k+1) = { H^T [ g / (H f^k) ] } f^k / ( 1 − γ div( ∇f^k / |∇f^k| ) )
End
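The steps of Table 1 can be sketched end-to-end as follows. This is a minimal simulation under stated assumptions — circular convolution via FFT, a symmetric Gaussian PSF (so the adjoint blur equals the blur itself), and illustrative values for γ, the grid size and the iteration count — not the authors' implementation:

```python
import numpy as np

def conv2_circ(img, psf_ft):
    """Circular 2-D convolution with a precomputed PSF spectrum."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * psf_ft))

def tv_curvature(f, eps=1e-8):
    """Image curvature div(grad f / |grad f|) via forward/backward differences."""
    fx = np.roll(f, -1, axis=1) - f
    fy = np.roll(f, -1, axis=0) - f
    mag = np.sqrt(fx ** 2 + fy ** 2 + eps)
    nx, ny = fx / mag, fy / mag
    return (nx - np.roll(nx, 1, axis=1)) + (ny - np.roll(ny, 1, axis=0))

def lrtv(g, psf, gamma=1e-3, n_iter=80, eps=1e-12):
    """LR iteration with the TV term in the denominator (Table 1, Step 10)."""
    psf_ft = np.fft.fft2(np.fft.ifftshift(psf))  # PSF centred at the origin
    f = np.maximum(g, eps)                       # iterative initial value
    for _ in range(n_iter):
        ratio = g / np.maximum(conv2_circ(f, psf_ft), eps)
        # the PSF is symmetric, so the adjoint blur equals the blur itself
        f = f * conv2_circ(ratio, psf_ft)
        f = f / np.maximum(1.0 - gamma * tv_curvature(f), eps)
        f = np.maximum(f, 0.0)                   # guard against FFT round-off
    return f

# toy scene: three point scatterers blurred by a Gaussian PSF plus weak noise
n, sigma = 64, 1.5
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * sigma ** 2))
psf /= psf.sum()
truth = np.zeros((n, n))
truth[20, 20] = truth[20, 44] = truth[44, 32] = 1.0
g = conv2_circ(truth, np.fft.fft2(np.fft.ifftshift(psf)))
g = np.maximum(g + 1e-3 * np.random.default_rng(0).standard_normal((n, n)), 0.0)

f_hat = lrtv(g, psf)
```

On this toy scene the multiplicative LR update re-concentrates the blurred point responses while the TV term in the denominator damps noise amplification, mirroring the behavior reported for the LRTV algorithm.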
Table 2. Parameters of simulation experiment.

Parameter                 Value
Center frequency          1.96 GHz
Frequency step            4 MHz
Bandwidth                 600 MHz
Transmitting terminals    10
Receiving terminals       10
Table 3. Image information entropy and mutual information with different algorithms in simulation.

              Hands Prolapse   Shoulder Abduction   Hands Held
Hu_BP         3.2972           3.2540               3.2365
Hu_WIENER     3.1784           3.1413               2.9540
Hu_Tikhonov   3.0956           3.0741               2.9326
Hu_TM         2.9564           2.9113               2.8978
Hu_LR         2.8839           2.8774               2.6593
Hu_LRTER      2.8115           2.8029               2.5649
Hu_LRTV       2.7853           2.7412               2.4231
I_WIENER      0.3652           0.3149               0.3497
I_Tikhonov    0.3991           0.3628               0.3965
I_TM          0.4215           0.4119               0.4265
I_LR          0.4694           0.4690               0.4473
I_LRTER       0.4789           0.4857               0.4613
I_LRTV        0.4830           0.4984               0.4878
Table 4. Structural similarity (SSIM) with different algorithms in simulation.

                Hands Prolapse   Shoulder Abduction   Hands Held
SSIM_WIENER     0.5648           0.5479               0.5218
SSIM_Tikhonov   0.6258           0.6149               0.6356
SSIM_TM         0.6324           0.6415               0.6578
SSIM_LR         0.7845           0.7941               0.7459
SSIM_LRTER      0.8025           0.8147               0.7614
SSIM_LRTV       0.8546           0.8415               0.7889
Table 5. Image information entropy and mutual information with different algorithms in real data measurement.

              Hands Prolapse   Shoulder Abduction   Hands Held
Hu_BP         4.5023           4.4047               4.5085
Hu_WIENER     3.3146           3.9850               3.6108
Hu_Tikhonov   3.2956           3.9665               3.5471
Hu_TM         3.2549           3.9317               3.2996
Hu_LR         3.2472           3.9272               3.0033
Hu_LRTER      3.2069           3.8497               2.9413
Hu_LRTV       3.1689           3.6548               2.8794
I_WIENER      0.4789           0.5412               0.4895
I_Tikhonov    0.4978           0.5149               0.4978
I_TM          0.5211           0.8979               0.5217
I_LR          0.5735           0.6110               0.5712
I_LRTER       0.5793           0.6148               0.5959
I_LRTV        0.5858           0.6261               0.6583
Table 6. The SSIM with different algorithms in real data measurement.

                Hands Prolapse   Shoulder Abduction   Hands Held
SSIM_WIENER     0.4156           0.4529               0.4781
SSIM_Tikhonov   0.4914           0.4732               0.4963
SSIM_TM         0.5248           0.5489               0.5367
SSIM_LR         0.6514           0.6317               0.6849
SSIM_LRTER      0.7569           0.6954               0.6958
SSIM_LRTV       0.7694           0.7258               0.7157

Share and Cite

MDPI and ACS Style

Zhao, D.; Jin, T.; Dai, Y.; Song, Y.; Su, X. A Three-Dimensional Enhanced Imaging Method on Human Body for Ultra-Wideband Multiple-Input Multiple-Output Radar. Electronics 2018, 7, 101. https://doi.org/10.3390/electronics7070101
