Article

Adaptive Sparse Representation for Source Localization with Gain/Phase Errors

Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Sensors 2011, 11(5), 4780-4793; https://doi.org/10.3390/s110504780
Submission received: 14 February 2011 / Revised: 11 April 2011 / Accepted: 12 April 2011 / Published: 2 May 2011
(This article belongs to the Section Physical Sensors)

Abstract

Sparse representation (SR) algorithms can be applied to high-resolution direction of arrival (DOA) estimation. In addition, SR can effectively separate coherent signal sources, because its spectrum estimation rests on an optimization technique, such as L1 norm minimization, rather than on subspace orthogonality. However, in a practical source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Owing to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold, which degrades the estimation performance of SR. In this paper, an adaptive SR algorithm is proposed to improve robustness to the gain/phase error: the overcomplete basis is dynamically adjusted using multiple snapshots, and the sparse solution is adaptively acquired to match the actual scenario. Simulation results demonstrate the robustness of the proposed method to the gain/phase error.

1. Introduction

Direction of arrival (DOA) estimation has long been a useful method for signal detection in sonar, radar and communication applications [1,2]. Subspace-based methods such as minimum variance distortionless response (MVDR) and multiple signal classification (MUSIC) [3,4] require sufficient stationary snapshots to guarantee high-resolution estimation performance. These methods exploit the orthogonality between the signal and noise subspaces to achieve high-resolution spectrum estimation, and calibration techniques can be added to improve performance in the gain/phase error scenario [1,2]. However, even with appropriate calibration, subspace-based methods cannot deal with coherent signal sources, because the statistical properties, i.e., the subspace orthogonality, provide no useful information for separating coherent sources [3,4]. In addition, sufficient snapshots are often unavailable in fast-changing scenarios, which leads to inaccurate subspace estimates and thus degrades the DOA estimation. To address the problems of coherent sources and the requirement for sufficient stationary snapshots, the sparse representation (SR) method was proposed [5,6]. The key assumption is that the signal sources can be viewed as far-field point sources whose number is quite small compared with the whole spatial domain. When this assumption holds, the underlying spatial spectrum is sparse (i.e., has only a few nonzero elements), and the inverse problem can be solved under a sparsity constraint to approximate the actual sparse signal. SR has also been widely used in a variety of other problems, including image reconstruction [7,8], feature selection in machine learning [9], radar imaging [10,11], and penalized regression [12,13]. In its most basic form, SR attempts to find the sparsest signal α satisfying x = Φα, where Φ ∈ C^{m×n} is an overcomplete basis, i.e., m ≪ n, and x is the observation data.
Without the prior knowledge that α is sparse, the equation x = Φα is ill-posed and has many solutions. The additional information that α should be sparse allows one to eliminate this ill-posedness [14–16]. Solving the ill-posed problem with an exact sparsity constraint typically requires combinatorial optimization, which is intractable even for modest data sizes. A number of practical algorithms, such as convex optimization (including L1 norm minimization) [5] and iterative reweighted least squares [6], have been proposed to approximate the actual solution. However, in a real array scenario, an unknown gain/phase error between sensors is inevitable. In this case, a mismatch exists between the actual array manifold and the corresponding columns of the predefined basis, which degrades the DOA estimation performance [5]. It is therefore necessary to design an adaptive SR algorithm in which the overcomplete basis is dynamically adjusted to better fit the received data.

In this paper, we propose an adaptive SR algorithm that dynamically adjusts both the overcomplete basis and the sparse solution so that the solution better matches the actual scenario. The remainder of this paper is organized as follows. Section 2 describes the basic model of the received data. Section 3 presents the adaptive SR method for the gain/phase error scenario. Section 4 analyzes the performance and illustrates the robustness of adaptive SR on simulated data. Section 5 presents our concluding remarks.

2. Problem Description and Modeling

Source localization using sensor arrays is a problem with important practical applications, including radar, sonar and exploration seismology [1,2]. In many source localization applications, the physical dimensions of the sources are quite small, or the sources are far enough from the array sensors, that they can be viewed as far-field point sources. Although non-uniform and nonlinear configurations such as conformal array sensors (CFA) have certain advantages over the uniform linear array (ULA), for simplicity the discussion in this paper is restricted to the widely-used ULA deployment. The signal model received by the ULA is given next.

2.1. Signal Model

As shown in Figure 1, the array geometry is assumed to be a ULA with N sensors, labeled as xi(t), 1 ≤ i ≤ N, where t and i indicate the snapshot and sensor indexes, respectively. The inter-sensor spacing is d, the radar wavelength is λ, and the incoming far-field point sources are sk(t), 1 ≤ k ≤ K, where K indicates the number of sources, which is less than the number of sensors.

Starting with the ideal model with no gain/phase error, we have x = Ψs + n, where x(N × 1) represents one received snapshot, n(N × 1) is the noise vector, s(K × 1) is the source vector, and the matrix Ψ(N × K) collects the steering vectors of the actual sources, i.e., the array manifold [1,2]:

$$\Psi = \begin{bmatrix} \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_1\cdot 0\big) & \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_2\cdot 0\big) & \cdots & \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_K\cdot 0\big) \\ \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_1\cdot 1\big) & \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_2\cdot 1\big) & \cdots & \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_K\cdot 1\big) \\ \vdots & \vdots & \ddots & \vdots \\ \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_1 (N{-}1)\big) & \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_2 (N{-}1)\big) & \cdots & \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_K (N{-}1)\big) \end{bmatrix},$$
where the vector:
$$s(\theta_k) = \Big[\exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_k\cdot 0\big),\ \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_k\cdot 1\big),\ \ldots,\ \exp\!\big(j2\pi\tfrac{d}{\lambda}\sin\theta_k (N{-}1)\big)\Big]^T,$$
indicates the steering vector of the actual source with angle θk. Once the actual array manifold Ψ is known, data fitting can be used to estimate the signal amplitudes of the actual sources [1,2]. However, in the actual radar array environment, the manifold is unavailable and needs to be estimated. To avoid this problem, we design an overcomplete basis containing all the candidate steering vectors; the spectrum estimation can then be implemented by solving an underdetermined equation instead of finding the actual array manifold. Discretize the angle axis into Ns = ρsN (ρs ≫ 1) grids so that φi = 2πi/Ns, 1 ≤ i ≤ Ns denotes the uniformly-discretized angles. Then the N × Ns overcomplete basis is given as [5]:
$$\Phi = [\,s(\phi_1),\ s(\phi_2),\ \cdots,\ s(\phi_{N_s})\,],$$
where s (ϕi) is the steering vector corresponding to angle ϕi. Then the snapshot x can be rewritten in matrix form as:
x = Φ α + n ,
where α(Ns × 1) represents the actual spectral distribution. The actual array manifold Ψ corresponds to the steering vectors of the significant elements in α and, ideally, is a subset of the overcomplete basis Φ. Therefore, finding the actual array manifold is equivalent to selecting the corresponding columns from the overcomplete basis. Because Ns > N, the underdetermined problem of solving for α in (4) is generally ill-posed. Prior works have shown that with the additional information that the spatial spectrum, i.e., the solution α, is sparse, this ill-posedness can be effectively removed [5,6]. Solving problems involving exact sparsity typically requires combinatorial optimization, which is intractable even for modest data sizes; therefore, a number of approximations have been considered [14,15]. Next we give a brief synopsis of relevant ideas in sparse representation.
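To make the construction of the overcomplete basis concrete, the sketch below builds ULA steering vectors and the basis Φ in Python/NumPy. The half-wavelength spacing and the angle grid over [−π/2, π/2] are illustrative assumptions, not values fixed by the text.

```python
import numpy as np

def steering_vector(theta, n_sensors, d_over_lambda=0.5):
    """ULA steering vector for source angle theta (radians):
    element i carries phase 2*pi*(d/lambda)*sin(theta)*i."""
    i = np.arange(n_sensors)
    return np.exp(2j * np.pi * d_over_lambda * np.sin(theta) * i)

def overcomplete_basis(n_sensors, n_grid, d_over_lambda=0.5):
    """N x Ns basis Phi whose columns are steering vectors on a uniform
    angle grid (here [-pi/2, pi/2]); n_grid should satisfy n_grid >> n_sensors."""
    angles = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
    Phi = np.stack(
        [steering_vector(a, n_sensors, d_over_lambda) for a in angles], axis=1)
    return Phi, angles
```

Each column has unit-modulus entries, so the columns all have the same L2 norm; only their phase progressions differ with angle.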

2.2. Sparse Representation

Recently, SR techniques have been shown to be effective for DOA estimation [5,6]. The SR technique, by its nature, can separate coherent sources because the spectrum estimation is based on an optimization technique rather than on subspace orthogonality. Moreover, when multiple stationary snapshots are available, further improvements in estimation performance are expected from the "joint-sparse" characteristic [5,6].

2.2.1. Single Snapshot Case

With the constraint of sparsity on α (only a small subset is nonzero), the problem in (4) can be efficiently solved by SR [5] as:

$$\hat{\alpha} = \arg\min \|\alpha\|_1 \quad \text{subject to} \quad \|x - \Phi\alpha\|_2 \leq \varepsilon,$$
where ‖·‖p stands for the Lp norm and ɛ is the error allowance in sparse representation. During the optimization, the L2 norm constraint with ɛ keeps the residual ‖x − Φα‖2 small, whereas the L1 norm enforces the sparsity of the estimated spectrum α. Strictly, the exact sparsity, i.e., the number of nonzero elements, is measured by the L0 norm; however, that optimization is NP-hard and unrealizable even for modest data sizes [14,15]. Unlike the L0 norm, L1 norm minimization can be efficiently implemented via convex optimization. A fundamental contribution of SR theory is to establish the equivalence between these two optimizations: it has been proven that SR implemented by L1 norm minimization approximates the actual solution as ‖α̂ − α0‖2 ≤ Λ·ɛ, where α0 is the actual sparse solution and Λ is a stability coefficient related to the maximal mutual coherence of the matrix Φ [15]. A detailed treatment of the L1 norm characteristic is given in [16]. Therefore, SR is capable of high-resolution estimation. Furthermore, when several stationary snapshots are available, we can combine them to improve the estimation performance.
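In practice the constrained problem in (5) is handed to a convex solver; as a lightweight stand-in, one can minimize the unconstrained Lagrangian form with iterative soft-thresholding (ISTA). The sketch below is such a substitute, not the solver used in the paper; the weight `lam` plays the role of the allowance ɛ.

```python
import numpy as np

def ista(Phi, x, lam=0.1, n_iter=500):
    """Minimize 0.5*||x - Phi a||_2^2 + lam*||a||_1 by iterative
    soft-thresholding, a Lagrangian surrogate for the constrained (5)."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1 / Lipschitz constant
    a = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = a - step * (Phi.conj().T @ (Phi @ a - x))  # gradient step
        mag = np.abs(g)
        shrink = np.maximum(mag - step * lam, 0.0)     # complex soft-threshold
        a = np.where(mag > 0, g / np.maximum(mag, 1e-12) * shrink, 0.0)
    return a
```

With noiseless data whose true steering vector lies exactly on the grid, the largest entry of the recovered spectrum lands on the true grid index, illustrating the high-resolution property discussed above.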

2.2.2. Multiple Snapshots Case

When multiple measurements are available, the data model is extended as:

X = Φ S + N ,
where X = [x(1),⋯,x(L)] contains the multiple snapshots, and N = [n(1),⋯,n(L)] and S = [α(1),⋯,α(L)] are the corresponding noise and spectrum matrices, respectively. The rows of S indicate the spatial dimension and the columns the temporal dimension. One natural approach using multiple snapshots is to exploit the joint sparse representation characteristic, which assumes that the positions of the significant sources remain identical across snapshots and only their amplitudes vary. Chen et al. proposed the mixed L1,2 norm minimization to implement this joint optimization [17–19].

The mixed L1,2 norm is defined on the solution matrix S as $\|S\|_{1,2} = \sum_{i} \sqrt{\tfrac{1}{L}\sum_{j=1}^{L} |S_{i,j}|^2}$, i.e., an L2 norm across the L snapshots followed by an L1 sum across the spatial dimension.

Based on this definition, the L1,2 norm minimization combines the multiple snapshots using the L2 norm, while sparsity is enforced only in the spatial dimension via the L1 norm. The solution matrix S is therefore parameterized both temporally and spatially, but the sparse constraint is applied only in the spatial dimension, because the signal is generally not sparse in the temporal domain. However, this joint optimization is quite complicated and carries a heavy computational load: as the number of snapshots L increases, the required computational effort grows superlinearly. Therefore, when the number of snapshots is large, this approach is not practical for real-time source localization.
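The mixed norm itself is a one-liner; the sketch below follows the description above (L2 across snapshots, L1 across space) with the 1/L scaling kept from the definition. The function name is mine.

```python
import numpy as np

def mixed_l12_norm(S):
    """L2 norm of each spatial row across the L snapshot columns,
    then an L1 sum over the rows (the joint-sparse penalty)."""
    L = S.shape[1]
    row_l2 = np.sqrt(np.sum(np.abs(S) ** 2, axis=1) / L)
    return float(np.sum(row_l2))
```

Note that concentrating energy in few rows yields a smaller value than spreading the same energy over many rows, which is exactly why the penalty promotes row-sparsity.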

A. Noncoherent Average

To decrease the computation load, a simple method is to separate the joint problem in (6) into a series of independent subproblems [5] as:

$$x(l) = \Phi\,\alpha(l) + n(l), \quad 1 \leq l \leq L.$$

Each subproblem can be solved via the L1 norm minimization in (5) to obtain a sparse spectrum estimate. The average of these estimated spectrums α̂(l), 1 ≤ l ≤ L is then taken as:

$$\hat{\alpha} = \frac{1}{L} \sum_{l=1}^{L} \big|\hat{\alpha}(l)\big|.$$

This method implements a noncoherent average, and its main attraction is simplicity. However, by turning to fully coherent combined processing, as described in the following sections, we expect to achieve greater accuracy and robustness to noise.

B. L1-SVD

A typical coherent sparse representation algorithm using multiple snapshots is the ℓ1-SVD method [5,20]. It implements the sparse estimation only in the signal subspace; thus, the robustness to noise is improved and the computational load of the optimization is quite low. In its basic form, the received data is decomposed into the signal and noise subspaces using the singular value decomposition (SVD) of the N × L data matrix X. The spectrum estimation is then modeled with reduced dimension in the signal subspace only. Mathematically, take the SVD of the data matrix as:

$$X = U \Sigma V^H,$$
where the diagonal entries of Σ are the singular values of X, and the columns of U and V are the left- and right-singular vectors, respectively. Suppose the number of actual sources is K (K < L); then the reduced-dimension N × K matrix denoting the signal subspace is XSV = UΣDK = XVDK, where DK = [IK, 0]^T. Obtain SSV = SVDK and NSV = NVDK similarly; the data can then be modeled in the signal subspace as:
$$X_{SV} = \Phi S_{SV} + N_{SV}.$$

Then the L1 norm minimization can be implemented as in (5), but only in the signal subspace. In the ℓ1-SVD method, the noise level is reduced and the spectrum estimation is improved. In addition, the size of the joint optimization is reduced from N × L to N × K, so the computational load drops greatly. The simulation results in [5] show that ℓ1-SVD offers both lower computational load and greater robustness to noise. Therefore, in the simulation part of this paper, the ℓ1-SVD method is chosen as a performance reference.
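The dimension-reduction step of ℓ1-SVD amounts to a few lines of linear algebra. A minimal sketch (the function name is mine):

```python
import numpy as np

def l1svd_reduce(X, K):
    """Keep only the K-dimensional signal subspace of the N x L data
    matrix X: X_SV = X V D_K, which equals U_K * diag(s_1..s_K)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return U[:, :K] * s[:K]
```

The L1 norm minimization is then run on the N × K matrix returned here instead of the full N × L data, which is where the computational savings come from.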

However, some nonideal factors are inevitable in a practical radar array system, including gain/phase error, mutual coupling between sensors and so forth [1,2]. When these occur, the predefined overcomplete basis in SR can no longer express the actual array manifold, which degrades the spectrum estimation. Similar problems also appear in other spectrum estimation methods such as MVDR and MUSIC [3,4]. In this paper, we focus on the gain/phase error scenario and propose an effective method to adaptively calibrate the overcomplete basis so that the robustness of the spectrum estimation is improved. A similar treatment could be applied to the mutual coupling scenario, although the optimization procedure is more complicated. Without mutual coupling between sensors, the error matrix can be given as [21–23]:

$$\Gamma = \mathrm{diag}\big(\Delta a_1 e^{j\Delta\theta_1}, \cdots, \Delta a_N e^{j\Delta\theta_N}\big),$$
where ΔaiejΔθi indicates the gain/phase error at the ith sensor. In this scenario, the data model is correspondingly modified as:
$$x = \Gamma \Psi s + n = \Psi_m s + n,$$
where Ψm = ΓΨ denotes the actual array manifold with the gain/phase error. In SR, the overcomplete basis Φ is constructed without considering the gain/phase error, since the error matrix Γ is unknown in advance. A mismatch therefore exists between Ψm and the corresponding columns of the predefined basis Φ, and the estimation performance is degraded. For the ℓ1-SVD algorithm, the mismatch caused by the gain/phase error persists in the signal subspace, so degraded performance is likewise inevitable. To address this, an adaptive SR algorithm is proposed in this paper, which dynamically calibrates the overcomplete basis so that the sparse solution better fits the actual scenario.
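To see the size of this mismatch, one can generate a hypothetical error matrix with the statistics later used in the simulations (gain ∼ N(1, 10⁻³), phase uniform in ±2°; these values are taken from Section 4, the seed and angle are my own choices) and compare a perturbed manifold column with its ideal counterpart. The correlation stays close to, but strictly below, one:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
i = np.arange(N)
a_ideal = np.exp(2j * np.pi * 0.5 * np.sin(np.deg2rad(18.0)) * i)  # ideal column

# hypothetical gain/phase errors, drawn with the ranges used in Section 4
gains = 1.0 + np.sqrt(1e-3) * rng.standard_normal(N)
phases = np.deg2rad(rng.uniform(-2.0, 2.0, N))
Gamma = np.diag(gains * np.exp(1j * phases))

a_actual = Gamma @ a_ideal  # column of the perturbed manifold Gamma @ Psi
corr = np.abs(np.vdot(a_ideal, a_actual)) / (
    np.linalg.norm(a_ideal) * np.linalg.norm(a_actual))
# corr is slightly below 1: the predefined basis no longer matches exactly
```

Even this small loss of correlation is enough to produce spurious peaks in the L1-minimized spectrum, which is what motivates calibrating the basis.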

3. Adaptive Sparse Representation

The key feature of adaptive SR is the adaptive adjustment of the overcomplete basis. This process learns the uncertainty of the overcomplete basis, which is not available as prior knowledge but must be estimated from multiple snapshots. Prior works on basis learning optimize the whole overcomplete basis to better represent the data of multiple snapshots [24–26]. However, such an optimization involves a large number of variables, i.e., all the elements of the overcomplete basis, so the computational load is heavy. Furthermore, the optimization may deviate from the actual solution because no knowledge is imposed to guarantee the structure of the basis estimate. In this paper, when only gain/phase error is considered, the unknown error matrix Γ is diagonal [21–23]. The actual overcomplete basis then has a specific structure and can be decomposed into two parts: the predefined overcomplete basis Φ and the unknown error matrix Γ. Therefore, the estimation of the actual overcomplete basis can be carried out on the error matrix alone, where the number of variables to be solved is greatly reduced and the estimated basis is more robust. In the presence of gain/phase error, the spectrum estimation in SR is degraded, manifested as spurious peaks and missed weak sources. When this error is small or moderate, the positions of the estimated significant sources are still reliable [5,6]. Therefore, the steering vectors corresponding to the significant sources in the SR spectrum estimate can still serve as an effective approximation of the original array manifold Ψ. With the aid of the multiple received snapshots, the covariance matrix estimate is obtained as:

$$\hat{R} = \frac{1}{L} \sum_{l=1}^{L} x_l x_l^H,$$
where L is the number of snapshots and xl is the lth snapshot. The signal and noise subspaces can then be obtained using the eigenvalue decomposition (EVD) of the covariance matrix estimate:
$$\hat{R} = U \Lambda U^H,$$
where U = [u1,⋯,uN] are the eigenvectors corresponding to the eigenvalues λi, 1 ≤ i ≤ N. Suppose the number of actual sources is K and the eigenvalues λi are sorted in descending order; the signal subspace is then represented as Us = [u1,⋯,uK] and the noise subspace as Un = [uK+1,⋯,uN]. The signal subspace provides a range space of the actual array manifold ΓΨ, i.e., span{Us} = span{ΓΨ} [21]. Furthermore, from the orthogonality between the signal and noise subspaces, we have:
$$\mathrm{span}(\Gamma \Psi) \perp U_n.$$
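The covariance/EVD subspace split of (13) and (14) can be sketched as follows; the function and variable names are my own.

```python
import numpy as np

def estimate_subspaces(X, K):
    """Sample covariance (13) and its EVD (14); eigenvectors are sorted by
    descending eigenvalue, the first K spanning the signal subspace."""
    R = X @ X.conj().T / X.shape[1]
    evals, evecs = np.linalg.eigh(R)          # eigh returns ascending order
    U = evecs[:, np.argsort(evals)[::-1]]     # re-sort to descending
    return U[:, :K], U[:, K:]                 # Us (signal), Un (noise)
```

With low noise, the columns of the true (possibly perturbed) manifold are nearly orthogonal to the returned noise subspace, which is the property the calibration step exploits.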

Once reliable estimates of Ψ and Un are obtained, a reasonable estimate Γ̂ is given by minimizing:

$$\hat{\Gamma} = \arg\min_{\Gamma} \sum_{k=1}^{K} \big\| \hat{U}_n^H \Gamma \hat{\Psi}_k \big\|^2 = \arg\min_{\Gamma} \sum_{k=1}^{K} \hat{\Psi}_k^H \Gamma^H \hat{U}_n \hat{U}_n^H \Gamma \hat{\Psi}_k,$$
where Ψ̂k, 1 ≤ k ≤ K indicates the manifold estimate, i.e., the columns corresponding to the significant elements in the SR spectrum estimate. The noise subspace estimate Ûn is likewise obtained via the EVD. With these estimates, the error matrix estimate Γ̂ can be obtained from (16). Even though some weak sources might not be included in the manifold estimate, the subspace orthogonality still holds between the subspace of the significant sources and the noise subspace, so the solution of (16) still serves as an effective approximation of the error matrix. Although the above optimization is well-defined, it is rather complicated to implement directly. Next, a simplification is introduced. Define:
$$\Gamma \hat{\Psi}_k = a_k \delta,$$
where ak is a diagonal matrix given by:
$$a_k = \mathrm{diag}\{\hat{\Psi}_k\},$$
and δ is a vector given by:
$$\delta = [\Gamma_{11}, \Gamma_{22}, \cdots, \Gamma_{NN}]^T,$$
where Γij indicates the element in the ith row and jth column of Γ. The minimization in (16) can then be rewritten as:
$$\hat{\delta} = \arg\min_{\delta}\ \delta^H \Big\{ \sum_{k=1}^{K} a_k^H \hat{U}_n \hat{U}_n^H a_k \Big\} \delta.$$

We minimize (20) with respect to δ under the constraint δ^H w = 1, where w = [1,⋯,1]^T, which excludes the trivial all-zero solution. This constrained quadratic problem has the well-known closed-form solution:

δ = Q 1 w / ( w H Q 1 w ) ,
where the matrix Q is given as:
$$Q = \sum_{k=1}^{K} a_k^H \hat{U}_n \hat{U}_n^H a_k.$$

Then the error matrix estimate is given as Γ̂ = diag(δ). Unlike (16), the matrix Q can be calculated in advance, so the optimization in Equations (21) and (22) can be implemented directly. The detailed procedure of the adaptive SR algorithm is as follows:

  • Let n = 1 and set the initial error matrix as Γ̂(0) = I.

  • Calculate the covariance matrix estimate using (13) and obtain the noise subspace Un = [uK+1,⋯,uN] via the EVD in (14).

  • At the nth iteration, the sparse solution α̂(n) is estimated by the L1 norm minimization with the overcomplete basis Γ̂(n−1)Φ as:

    $$\hat{\alpha}^{(n)} = \arg\min \|\alpha\|_1 \quad \text{subject to} \quad \big\| x - \hat{\Gamma}^{(n-1)} \Phi \alpha \big\|_2 \leq \varepsilon,$$
    where ɛ represents a small matching allowance. This optimization can be solved effectively by convex optimization or other approximation algorithms [27]. If the solution has converged, i.e., ‖α̂(n) − α̂(n−1)‖ / ‖α̂(n)‖ ≤ ς, where ς is a small constant, end the iteration; otherwise, continue with steps 4–5.

  • Based on the current solution α̂(n), only the significant peaks (local maxima) are extracted from the spectrum estimate, and the manifold estimate is given as Ψ̂ = [s(φp1),⋯,s(φpK)], where K is the number of extracted peaks and pk denotes the corresponding column indexes.

  • Update the error matrix using the optimization in Equations (21) and (22). Then the (n+1)th iteration proceeds from steps 3–5.
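The closed-form update of Equations (21) and (22), used in the last step above, can be sketched compactly. The small diagonal loading below is my own numerical safeguard, not part of the derivation; it keeps the solve stable because Q becomes near-singular when the data are nearly noise-free.

```python
import numpy as np

def estimate_delta(Un_hat, Psi_hat, loading=1e-8):
    """Closed-form (21)-(22): delta = Q^{-1} w / (w^H Q^{-1} w),
    with Q = sum_k a_k^H Un Un^H a_k and a_k = diag(psi_k)."""
    N = Un_hat.shape[0]
    P = Un_hat @ Un_hat.conj().T               # projector onto noise subspace
    Q = np.zeros((N, N), dtype=complex)
    for k in range(Psi_hat.shape[1]):
        a_k = np.diag(Psi_hat[:, k])
        Q += a_k.conj().T @ P @ a_k
    # diagonal loading (my addition) to regularize a near-singular Q
    Q += loading * (np.trace(Q).real / N) * np.eye(N)
    w = np.ones(N)
    q = np.linalg.solve(Q, w)
    return q / (w.conj() @ q)                  # enforce w^H delta = 1
```

Given an exact noise subspace of a perturbed two-source manifold, the recovered vector matches the true diagonal of Γ (after the same w^H δ = 1 normalization) essentially to machine precision.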

In adaptive SR, the choice of K is quite important, because either adding spurious peaks or missing actual sources causes a subspace deviation that impacts the estimation of the error matrix. Although the value of K is generally unknown in a real array scenario, there are several effective methods for estimating it, such as the Akaike information criterion (AIC) and minimum description length (MDL) [28,29]. Therefore, even if K is unknown, an estimate of the signal subspace can still be obtained by extracting only the subspaces corresponding to the significant eigenvalues. The detailed process of estimating K is not discussed in this paper.

4. Simulation Result

4.1. Robustness to Gain/Phase Error

In our simulations, a ULA with N = 20 sensors is deployed. The inter-sensor spacing is half a wavelength and three far-field sources arriving from angles 0°, 18° and 27° are considered. The number of snapshots is L = 20, and the error matrix is Γ = diag(Δa1e^{jΔθ1},⋯,ΔaNe^{jΔθN}), where the gain error obeys Δai ∼ N(1, 10^{-3}) and the phase error Δθi is uniformly distributed in (−2°, 2°). Here, SR implements the L1 norm minimization on each snapshot separately and then averages the results to obtain the overall performance. The ℓ1-SVD algorithm is also included; it utilizes multiple snapshots and implements the L1 norm minimization only on the signal subspace [5]. Figure 2 shows the spectrum estimates of the different methods, where the arrows indicate the positions of the actual sources. SR yields a high-resolution spectrum, but it is not robust to the gain/phase error and contains spurious peaks. ℓ1-SVD reduces the impact of the gain/phase error to some extent, but its performance is limited because the gain/phase error remains in the signal subspace. The proposed adaptive SR estimates the error matrix and adjusts the overcomplete basis; it therefore matches the received snapshots better and achieves higher estimation accuracy.

Next, quantitative results illustrate the advantages of adaptive SR. All performance comparisons are based on 50 Monte Carlo runs. Figure 3 depicts the mean square error (MSE) of the DOA estimation against the number of snapshots, where only the peaks are extracted to evaluate the position accuracy.

Since SR deals with each snapshot separately, additional snapshots provide no obvious benefit. The performance of ℓ1-SVD does improve as snapshots are added; however, spurious peaks remain because the signal subspace still contains the gain/phase error. For adaptive SR, when the snapshots are insufficient, i.e., L ≤ 4, the manifold estimate Ψ̂ is inaccurate. In this case, adaptive SR cannot effectively express the range space of the actual sources, which results in a large MSE. However, its performance improves with additional snapshots and surpasses the other two methods once the number of snapshots is relatively sufficient (L ≥ 6).

Figure 4 depicts the amplitude MSE against the number of snapshots, where the amplitude is evaluated only at the actual source positions. Here, both adaptive SR and ℓ1-SVD achieve desirable amplitude estimation, better than SR. Because the estimation performance includes both position and amplitude accuracy, the overall evaluation should take both Figures 3 and 4 into account. In this sense, adaptive SR outperforms ℓ1-SVD in the gain/phase error scenario.

4.2. Coherent Sources

As stated above, compared with traditional SR methods such as ℓ1-SVD, adaptive SR significantly improves robustness to the gain/phase error. In addition, compared with subspace-based methods with calibration [1,21], adaptive SR can deal with coherent signal sources, because the final spectrum estimation is still based on L1 norm minimization rather than on subspace orthogonality. The following scenario demonstrates the coherent-source capability of adaptive SR in a setting where subspace-based methods with calibration are ineffective. The array parameters and the gain/phase error are identical to those in Section 4.1. Two far-field point sources are located at angles −38° and −32°, with a high correlation of ρ12 = 0.95. As a performance comparison, MVDR is used as the representative subspace-based method [1]. To improve robustness to the gain/phase error, array calibration is also employed; the detailed implementation of the calibration technique is given in [21,22]. As shown in Figure 5(a), when there are insufficient snapshots (L = 4) to capture the statistical properties, the estimate of the gain and phase error is inaccurate and the calibration performance is limited for both MVDR and adaptive SR. In this case, the overcomplete basis mismatches the actual array manifold and adaptive SR contains many spurious peaks. Adding more snapshots (L = 20) does help to estimate the gain/phase error matrix and calibrate the array sensors; however, the improved statistics do not improve the estimation performance of MVDR. On the other hand, when adaptive SR has sufficient snapshots to estimate the error matrix, the spectrum estimation is carried out with a better-matched overcomplete basis. The estimation performance is then improved by the L1 norm minimization, and the coherent sources can be effectively separated.
Therefore, even with effective calibration, MVDR still cannot distinguish the coherent sources, whereas adaptive SR can, because its final spectrum estimation is based on L1 norm minimization rather than on subspace orthogonality.

5. Conclusions

This paper focuses on improving the robustness of sparse representation for DOA estimation with gain/phase errors. By dynamically calibrating the overcomplete basis and adaptively estimating the sparse solution, the proposed adaptive SR greatly improves the estimation robustness, so that the solution better matches the actual scenario. Additionally, it separates coherent sources, which is unrealizable for subspace-based methods with calibration. Several directions remain for further research: first, the current signal model in SR considers only far-field point sources, whereas near-field source localization is also important in practice. Second, the convergence of adaptive SR needs to be established rigorously. Finally, further adaptive mechanisms should be added to handle the mutual coupling scenario.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (No. 40901157) and in part by the National Basic Research Program of China (973 Program, No. 2010CB731901).

References

  1. Krim, H; Viberg, M. Two decades of array signal processing research: The parametric approach. IEEE Signal Process. Mag 1996, 13, 67–94. [Google Scholar]
  2. Johnson, DH; Dudgeon, DE. Array Signal Processing-Concepts and Techniques; Prentice Hall: Upper Saddle River, NJ, USA, 1993; pp. 154–196. [Google Scholar]
  3. Schmidt, RO. A Signal Subspace Approach to Multiple Emitter Location and Spectral Estimation. PhD Dissertation, Stanford University, Stanford, CA, USA, 1981. [Google Scholar]
  4. Capon, J. High resolution frequency-wavenumber spectrum analysis. Proc. IEEE 1969, 57, 1408–1418. [Google Scholar]
  5. Malioutov, D; Cetin, M; Willsky, AS. A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Process 2005, 53, 3010–3022. [Google Scholar]
  6. Gorodnitsky, IF; Rao, BD. Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm. IEEE Trans. Signal Process 1997, 45, 600–616. [Google Scholar]
  7. Jeffs, BD. Sparse Inverse Solution Methods for Signal and Image Processing Applications. IEEE International Conference on Acoustic, Speech and Signal Processing, Seattle, WA, USA, May 1998; pp. 1885–1888.
  8. Charbonnier, P; Blanc, L; Aubert, G; Barlaud, M. Deterministic edge-preserving regularization in computed imaging. IEEE Trans. Signal Process 1997, 6, 298–310. [Google Scholar]
  9. Bradley, PS; Mangasarian, OL; Street, WN. Feature selection via mathematical programming. INFORMS J. Comput 1998, 10, 209–217. [Google Scholar]
  10. Sardy, S; Tseng, P; Bruce, A. Robust wavelet denoising. IEEE Trans. Signal Process 2001, 49, 1146–1152. [Google Scholar]
  11. Çetin, M; Karl, WC. Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization. IEEE Trans. Image Process 2001, 10, 623–631. [Google Scholar]
  12. Tibshirani, R. Regression shrinkage and selection via the LASSO. J. Roy Statist. Soc. Ser. B 1996, 58, 267–288. [Google Scholar]
  13. Sacchi, MD; Ulrych, TJ; Walker, CJ. Interpolation and extrapolation using a high-resolution discrete fourier transform. IEEE Trans. Signal Process 1998, 46, 31–38. [Google Scholar]
  14. Donoho, DL; Elad, M; Temlyakov, VN. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory 2006, 52, 6–18. [Google Scholar]
  15. Candes, E; Romberg, J; Tao, T. Stable signal representation from incomplete and inaccurate measurements. Commun. Pure Appl. Math 2006, 59, 1207–1223. [Google Scholar]
  16. Tropp, JA. Just relax: Convex programming methods for subset selection and sparse approximation. IEEE Trans. Inform. Theory 2006, 52, 1030–1051. [Google Scholar]
  17. Cotter, S; Rao, B; Engan, K; Kreutz, K. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans. Signal Process 2005, 53, 2477–2488. [Google Scholar]
  18. Chen, J; Huo, X. Sparse Representations for Multiple Measurement Vectors (MMV) in an Over-Complete Dictionary. Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2005), Philadelphia, PA, USA, March 2005; pp. 257–260.
  19. van den Berg, E; Friedlander, MP. Joint-Sparse Representation from Multiple Measurements; Technical Report; Department of Computer Science, University of British Columbia: Vancouver, BC, Canada, 2009. [Google Scholar]
  20. Malioutov, DM; Cetin, M; Willsky, AS. Source Localization by Enforcing Sparsity through a Laplacian Prior: an SVD-Based Approach. Proceedings of IEEE Workshop on Statistical Signal Processing, MA, USA, 28 September–1 October, 2003; pp. 573–576.
  21. Viberg, M; Ottersten, B. Sensor array processing based on subspace fitting. IEEE Trans. Signal Process 1991, 39, 1110–1121. [Google Scholar]
  22. Wijnholds, SJ; van der Veen, A-J. Multisource self-calibration for sensor arrays. IEEE Trans. Signal Process 2009, 57, 3512–3522. [Google Scholar]
  23. Weiss, AJ; Friedlander, B. Eigenstructure methods for direction finding with sensor gain and phase uncertainties. Circ. Syst. Signal Process 1990, 9, 271–300. [Google Scholar]
  24. Yaghoobi, M; Blumensath, T; Davies, ME. Dictionary learning for sparse approximations with the majorization method. IEEE Trans. Signal Process 2009, 57, 2178–2191. [Google Scholar]
  25. Aharon, M; Elad, M; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process 2006, 54, 4311–4322. [Google Scholar]
  26. Wipf, DP; Rao, BD. Sparse Bayesian learning for basis selection. IEEE Trans. Signal Process 2004, 52, 2153–2164. [Google Scholar]
  27. Boyd, S; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  28. Wax, M; Kailath, T. Detection of signals by information theoretic criteria. IEEE Trans. Acoust. Speech Sign 1985, 33, 387–392. [Google Scholar]
  29. DeRidder, F; Pintelon, R; Schoukens, J; Gillikin, DP. Modified AIC and MDL model selection criteria for short data records. IEEE Trans. Instrum. Meas 2005, 54, 144–150. [Google Scholar]
Figure 1. An illustration of the array geometry of source localization.
Figure 2. Spectrum estimation result.
Figure 3. DOA MSE against the number of snapshots.
Figure 4. Amplitude MSE against the number of snapshots.
Figure 5. (a) Spectrum estimation result with L = 4 snapshots. (b) Spectrum estimation result with L = 20 snapshots.

Sun, K.; Liu, Y.; Meng, H.; Wang, X. Adaptive Sparse Representation for Source Localization with Gain/Phase Errors. Sensors 2011, 11, 4780-4793. https://doi.org/10.3390/s110504780
