Communication

On the Complementarity of Sparse L0 and CEL0 Regularized Loss Landscapes for DOA Estimation

Alice Delmer *, Anne Ferréol and Pascal Larzabal
1 Thales, 92230 Gennevilliers, France
2 SATIE, ENS Paris-Saclay, Université Paris-Saclay, CNRS, 91190 Gif-sur-Yvette, France
* Author to whom correspondence should be addressed.
Sensors 2021, 21(18), 6081; https://doi.org/10.3390/s21186081
Submission received: 21 July 2021 / Revised: 2 September 2021 / Accepted: 7 September 2021 / Published: 10 September 2021
(This article belongs to the Section Physical Sensors)

Abstract
L0 sparse methods are not yet widespread in Direction-Of-Arrival (DOA) estimation, despite their potential superiority over classical methods in difficult scenarios. This stems from the difficulty of global optimization on error surfaces riddled with local minima. In this paper, we explore the loss landscapes of L0 and Continuous Exact L0 (CEL0) regularized problems in order to design a new optimization scheme. As expected, we observe that the recently introduced CEL0 penalty leads to an error surface with fewer local minima than the L0 one. This property explains the good behavior of the CEL0-regularized sparse DOA estimation problem for well-separated sources. Unfortunately, the CEL0-regularized landscape enlarges the L0 basins located midway between close sources, and CEL0 methods are thus unable to resolve two close sources. Consequently, we propose to alternate between both error surfaces to increase the probability of reaching the global solution. Experiments show that the proposed approach offers better performance than existing ones, and in particular an enhanced resolution limit.

1. Introduction

The study of Direction-Of-Arrival (DOA) estimation has a long history in signal processing. Conventional methods [1] such as beamforming or Capon's method are still the subject of numerous works, e.g., [2]. However, their performance degrades in the presence of multiple close sources. Subspace-based methods such as MUltiple SIgnal Classification (MUSIC) have been introduced to improve the resolution limit for multiple sources. Unfortunately, these methods fail in the presence of correlated sources [3]. They also often require a priori knowledge of the number of sources and need a sufficient number of snapshots. Sparse DOA estimation has received much attention in the last decade due to its potential performance in such scenarios [4,5,6,7,8,9,10].
Sparsity arises naturally in DOA estimation when the field-of-view is discretized into numerous candidate angles of arrival on a grid. The aim is to estimate a vector $\gamma$ whose dimension covers the whole grid and in which only the very few entries corresponding to the source DOAs are non-zero.
The purpose of sparse estimation, under the Single Measurement Vector (SMV) framework, is to retrieve the sparse vector $\gamma \in \mathbb{C}^G$ from the noisy measurement $y = B\gamma + w$, with $y \in \mathbb{C}^{N^2}$, $N^2 \ll G$, knowing a dictionary $B$. The dictionary $B$ depends on the array's responses for the different candidate angles of arrival. The estimation can be formulated as the following regularized problem:

$$\min_{\gamma} \; J_0(\lambda, \gamma) = \frac{1}{2}\|B\gamma - y\|_2^2 + \lambda \|\gamma\|_0, \qquad (1)$$

where the so-called $\ell_0$-norm is defined as $\|\gamma\|_0 = \mathrm{Card}\{g \in \{1, \dots, G\} : \gamma_g \neq 0\}$, $\gamma_g$ being the $g$-th component of vector $\gamma$. The $\ell_0$-norm is the natural measure of sparsity: it counts the number of non-zero components of the vector. The regularization parameter $\lambda$ balances the relative importance of the data fidelity term $\frac{1}{2}\|B\gamma - y\|_2^2$ and of the $\ell_0$-norm enforcing sparsity of the solution. The sparse estimation problem can also be formulated as a constrained problem. The relationship between the two $\ell_0$-problems has been studied in [11]. Based on this study, recent theoretical results [12,13] have been provided for an off-line selection of $\lambda$ so that the $\ell_0$-problems are equivalent. The regularization parameter is chosen here in accordance with those results.
The $\ell_0$-minimization problem is known to be NP-hard: its resolution usually requires an exhaustive search. The use of the very recently proposed global optimization method [14] is limited to small-size problems and is thus unsuitable here because of its huge computational cost. Many suboptimal methods have therefore been proposed, such as the well-known Iterative Hard Thresholding (IHT) algorithm. IHT is a proximal gradient descent algorithm: starting from an initial point $\hat{\gamma}^{(0)}$, it iteratively produces estimates $\hat{\gamma}^{(i)}$ so that the cost function $J_0$ decreases. However, the $\ell_0$-regularized error surface $J_0$ exhibits numerous local minima, and convergence is only proved to a stationary point. Convex relaxation of (1) by the $\ell_1$-norm is also a popular alternative. However, the conditions [15] under which the sparse vector can be reliably recovered are usually too restrictive for practical applications such as DOA estimation.
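To make the descent mechanics concrete, here is a minimal numpy sketch of IHT (an illustration only, not a reference implementation; the step size and threshold follow the standard proximal gradient form, and all names are ours):

```python
import numpy as np

def iht(B, y, lam, n_iter=500):
    """Iterative Hard Thresholding on J_0 (minimal sketch).

    B: dictionary, y: observation, lam: regularization parameter of
    Equation (1). Convergence is only guaranteed to a stationary point.
    """
    beta = 1.0 / np.linalg.norm(B, 2) ** 2          # step size 1/L, with L = ||B||_2^2
    gamma = np.zeros(B.shape[1], dtype=complex)     # initial point gamma^(0)
    for _ in range(n_iter):
        z = gamma - beta * B.conj().T @ (B @ gamma - y)   # gradient step on the data term
        z[np.abs(z) < np.sqrt(2 * lam * beta)] = 0        # prox of lam*beta*||.||_0: hard threshold
        gamma = z
    return gamma
```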
More recently, minimization of a regularized criterion using nonsmooth, nonconvex but continuous penalties has drawn considerable attention [16], and it has been shown in many applications that it can yield significantly better performance than the $\ell_1$-norm [17]. Such penalties include $\ell_q$-norms ($0 < q < 1$), the Smoothly Clipped Absolute Deviation (SCAD), and the Minimax Concave Penalty (MCP) [18]. The Continuous Exact $\ell_0$ (CEL0) penalty [19] corresponds to the limit case of MCP for the SMV framework. CEL0 is shown to suppress some local minima of $J_0$ while preserving the global one. The CEL0-regularized cost function is:

$$J_{\mathrm{CEL0}}(\lambda, \gamma) = \frac{1}{2}\|B\gamma - y\|_2^2 + \Phi_{\mathrm{CEL0}}(\lambda, \gamma), \qquad (2)$$

with

$$\Phi_{\mathrm{CEL0}}(\lambda, \gamma) = \sum_{i \in I_G} \phi(\lambda, \alpha_i, \gamma_i), \qquad (3)$$

$$\phi(\lambda, \alpha_i, \gamma_i) = \lambda - \frac{\alpha_i^2}{2}\left(|\gamma_i| - \frac{\sqrt{2\lambda}}{\alpha_i}\right)^2 \mathbb{1}_{\left\{|\gamma_i| \le \frac{\sqrt{2\lambda}}{\alpha_i}\right\}}, \qquad (4)$$

and $\alpha_i = \|B_{\cdot,i}\|_2$, $B_{\cdot,i}$ being the $i$-th column of matrix $B$. $\mathbb{1}$ is the indicator function, whose value is one if the given condition is satisfied and zero otherwise. Despite its promising interest, we have shown in [12] that traditional suboptimal optimization schemes for the CEL0-regularized functional, such as Iterative Reweighted $\ell_1$ (IRL1) or Forward-Backward (FB), are unable to resolve close sources.
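For reference, the penalty can be evaluated by direct transcription of Equations (3) and (4); a minimal numpy sketch (our naming, under the assumption that $\alpha_i$ is the column norm as above):

```python
import numpy as np

def cel0_penalty(B, gamma, lam):
    """Evaluate Phi_CEL0(lam, gamma) of Equations (3) and (4) (sketch)."""
    alpha = np.linalg.norm(B, axis=0)          # alpha_i = ||B_{.,i}||_2
    mag = np.abs(gamma)
    thresh = np.sqrt(2.0 * lam) / alpha        # sqrt(2*lam) / alpha_i
    # phi_i = lam - (alpha_i^2 / 2)(|gamma_i| - thresh_i)^2 on [0, thresh_i], and lam elsewhere
    phi = lam - 0.5 * alpha**2 * (mag - thresh) ** 2 * (mag <= thresh)
    return phi.sum()
```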
The aim of this paper is to investigate the properties of the $J_0$ and $J_{\mathrm{CEL0}}$ loss surfaces in order to propose a sparse optimization strategy able to resolve close sources. The goal is to improve the resolution limit of both the MUSIC method, which is limited at low Signal-to-Noise Ratios, and existing sparse methods. The proposed approach follows an iterative scheme with a low computational cost. It shows good performance for close sources and does not require any particular initialization.
Outline of the paper: we first present the model and the sparse DOA estimation problem in Section 2. In Section 3, we compare the $J_0$ and $J_{\mathrm{CEL0}}$ loss surfaces in the context of multi-source DOA estimation: an in-depth analysis of the minimizers is provided for two close sources. Based on this analysis, Section 4 presents the proposed optimization scheme, whose originality is to alternate between both loss surfaces. Numerical simulations in Section 5 finally show the validity and advantages of our approach.
Notations: Upper-case and lower-case boldface letters denote matrices and vectors, respectively. $\overline{(\cdot)}$ denotes the conjugate, $(\cdot)^T$ the transpose, and $(\cdot)^H$ the conjugate transpose of a vector or matrix. $x_i$ is the $i$-th component of vector $x$, and $\omega_i$ the $i$-th element of the set $\omega$. Given a matrix $X$, its $i$-th column is denoted $X_{\cdot,i}$. Considering a matrix $X$ of dimension $N \times G$, $X_\omega$ is the submatrix of $X$ containing the columns indexed by the set $\omega \subseteq I_G$, where $I_G = \{1, \dots, G\}$ is the ordered index set. Similarly, $x_\omega$ is the subvector of $x$ defined as $x_\omega = [x_{\omega_1}, \dots, x_{\omega_{|\omega|}}]^T$, with $|\omega|$ the number of elements in $\omega$.

2. Sparse DOA Estimation Problem

2.1. On-Grid Array Signal Modeling

Consider $M$ far-field narrowband sources impinging on an array of $N$ antennas from angles $\tilde{\theta}_m$, $m = 1, \dots, M$. For a single snapshot at time $t$, the array output signal $x(t) \in \mathbb{C}^N$ is expressed as $x(t) = [x_1(t), \dots, x_N(t)]^T = \sum_{m=1}^{M} a(\tilde{\theta}_m)\,\tilde{s}_m(t) + n(t)$, where $a(\tilde{\theta}_m)$ is the steering vector (or array response) for the direction $\tilde{\theta}_m$, $\tilde{s}_m(t)$ the complex envelope of the signal of the $m$-th source, and $n(t) \in \mathbb{C}^N$ a white Gaussian noise vector of covariance $E[n(t)n^H(t)] = \sigma_n^2 I_N$, where $I_N$ is the $N \times N$ identity matrix. Let us suppose the directions of the sources belong to a predefined set $\Theta = \{\theta_1, \dots, \theta_G\}$ resulting from the discretization of the field-of-view, with $G \gg N$: for every arrival angle $\tilde{\theta}_m$, $m \in \{1, \dots, M\}$, there exists $g \in \{1, \dots, G\}$ such that $\tilde{\theta}_m = \theta_g$. This assumption is often met in operational systems, where a calibration table $A = [a(\theta_1), \dots, a(\theta_G)]$ containing the array responses for the angles in $\Theta$ is measured. With this calibration table, the measurement $x(t)$ can be expressed as:
$$x(t) = A s(t) + n(t), \qquad (5)$$

where $s(t) \in \mathbb{C}^G$ is sparse, with only $M$ non-zero entries corresponding to the source signals $\tilde{s}_m(t)$, and $M \ll G$.
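As an illustration of this on-grid model, a sketch generating $K$ snapshots under the stated assumptions (circular complex Gaussian source envelopes and noise; all names are ours):

```python
import numpy as np

def simulate_snapshots(A, src_idx, powers, sigma2, K, seed=0):
    """Draw K snapshots x(t) = A s(t) + n(t) for sources on the grid (sketch).

    A: N x G calibration table; src_idx: grid indices of the M sources;
    powers: source powers; sigma2: noise power sigma_n^2.
    """
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    M = len(src_idx)
    amp = np.sqrt(np.asarray(powers, dtype=float)[:, None] / 2.0)
    s = amp * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
    n = np.sqrt(sigma2 / 2.0) * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
    return A[:, src_idx] @ s + n        # N x K matrix of snapshots
```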
Under the assumption that the sources are uncorrelated, it is advantageous to work with the vectorized covariance matrix: it takes the contribution of multiple snapshots into account, thus increasing accuracy without increasing the computational cost by much. It also has the advantage of being an SMV model, so all associated methods can be used, and it additionally makes it possible to estimate more sources than sensors.

2.2. Vectorized Covariance Matrix Model

Considering uncorrelated sources, the covariance matrix $R_{xx} \triangleq E[x(t)x^H(t)]$ is given by:

$$R_{xx} = \sum_{m=1}^{M} a(\tilde{\theta}_m)\, a^H(\tilde{\theta}_m)\, \tilde{\gamma}_m + \sigma_n^2 I_N, \qquad (6)$$

with $\tilde{\gamma}_m$ the power of the $m$-th source and $I_N$ the identity matrix of dimension $N$. The vectorized covariance matrix, denoted $r = \mathrm{vec}(R_{xx})$, is the vector obtained by stacking the columns of $R_{xx}$. It can be expressed as

$$r = \sum_{m=1}^{M} b(\tilde{\theta}_m)\, \tilde{\gamma}_m + \sigma_n^2\, \mathrm{vec}(I_N), \qquad (7)$$

with $b = \overline{a} \otimes a$, where $\overline{a}$ is the conjugate of $a$ and $\otimes$ the Kronecker product. Considering a dictionary $B = [b(\theta_1), \dots, b(\theta_G)]$ computed from the calibration table $A$, we have $r = B\gamma + \sigma_n^2\, \mathrm{vec}(I_N)$. From $K$ finite samples, the covariance matrix is estimated by $\hat{R}_{xx} = \frac{1}{K}\sum_{k=1}^{K} x(t_k)x^H(t_k)$, and we denote by $\hat{r}$ the associated estimated vectorized covariance matrix. Let us suppose that the noise power is known. We consider the noisy observation vector $y \in \mathbb{C}^{N^2}$ computed as:

$$y = \hat{r} - \sigma_n^2\, \mathrm{vec}(I_N) = B\gamma + w, \qquad (8)$$

with $B \in \mathbb{C}^{N^2 \times G}$. The noise vector $w$ results from the estimation of $R_{xx}$ with a finite number of snapshots. The power vector $\gamma \in \mathbb{C}^G$ is sparse, and the indices of its non-zero components indicate the directions of the sources. The aim of sparse DOA estimation is to retrieve the indices of the non-zero components of vector $\gamma$ in Equation (8), through the resolution of the problem given by Equation (1).
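The construction of the dictionary $B$ and of the observation $y$ follows directly from the equations above; a sketch (using the identity $\mathrm{vec}(a a^H) = \overline{a} \otimes a$ and column-stacking $\mathrm{vec}(\cdot)$; names are ours):

```python
import numpy as np

def build_dictionary(A):
    """b(theta_g) = conj(a(theta_g)) kron a(theta_g), i.e., vec(a a^H) (sketch)."""
    N, G = A.shape
    return np.stack([np.kron(A[:, g].conj(), A[:, g]) for g in range(G)], axis=1)   # N^2 x G

def build_observation(X, sigma2):
    """y = vec(R_hat) - sigma_n^2 vec(I_N), Equation (8), from the N x K snapshots X (sketch)."""
    N, K = X.shape
    R_hat = X @ X.conj().T / K                            # sample covariance estimate
    return R_hat.flatten(order="F") - sigma2 * np.eye(N).flatten(order="F")   # vec(.) stacks columns
```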

3. Description and Numerical Investigations of the Minimizers of $J_0$ and $J_{\mathrm{CEL0}}$

It is known that $J_0$ exhibits numerous local minima, which complicates the minimization of the criterion. The initialization of iterative descent algorithms is particularly delicate, as also highlighted in the DOA estimation literature. In [12], we successfully used the CEL0 penalty for DOA estimation of well-separated sources. Those good results do not carry over to the case of close sources. It is now important to analyze more deeply the minimizers of the $\ell_0$- and CEL0-penalized problems in order to propose an optimization scheme able to resolve closer sources.

3.1. Simulation Setup

Although our approach is array- and scenario-independent, we illustrate it in this paper with the following setup. We consider a Uniform Circular Array (UCA) with $N = 7$ antennas and radius $d = \lambda_0/2$, where $\lambda_0$ is the wavelength. This array could allow for two-dimensional direction-of-arrival estimation, but we limit ourselves to azimuth estimation. The $-3$ dB beamwidth of this array is 40°. UCAs are well known for their $\theta$-invariant performance. We study the case of $M = 2$ incoming sources located at $\tilde{\theta}_1 = 32$° and a varying $\tilde{\theta}_2$. The number of snapshots is fixed to $K = 50$. In this part, the received signal is noiseless. The field-of-view is the range [0°, 360°] with a grid spacing of 0.5° ($G = 720$). The mutual coherence, i.e., the maximum absolute correlation between two columns of the dictionary, is in this case close to 1: $\ell_1$ methods are thus ineffective.
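The mutual coherence quoted above can be checked directly from the dictionary; a short sketch (names are ours):

```python
import numpy as np

def mutual_coherence(B):
    """Maximum absolute correlation between distinct (normalized) columns of B (sketch)."""
    Bn = B / np.linalg.norm(B, axis=0)       # unit-norm columns
    gram = np.abs(Bn.conj().T @ Bn)          # pairwise column correlations
    np.fill_diagonal(gram, 0.0)              # ignore self-correlation
    return gram.max()
```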

3.2. Minimizers of $J_0$

Let us define $I_G = \{1, \dots, G\}$ the ordered index set. For a given observation $y \in \mathbb{C}^{N^2}$ and a set $\omega \subseteq I_G$, we define the constrained problem $(C_\omega)$ as follows:

$$(C_\omega): \quad \min_{\gamma} \|B\gamma - y\|_2^2, \quad \text{s.t.} \quad \gamma_i = 0, \;\; \forall i \in I_G \setminus \omega,$$

where $\gamma_i$ is the $i$-th component of vector $\gamma$. Let us denote $\hat{\gamma}_\omega$ the subvector of $\hat{\gamma}$ composed only of the entries indexed by $\omega$: $\hat{\gamma}_\omega = [\hat{\gamma}_{\omega_1}, \dots, \hat{\gamma}_{\omega_{|\omega|}}]^T$, with $|\omega|$ the number of elements in $\omega$. The constraint ensures that the solutions $\hat{\gamma}$ are sparse vectors whose non-zero entries all have indices in $\omega$, i.e., all components whose indices are not in $\omega$ are null. We can then write $\hat{\gamma} = Z_p(\hat{\gamma}_\omega)$, where $Z_p$ is a zero-padding operator on $I_G$:

$$\hat{\gamma}_i = \begin{cases} 0 & \text{if } i \notin \omega, \\ \hat{\gamma}_{\omega_k} & \text{for the unique } k \text{ such that } \omega_k = i. \end{cases}$$

For any $\omega \subseteq I_G$, $\hat{\gamma} \in \mathbb{C}^G$ solves $(C_\omega)$ if and only if $\hat{\gamma}_\omega \in \mathbb{C}^{|\omega|}$ solves $B_\omega^H B_\omega x = B_\omega^H y$ and $\hat{\gamma} = Z_p(\hat{\gamma}_\omega)$, where $B_\omega$ is the submatrix of $B$ composed only of the columns whose indices are in $\omega$.

There are strong connections between the minimizers of the constrained problem $(C_\omega)$ and the minimizers of the regularized criterion $J_0(\lambda, \cdot)$ that we want to analyze. For $y \in \mathbb{C}^{N^2}$ and a given set $\omega \subseteq I_G$, let $\hat{\gamma}$ solve problem $(C_\omega)$. Then, for any $\lambda$, $J_0(\lambda, \cdot)$ reaches a (local) minimum at $\hat{\gamma}$ (Proposition 2.3 [20]). Conversely, for $y \in \mathbb{C}^{N^2}$ and $\lambda > 0$, let $J_0(\lambda, \cdot)$ have a (local) minimum at $\hat{\gamma}$. Then $\hat{\gamma}$ solves $(C_{\hat{\omega}})$ for $\hat{\omega} = \mathrm{supp}(\hat{\gamma})$ (Lemma 2.4 [20]). Moreover, the (local) minimum that $J_0(\lambda, \cdot)$ has at $\hat{\gamma}$ is strict iff $\mathrm{rank}(B_{\hat{\omega}}) = |\hat{\omega}|$ (Theorem 3.2 [20]). There is thus a large number of local minima: the number of supports $\omega \subseteq I_G$ such that $\mathrm{rank}(B_\omega) = |\omega|$, which lead to strict local minima, is upper bounded by $\sum_{k=0}^{N^2} \binom{G}{k}$.
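In practice, a candidate (local) minimizer of $J_0$ for a given support is obtained by solving the normal equations above restricted to that support; a sketch (names are ours):

```python
import numpy as np

def solve_constrained(B, y, omega):
    """Solve (C_omega): least squares restricted to the support omega (sketch).

    Returns gamma_hat = Z_p(gamma_hat_omega), i.e., zero outside omega.
    """
    B_omega = B[:, omega]                                   # submatrix B_omega
    g_omega, *_ = np.linalg.lstsq(B_omega, y, rcond=None)   # solves B_omega^H B_omega x = B_omega^H y
    gamma = np.zeros(B.shape[1], dtype=complex)
    gamma[omega] = g_omega                                  # zero-padding operator Z_p
    return gamma
```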
In [20], it is shown that, under mild conditions, $J_0(\lambda, \cdot)$ has a unique strict global minimizer. It is common knowledge that the optimal solution of the regularized problem given by Equation (1) depends on the regularization parameter $\lambda$, which balances the relative importance of data fidelity and sparsity. In most papers, this parameter is empirically tuned. In previous works [12,13], we proposed a theoretical analysis for an off-line selection of $\lambda$. In the sequel of this paper, $\lambda$ is selected in an appropriate interval $I$ as defined in [12,13].
Figure 1a–d represents projections of $J_0$ for the scenario described above, for $\tilde{\theta}_1 = 32$° and $\tilde{\theta}_2 = 62$° in the noiseless case. Iso-levels of $J_0$ are reported for a vector $\gamma$ having at most two non-zero components: $\gamma = Z_p(\gamma_\omega)$ with $|\omega| = 2$. Those components correspond to the fixed direction $\theta_{\omega_1} = \tilde{\theta}_1 = 32$° and a direction $\theta_{\omega_2}$ that changes across the figures. In (a), $\theta_{\omega_2} = 47$°, which corresponds to $\frac{1}{2}(\tilde{\theta}_1 + \tilde{\theta}_2)$; in (d), $\theta_{\omega_2} = \tilde{\theta}_2 = 62$°. In between, we set $\theta_{\omega_2} = 52$° and $57$°. The values of the two components $\gamma_{\omega_1}$ and $\gamma_{\omega_2}$, the only components allowed to be non-zero, vary along the two axes. $\lambda$ is fixed to 9.5, which belongs to the interval $I$. In each figure, we see four (local) minima: the local minimum at $\mathbf{0}$, local minima along the axes (i.e., one non-zero component), and those corresponding to exactly two non-zero components. The global minimum (black filled circle) is located in Figure 1d at $\gamma_{\omega_1} = \gamma_{\omega_2} = 7$, and its value is $2\lambda = 19$.
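Projections of this kind can be reproduced by evaluating $J_0$ on a two-component support; a sketch (grid indices `i1`, `i2` stand for the directions $\theta_{\omega_1}$, $\theta_{\omega_2}$; names are ours):

```python
import numpy as np

def j0_slice(B, y, lam, i1, i2, amp_grid):
    """Evaluate J_0 over a 2-D slice gamma = Z_p([g1, g2]) on the support {i1, i2} (sketch)."""
    J = np.empty((len(amp_grid), len(amp_grid)))
    for a, g1 in enumerate(amp_grid):
        for b, g2 in enumerate(amp_grid):
            resid = B[:, i1] * g1 + B[:, i2] * g2 - y     # B gamma - y on this slice
            l0 = int(g1 != 0) + int(g2 != 0)              # ||gamma||_0 on this slice
            J[a, b] = 0.5 * np.linalg.norm(resid) ** 2 + lam * l0
    return J
```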

3.3. Minimizers of $J_{\mathrm{CEL0}}$

The global minimizer of $J_0$ is preserved in $J_{\mathrm{CEL0}}$, but the number of local minima of $J_{\mathrm{CEL0}}$ may be smaller than that of $J_0$. In particular, a local minimum $\gamma$ of $J_{\mathrm{CEL0}}$ verifies $|\gamma_i| \in \{0\} \cup [\sqrt{2\lambda}, +\infty)$; hence, local minimizers of $J_0$ having at least one component $|\gamma_i| \in (0, \sqrt{2\lambda})$ are not local minimizers of $J_{\mathrm{CEL0}}$ [21]. Figure 1e–h represents the loss surfaces of $J_{\mathrm{CEL0}}$. We observe the suppression of local minima of $J_{\mathrm{CEL0}}$ for $\theta_{\omega_2}$ close to $\frac{1}{2}(\tilde{\theta}_1 + \tilde{\theta}_2) = 47$°, for which $0 < \gamma_{\omega_2} < \sqrt{2\lambda} = 4.36$. Some local minima of $J_0$ are also only critical points of $J_{\mathrm{CEL0}}$. Moreover, the local minimum that $J_0$ has at $\mathbf{0}$ is no longer one in $J_{\mathrm{CEL0}}$, which is particularly interesting for the initialization of iterative optimization algorithms.
However, those good properties come with a drawback: they lead to "flat" minima, i.e., large connected regions where the error remains approximately constant. This is illustrated in Figure 2, which compares the minimum of $J_0$ and $J_{\mathrm{CEL0}}$ as a function of $\theta_{\omega_1}$ and $\theta_{\omega_2}$ for two (or one) non-zero components. Numerous points of the CEL0 surface lie approximately at the level of the local minimum corresponding to $\theta_{\omega_1} = \theta_{\omega_2} = \frac{1}{2}(\tilde{\theta}_1 + \tilde{\theta}_2)$. This behavior appears for close sources, when this point is also close to the global minimum.

4. Alternating between Loss Surfaces

For well-separated sources, we have shown in [12] that the IRL1 algorithm used to minimize $J_{\mathrm{CEL0}}$ (IRL1-CEL0) gives better statistical results than IHT, at a lower computational cost. Indeed, IRL1-CEL0 benefits from the suppression of local minima in this case. Unfortunately, IRL1-CEL0 fails for close sources. This behavior is illustrated in Figure 3a, for true sources at 32° and 48°. Let us note that MUSIC fails for such close sources. The IRL1-CEL0 algorithm is rapidly attracted to a bad basin corresponding to a few non-zero components with directions in the middle of the true directions (lack of resolution). In this example, denoting $\hat{\gamma}$ the final estimated vector and $\hat{\omega}$ the set indicating the non-zero components, we verify that $B_{\hat{\omega}}^H B_{\hat{\omega}} \hat{\gamma}_{\hat{\omega}} = B_{\hat{\omega}}^H y$ and $\mathrm{rank}(B_{\hat{\omega}}) = |\hat{\omega}| = 5$: it is a strict local minimum of $J_0$. The IHT algorithm also remains stuck in a local minimum, this time with numerous non-zero components (Figure 3b) forming two clusters around the true directions. In order to avoid being attracted to a bad basin and to take advantage of both regularizations, we propose to alternate the minimization between them. Based on this heuristic, we propose the optimization scheme ALICE-L0 (Alternated Landscapes Iterations for Complementary Enhancement for $\ell_0$), detailed in Algorithm 1.
Algorithm 1. Optimization Scheme ALICE-L0 (Alternated Landscapes Iterations for Complementary Enhancement for $\ell_0$)
Input: dictionary $B$, observation $y$, $\beta = 1/L$ with $L$ the Lipschitz constant associated with $B$, decay factors $\tau_1$, $\tau_2$, stopping criteria $\epsilon_{lim}$, $\epsilon_{lim,1}$, $\epsilon_{lim,2}$, $n_{lim}$, $n_{lim,1}$, $n_{lim,2}$
Initialization: $\hat{\gamma}^{(0)} = \mathbf{0}$, $i = 0$, $i_{outer} = 0$
• $\epsilon^{(i)} = \|B\hat{\gamma}^{(i)} - y\|_2^2$
while $\frac{|\epsilon^{(i-1)} - \epsilon^{(i-2)}|}{\epsilon^{(i-1)}} > \epsilon_{lim}$ and $i_{outer} < n_{lim}$ do
    • $i_{outer} = i_{outer} + 1$
    • Compute the weighting vector $w$: $w_g = \left(\sqrt{2\lambda} - |\gamma_g^{(i)}|\right)\mathbb{1}_{\{|\gamma_g^{(i)}| \le \sqrt{2\lambda}\}}$, $g = 1, \dots, G$
    • $j = 1$, $T^{(1)} = 1$, $z^{(1)} = \hat{\gamma}^{(i)}$
    while $\frac{|\epsilon^{(i-1)} - \epsilon^{(i-2)}|}{\epsilon^{(i-1)}} > \epsilon_{lim,1}$ and $j < n_{lim,1}$ do (weighted FISTA iterations)
        • $j = j + 1$, $i = i + 1$
        • $\hat{\gamma}^{(i)} = \mathrm{prox}_{\|\cdot\|_{1,w},\, \lambda\beta}\!\left(z^{(j-1)} - \beta B^H (B z^{(j-1)} - y)\right)$
        • $T^{(j)} = \frac{1 + \sqrt{1 + (2 T^{(j-1)})^2}}{2}$
        • $z^{(j)} = \hat{\gamma}^{(i)} + \frac{T^{(j-1)} - 1}{T^{(j)}} \left(\hat{\gamma}^{(i)} - \hat{\gamma}^{(i-1)}\right)$
    end while
    • $k = 0$
    while $\frac{|\epsilon^{(i-1)} - \epsilon^{(i-2)}|}{\epsilon^{(i-1)}} > \epsilon_{lim,2}$ and $k < n_{lim,2}$ do (IHT iterations)
        • $k = k + 1$, $i = i + 1$
        • $\hat{\gamma}^{(i)} = \mathrm{prox}_{\|\cdot\|_0,\, \lambda\beta}\!\left(\hat{\gamma}^{(i-1)} - \beta B^H (B \hat{\gamma}^{(i-1)} - y)\right)$
    end while
    • $\epsilon_{lim,1} = \tau_1 \epsilon_{lim,1}$, $\epsilon_{lim,2} = \tau_2 \epsilon_{lim,2}$
end while
return $\hat{\gamma}^{(i_{end})}$
We start the minimization with the CEL0-regularized functional, using $\hat{\gamma}^{(0)} = \mathbf{0}$ as initialization. Indeed, we saw previously that this local minimum of $J_0$ is suppressed in $J_{\mathrm{CEL0}}$. Iterations of the weighted Fast Iterative Soft Thresholding Algorithm (weighted FISTA) minimize the convex majorizer of the nonconvex CEL0 functional. For a weighting vector $w$, the iterations use the proximal operator of the weighted $\ell_1$ function, defined component by component as:

$$\mathrm{prox}_{\|\cdot\|_{1,w},\, \lambda\beta}(x)_g = \max\left(0,\; 1 - \frac{\lambda\beta\, w_g}{|x_g|}\right) x_g\, \mathbb{1}_{\{|x_g| \neq 0\}}.$$

After some iterations, the estimated vector is used as initialization for IHT, which performs minimization steps on the $\ell_0$-regularized cost function. The hard threshold corresponds to the proximal operator of the $\ell_0$-norm, defined by:

$$\mathrm{prox}_{\|\cdot\|_0,\, \lambda\beta}(x)_g = x_g\, \mathbb{1}_{\{|x_g| \ge \sqrt{2\lambda\beta}\}}.$$
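Both proximal operators act component-wise and are straightforward to implement; a numpy sketch (names are ours; the hard threshold at $\sqrt{2\lambda\beta}$ follows the standard form of the $\ell_0$ proximal operator):

```python
import numpy as np

def prox_weighted_l1(x, lam_beta, w):
    """Component-wise prox of the weighted l1 norm: soft threshold by lam*beta*w_g (sketch)."""
    mag = np.maximum(np.abs(x), 1e-300)          # guards the |x_g| = 0 convention
    return np.maximum(0.0, 1.0 - lam_beta * w / mag) * x

def prox_l0(x, lam_beta):
    """Component-wise prox of lam*beta*||.||_0: keep x_g iff |x_g| >= sqrt(2*lam*beta) (sketch)."""
    return np.where(np.abs(x) >= np.sqrt(2.0 * lam_beta), x, 0.0)
```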
We then loop back and alternate between the loss surfaces. The behavior of our algorithm is represented in Figure 3c for close sources (in this article, we set $\epsilon_{lim} = 10^{-6}$, $\epsilon_{lim,1} = 10^{-2}$, $\epsilon_{lim,2} = 10^{-6}$, $n_{lim} = 2000$, $n_{lim,1} = 200$, $n_{lim,2} = 200$, $\tau_1 = 0.9$, $\tau_2 = 1$). In this noiseless case, it is the only algorithm that attains the global minimum of $J_0$.
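Assembling the pieces, here is a compact sketch of the alternating loop of Algorithm 1, reusing the two proximal operators sketched above (an illustration under our naming, not the authors' reference implementation; the stopping logic is simplified to the relative decrease of the data-fit error):

```python
import numpy as np

def alice_l0(B, y, lam, eps_lim=1e-6, eps1=1e-2, eps2=1e-6,
             n_lim=2000, n1=200, n2=200, tau1=0.9, tau2=1.0):
    """ALICE-L0: alternate weighted-FISTA steps (J_CEL0 majorizer) and IHT steps (J_0) (sketch)."""
    beta = 1.0 / np.linalg.norm(B, 2) ** 2            # step size 1/L
    gamma = np.zeros(B.shape[1], dtype=complex)       # gamma^(0) = 0
    err = [np.linalg.norm(B @ gamma - y) ** 2]

    def rel_dec():                                    # relative decrease of the error
        return abs(err[-1] - err[-2]) / max(err[-1], 1e-300) if len(err) > 1 else np.inf

    for _ in range(n_lim):
        # weighted FISTA on the convex majorizer of J_CEL0
        w = np.maximum(np.sqrt(2.0 * lam) - np.abs(gamma), 0.0)   # weights w_g
        z, t = gamma.copy(), 1.0
        for _ in range(n1):
            g_new = prox_weighted_l1(z - beta * B.conj().T @ (B @ z - y), lam * beta, w)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0     # T^(j) update
            z = g_new + (t - 1.0) / t_new * (g_new - gamma)       # momentum step
            gamma, t = g_new, t_new
            err.append(np.linalg.norm(B @ gamma - y) ** 2)
            if rel_dec() <= eps1:
                break
        # IHT on J_0, initialized at the FISTA output
        for _ in range(n2):
            gamma = prox_l0(gamma - beta * B.conj().T @ (B @ gamma - y), lam * beta)
            err.append(np.linalg.norm(B @ gamma - y) ** 2)
            if rel_dec() <= eps2:
                break
        eps1, eps2 = tau1 * eps1, tau2 * eps2          # tighten the tolerances
        if rel_dec() <= eps_lim:
            break
    return gamma
```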

5. Statistical Performance

The purpose of this section is to numerically quantify the algorithms' performance as a function of the sources' separation. Two criteria are used: the first is the percentage of outliers (only one estimated direction, or directions located at more than half a beamwidth from the true directions); the second is the Root-Mean-Square Error (RMSE) between the estimated and true directions, computed without the outliers. The simulation setup is the one described in Section 3.1, with a Signal-to-Noise Ratio per source equal to 0 dB. Results are presented in Figure 4: we observe that the proposed scheme ALICE-L0 outperforms the other methods in terms of statistical accuracy and resolution limit. Let us note that the behavior of IHT is unreliable with noise; its statistical results are thus not presented.

6. Conclusions

We linked the operating limits of IHT and IRL1-CEL0 to the properties of the corresponding loss landscapes in DOA estimation. To avoid the weaknesses of both criteria, an optimization scheme is proposed that alternately uses $J_0$ and $J_{\mathrm{CEL0}}$, i.e., that alternates between the two regularizations. A particular implementation using the $\lambda$ obtained in [12,13] has been successfully tested, improving, for example, the resolution limit. Ongoing work concerns the algorithm parameters (when to switch regularization), which are here left to the user.

Author Contributions

Conceptualization, A.D., A.F. and P.L.; methodology, A.D., A.F. and P.L.; software, A.D.; validation, A.D., A.F. and P.L.; formal analysis, A.D. and A.F.; investigation, A.D.; writing—original draft preparation, A.D.; writing—review and editing, A.D., A.F. and P.L.; visualization, A.D.; supervision, A.F. and P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Agence Nationale de la Recherche et de la Technologie (2018/0698). The work reported in this manuscript has resulted in the French patent number 2107767.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Van Trees, H.L. Optimum Array Processing; Part IV of Detection, Estimation, and Modulation Theory; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2002.
2. Avitabile, G.; Florio, A.; Coviello, G. Angle of Arrival Estimation Through a Full-Hardware Approach for Adaptive Beamforming. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 3033–3037.
3. Stoica, P.; Nehorai, A. MUSIC, Maximum Likelihood, and Cramer-Rao Bound. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 720–741.
4. Wang, W.; Wu, R. High Resolution Direction of Arrival (DOA) Estimation Based on Improved Orthogonal Matching Pursuit (OMP) Algorithm by Iterative Local Searching. Sensors 2013, 13, 11167–11183.
5. Liu, J.; Zhou, W.; Juwono, F.H. Joint Smoothed ℓ0-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar. Sensors 2017, 17, 1068.
6. Yang, Z.; Li, J.; Stoica, P.; Xie, L. Chapter 11 - Sparse Methods for Direction-of-Arrival Estimation. In Academic Press Library in Signal Processing; Chellappa, R., Theodoridis, S., Eds.; Academic Press: Cambridge, MA, USA, 2018; Volume 7, pp. 509–581.
7. Wu, X.; Zhu, W.; Yan, J. A High-Resolution DOA Estimation Method with a Family of Nonconvex Penalties. IEEE Trans. Veh. Technol. 2018, 67, 4925–4938.
8. Zhang, Z.; Wu, X.; Li, C.; Zhu, W.P. An ℓp-Norm Based Method for Off-Grid DOA Estimation. Circuits Syst. Signal Process. 2019, 38, 904–917.
9. Fan, Y.; Wang, J.; Du, R.; Lv, G. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector. Sensors 2018, 18, 1815.
10. Soubies, E.; Chinatto, A.; Larzabal, P.; Romano, J.M.T.; Blanc-Féraud, L. Direction-of-Arrival Estimation Through Exact Continuous ℓ2,0-Norm Relaxation. IEEE Signal Process. Lett. 2021, 28, 16–20.
11. Nikolova, M. Relationship between the Optimal Solutions of Least Squares Regularized with ℓ0-Norm and Constrained by k-Sparsity. Appl. Comput. Harmon. Anal. 2016, 41, 237–265.
12. Delmer, A.; Ferréol, A.; Larzabal, P. L0-Sparse DOA Estimation of Close Sources with Modeling Errors. In Proceedings of the 28th European Signal Processing Conference (EUSIPCO), Amsterdam, The Netherlands, 18–21 January 2020.
13. Delmer, A.; Ferréol, A.; Larzabal, P. On Regularization Parameter for L0-Sparse Covariance Fitting Based DOA Estimation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020.
14. Marmin, A.; Castella, M.; Pesquet, J.C. How to Globally Solve Non-Convex Optimization Problems Involving an Approximate ℓ0 Penalization. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 5601–5605.
15. Candès, E.J. The Restricted Isometry Property and Its Implications for Compressed Sensing. C. R. Math. 2008, 346, 589–592.
16. Selesnick, I. Sparse Regularization via Convex Analysis. IEEE Trans. Signal Process. 2017, 65, 4481–4494.
17. Wen, F.; Chu, L.; Liu, P.; Qiu, R.C. A Survey on Nonconvex Regularization-Based Sparse and Low-Rank Recovery in Signal Processing, Statistics, and Machine Learning. IEEE Access 2018, 6.
18. Zhang, C.H. Nearly Unbiased Variable Selection under Minimax Concave Penalty. Ann. Stat. 2010, 38, 894–942.
19. Soubies, E.; Blanc-Féraud, L.; Aubert, G. A Continuous Exact ℓ0 Penalty (CEL0) for Least Squares Regularized Problem. SIAM J. Imaging Sci. 2015, 8.
20. Nikolova, M. Description of the Minimizers of Least Squares Regularized with ℓ0-Norm. Uniqueness of the Global Minimizer. SIAM J. Imaging Sci. 2013, 6, 904–937.
21. Soubies, E.; Blanc-Féraud, L.; Aubert, G. New Insights on the Optimality Conditions of the ℓ2-ℓ0 Minimization Problem. J. Math. Imaging Vis. 2020, 62, 808–824.
Figure 1. Loss surfaces of $J_0$ (a–d) and $J_{\mathrm{CEL0}}$ (e–h) as a function of $\gamma_{\omega_1}$ and $\gamma_{\omega_2}$, for $\gamma = Z_p(\gamma_\omega)$ with $|\omega| = 2$, i.e., at most two non-zero components corresponding to the directions $\theta_{\omega_1} = \tilde{\theta}_1$ and a varying $\theta_{\omega_2}$. When those directions correspond to the true ones, $\tilde{\theta}_1 = 32$° and $\tilde{\theta}_2 = 62$° ((d) and (h)), the global minimum, indicated by a black filled circle ($\gamma_{\omega_1} = \gamma_{\omega_2} = 7$), is equal to $2\lambda = 19$. Local minima are indicated by blue asterisks, while light blue diamonds represent critical points that are not local minima.
Figure 2. Minimum of the loss surfaces of $J_0$ (a) and $J_{\mathrm{CEL0}}$ (b) for $\gamma = Z_p(\gamma_\omega)$, with $|\omega| = 2$, as a function of $\theta_{\omega_1}$ and $\theta_{\omega_2}$. The diagonal corresponds to $\theta_{\omega_1} = \theta_{\omega_2}$, i.e., $|\omega| = 1$.
Figure 3. Solutions as iterations proceed for close sources with no noise, for $\lambda = 0.78$. X-axis: iteration number. Y-axis: directions associated with the components of $\hat{\gamma}$. The color represents the level of the components. The true sources $\tilde{\theta}_1 = 32$° and $\tilde{\theta}_2 = 48$° are indicated by the black dotted lines. The corresponding components of the optimal solution are equal to 7; all others are null.
Figure 4. Performance as a function of the sources' separation: percentage of outliers and RMSE (not reported when there are more than 50% outliers). The regularization parameter $\lambda$ is fixed to 0.78 according to the theoretical analysis in [12].
