Article

A Robust Sparse Adaptive Filtering Algorithm with a Correntropy Induced Metric Constraint for Broadband Multi-Path Channel Estimation

1 College of Information and Communications Engineering, Harbin Engineering University, Harbin 150001, China
2 College of Communication and Information Engineering, Qiqihar University, Qiqihar 161006, China
3 College of Engineering, Huazhong Agricultural University, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Entropy 2016, 18(10), 380; https://doi.org/10.3390/e18100380
Submission received: 27 August 2016 / Revised: 15 October 2016 / Accepted: 20 October 2016 / Published: 24 October 2016
(This article belongs to the Special Issue Maximum Entropy and Its Application II)

Abstract

A robust sparse least-mean mixture-norm (LMMN) algorithm is proposed, and its performance is appraised in the context of estimating a broadband multi-path wireless channel. The proposed algorithm is implemented by integrating a correntropy-induced metric (CIM) penalty into the conventional LMMN cost function, and it is denoted as the CIM-based LMMN (CIM-LMMN) algorithm. The proposed CIM-LMMN algorithm is derived in detail within the kernel framework. Its updating equation provides a zero attractor that attracts the non-dominant channel coefficients to zero, and it also gives a tradeoff between sparsity and estimation misalignment. Moreover, the channel estimation behavior is investigated over a broadband sparse multi-path wireless channel, and the simulation results are compared with the least mean square/fourth (LMS/F), least mean square (LMS), least mean fourth (LMF) and recently-developed sparse channel estimation algorithms. The channel estimation performance obtained over the designated sparse channel demonstrates that the CIM-LMMN algorithm outperforms the recently-developed sparse LMMN algorithms and the related sparse channel estimation algorithms. From the results, we can see that our CIM-LMMN algorithm is robust and is superior to the mentioned algorithms in terms of both the convergence rate and the channel estimation misalignment for estimating a sparse channel.

1. Introduction

The adoption of adaptive filtering in wireless communication, speech signal processing, radar signal processing and adaptive control applications, for obtaining fast convergence and stability in estimation, has been studied over many years [1,2,3]. In recent decades, adaptive filtering has been receiving ever increasing attention, thanks to new developments such as sparse signal processing [4,5,6] and compressive sensing [7,8,9]. The improved adaptive filtering techniques pave the way for dealing with practical engineering problems and enable a full exploitation of the sparsity of practical signals [4,5,6,10,11,12,13], such as a broadband multi-path channel. However, traditional adaptive filter algorithms used in sparse channel estimation face unique challenges, both in terms of convergence and estimation misalignment, thus requiring new algorithms.
The broadband multi-path wireless communication channel is commonly found to be sparse based on measurement results [14,15,16,17,18]. For example, in broadband wireless communication systems, a "hilly terrain (HT)" delay profile generally takes a sparsely-distributed multi-path form, which has been considered for channel estimation and equalization [15,16]. Furthermore, the adaptive filter technique has been widely used to obtain accurate channel state information and thereby enhance the performance of wireless communication systems, with the adaptive filter employed for channel estimation [1,2,3,19]. Among the various adaptive filter algorithms, the least mean square (LMS) algorithm has been used extensively for channel estimation. Although the LMS algorithm can achieve good channel estimation performance, it is not well suited to sparse channels [10,11,12,13]. As is well known, broadband signal transmission will be adopted over the wireless channel in next-generation wireless communication systems [11]. A broadband wireless communication channel often exhibits frequency-selective fading. Thus, the broadband channel in such environments can be modeled as a sparse channel, which is dominated by very few large taps.
To address this problem, sparse adaptive filtering algorithms have been proposed for sparse broadband channel estimation and sparse system identification applications [4,5,6,10,11,12,13,20,21,22,23,24,25,26,27,28,29]. In [11], a broadband sparse multi-path channel is estimated by using sparse normalized LMS (NLMS) algorithms. These sparse adaptive algorithms can be categorized into two groups, namely proportionate-type algorithms [20,21,22] and zero-attracting algorithms [4,5,6,10,11,12,13,23,24,25,26,27,28]. The proportionate-type algorithms assign a different weighting to each coefficient according to the magnitudes of the channel coefficients [20], which increases the computational load. Recently, a new kind of sparse adaptive filter algorithm has been proposed on the basis of compressive sensing concepts. One of the most popular sparse adaptive filter algorithms is the zero-attracting LMS (ZA-LMS) algorithm [4], which incorporates an $l_1$-norm penalty into the cost function of the traditional LMS algorithm to give rise to a zero attractor term. As a result, the ZA-LMS algorithm accelerates convergence because the zero attractor quickly attracts the zero and near-zero coefficients to zero. Furthermore, an enhanced ZA-LMS algorithm named the reweighted ZA-LMS (RZA-LMS) has been reported based on a log-sum penalty [4], which, in comparison with the ZA-LMS algorithm, provides a reweighting factor to selectively exert zero attraction on the channel coefficients. After that, several zero-attracting variants of LMS have been reported and used for channel estimation [5,6,23]. Although these sparse LMS algorithms can provide good channel estimation performance for broadband wireless propagation, they are sensitive to the scaling of the input [24,25,26].
Consequently, zero-attracting techniques have been utilized to develop the sparse least mean fourth (LMF) algorithm [24], the sparse affine projection (AP) algorithm [12,13], the sparse least mean square/fourth (LMS/F) algorithm [25,26,27] and other sparse adaptive filtering algorithms [28]. Although these sparse adaptive filter algorithms can effectively improve the performance of the LMS algorithm, some of them are computationally complex, and others require careful tuning of key parameters to balance their performance. To take advantage of zero attraction and to overcome the drawbacks of the LMS algorithm, a mixture sparse adaptive filter algorithm denoted as sparse least-mean mixture-norm (LMMN) [30,31] has been proposed and used for broadband sparse channel estimation, including the zero-attracting (ZA) LMMN (ZA-LMMN) and reweighted ZA-LMMN (RZA-LMMN) algorithms [32]. These sparse LMMN algorithms provide better performance in comparison with the LMS, LMF, LMS/F and their related sparse forms.
In this paper, a robust correntropy-induced metric-constrained LMMN (CIM-LMMN) algorithm is proposed to fully exploit the sparsity of the broadband wireless multi-path channel for channel estimation. In the proposed CIM-LMMN algorithm, a correntropy-induced metric criterion is implemented within the kernel framework and integrated into the cost function of the traditional LMMN algorithm to create a zero attractor in the update equation. The derivation of the proposed CIM-LMMN algorithm is given in detail, and its channel estimation behavior is evaluated over a sparse multi-path wireless communication channel. The channel estimation results obtained from computer simulation show that the proposed CIM-LMMN algorithm is superior to the traditional LMS, LMF, LMS/F and their sparse forms in terms of the convergence rate and the steady-state misalignment.
The rest of this paper is organized as follows. In Section 2, the traditional LMMN algorithm and ZA techniques are addressed and stated within the adaptive filtering framework. In Section 3, we propose our CIM-LMMN algorithm in the framework of the mixed error criterion, CIM and ZA theories. In Section 4, the channel estimation behaviors of the proposed CIM-LMMN algorithm are investigated in the context of a sparse finite impulse response (FIR) multi-path channel. Finally, we summarize this work in Section 5.

2. Traditional LMMN Algorithm and ZA Technique

2.1. Traditional LMMN Algorithm

We address the traditional LMMN algorithm [30,31] based on the framework of a channel estimation system. Consider a broadband multi-path wireless communication channel whose finite impulse response (FIR) channel vector is defined as $\mathbf{h} = [h_0, h_1, \ldots, h_{N-1}]^{T}$, where $N$ is the total number of channel coefficients, of which $K$ dominant coefficients have non-zero magnitudes; herein, $K \ll N$. A training signal $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-N+1)]^{T}$ is conveyed through the unknown sparse broadband multi-path channel $\mathbf{h}$. Thus, the measured desired signal $d(n)$ at the receiver side can be written as:
$d(n) = \mathbf{h}^{T} \mathbf{x}(n) + u(n)$,   (1)
where $u(n)$ is zero-mean Gaussian noise, which is always present in wireless communication channels. Furthermore, $u(n)$ is assumed to be independent of the training signal $\mathbf{x}(n)$. The purpose of channel estimation based on the LMMN algorithm is to estimate $\mathbf{h}$ by minimizing the instantaneous error $e(n)$, which is the difference between the desired signal $d(n)$ and the channel estimation output $\hat{y}(n) = \hat{\mathbf{h}}^{T}(n)\mathbf{x}(n)$, where $\hat{\mathbf{h}}(n)$ is the estimated channel vector. According to adaptive filtering theory, the cost function of the traditional LMMN algorithm is written as [30,31]:
$J_{\mathrm{LMMN}}(n) = \frac{\chi}{2} J_2(n) + \frac{1-\chi}{4} J_4(n)$,   (2)
where $J_2(n) \triangleq E\{e^{2}(n)\}$ and $J_4(n) \triangleq E\{e^{4}(n)\}$. It is worth noting that the cost function $J_{\mathrm{LMMN}}(n)$ of the traditional LMMN algorithm shown in (2) is a linear combination of $J_2(n)$ and $J_4(n)$, and the parameter $\chi \in [0,1]$ balances the combination of $J_2(n)$ and $J_4(n)$. The gradient of $J_{\mathrm{LMMN}}(n)$ with respect to $\hat{\mathbf{h}}(n)$ is given by:
$\nabla J_{\mathrm{LMMN}}(n) = \frac{\partial J_{\mathrm{LMMN}}(n)}{\partial \hat{\mathbf{h}}(n)} = -E\left\{ e(n)\left[ \chi + (1-\chi)e^{2}(n) \right] \mathbf{x}(n) \right\}$,   (3)
where E [ · ] is an expectation operator. By using the stochastic gradient minimization method, we can approximate the exact gradient shown in (3). Then, we can write the updating equation of the traditional LMMN algorithm as [31]:
$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu\, e(n)\left[ \chi + (1-\chi)e^{2}(n) \right] \mathbf{x}(n)$,   (4)
where $\mu$ is the step size that controls the convergence of the traditional LMMN algorithm. It is clear that the traditional LMMN algorithm reduces to the basic LMS algorithm for $\chi = 1$, while it becomes the LMF algorithm for $\chi = 0$. Thus, the LMMN algorithm can fully utilize the advantages of both the LMS and LMF algorithms by properly choosing $\chi$.
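For illustration, a minimal NumPy sketch of one LMMN update step according to Equation (4) is given below; the function name, signature and default parameter values are illustrative assumptions and not part of the original derivation.

```python
import numpy as np

def lmmn_update(h_hat, x, d, mu=0.0065, chi=0.5):
    """One LMMN iteration, Equation (4); chi = 1 reduces to LMS, chi = 0 to LMF."""
    e = d - h_hat @ x                                        # instantaneous error e(n)
    h_hat = h_hat + mu * e * (chi + (1.0 - chi) * e**2) * x  # mixed-norm stochastic gradient step
    return h_hat, e
```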

2.2. ZA Technique

Here, we will discuss the ZA technique on the basis of the traditional LMMN algorithm. According to our previous work, a modified cost function has been constructed to give rise to the ZA-LMMN algorithm. Then, the cost function of the ZA-LMMN algorithm is expressed as [32]:
$J_{\mathrm{ZA}}(n) = \frac{\chi}{2} J_2(n) + \frac{1-\chi}{4} J_4(n) + \lambda \left\| \hat{\mathbf{h}}(n) \right\|_1$,   (5)
where $\lambda \in \mathbb{R}^{+}$ is a regularization parameter that balances the channel estimation error and the sparsity penalty, and $\|\hat{\mathbf{h}}(n)\|_1$ is the $l_1$-norm of $\hat{\mathbf{h}}(n)$. We use the stochastic gradient approximation method to obtain the ZA-LMMN algorithm, whose updating equation is [32]:
$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \underbrace{\mu_{\mathrm{ZA}}\, e(n)\left[ \chi + (1-\chi)e^{2}(n) \right] \mathbf{x}(n)}_{\text{traditional LMMN}} - \underbrace{\rho_{\mathrm{ZA}} \operatorname{sgn}\{\hat{\mathbf{h}}(n)\}}_{\text{sparse penalty of ZA-LMMN}}$,   (6)
where $\rho_{\mathrm{ZA}} = \lambda \mu_{\mathrm{ZA}}$ is a zero-attraction controlling factor that trades off the ZA strength, $\mu_{\mathrm{ZA}}$ is the step size of the ZA-LMMN algorithm and $\operatorname{sgn}\{\hat{\mathbf{h}}(n)\}$ denotes the element-wise sign operator. In comparison with the traditional LMMN algorithm, the ZA-LMMN algorithm contains an additional term $\rho_{\mathrm{ZA}} \operatorname{sgn}\{\hat{\mathbf{h}}(n)\}$, which attracts the zero channel coefficients to zero quickly and is defined as the zero attractor of the ZA-LMMN algorithm. Generally speaking, this zero attractor accelerates the convergence of the ZA-LMMN algorithm for a broadband multi-path sparse channel whose coefficients are dominated by zeros.
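Extending the sketch above (and reusing its import), the ZA-LMMN update of Equation (6) adds the sign-based zero attractor; the default values are taken from the simulation settings in Section 4 and are only indicative.

```python
def za_lmmn_update(h_hat, x, d, mu_za=0.0104, chi=0.5, rho_za=6e-5):
    """One ZA-LMMN iteration, Equation (6): the LMMN step plus an l1 zero attractor."""
    e = d - h_hat @ x
    h_hat = (h_hat
             + mu_za * e * (chi + (1.0 - chi) * e**2) * x
             - rho_za * np.sign(h_hat))   # zero attractor: pulls small taps toward zero
    return h_hat, e
```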

3. Proposed Sparse CIM-LMMN Algorithm

Since the LMMN algorithm combines the advantages of both the LMS and LMF algorithms, we propose a CIM-LMMN algorithm to fully exploit the sparsity of the broadband multi-path wireless channel. Based on the ZA techniques [4,5,6,10,11,12,13,22,23,24,25,26,27,28,32] and the CIM theory [33,34,35,36,37,38,39,40], we propose a robust sparse LMMN algorithm by exerting a CIM penalty on the channel coefficient vector, and we use this constraint to modify the cost function of the traditional LMMN algorithm. In CIM theory, the CIM measures the similarity in kernel space between two random vectors $\mathbf{p} = [p_1, \ldots, p_N]^{T}$ and $\mathbf{q} = [q_1, \ldots, q_N]^{T}$, and it is defined as [33,34,35,36,37,38,39,40]:
$\mathrm{CIM}(\mathbf{p}, \mathbf{q}) = \left( k(0) - \hat{V}(\mathbf{p}, \mathbf{q}) \right)^{1/2}$,   (7)
where $k(0) = \frac{1}{\sigma\sqrt{2\pi}}$, and:
$\hat{V}(\mathbf{p}, \mathbf{q}) = E\left[ k(\mathbf{p}, \mathbf{q}) \right] = \int k(p, q)\, dF_{PQ}(p, q)$,   (8)
where $\sigma$ represents the kernel width, $k(\cdot,\cdot)$ denotes a shift-invariant Mercer kernel [38,39,40,41] and $F_{PQ}(p,q)$ is the joint distribution of $(p,q)$. In practical engineering, the distribution of the data is unknown; thus, the correntropy is estimated by the sample mean $\hat{V}(\mathbf{p}, \mathbf{q}) = \frac{1}{N}\sum_{i=1}^{N} k(p_i, q_i)$. Here, we consider the typical kernel used in correntropy, namely the Gaussian kernel defined as:
$k(p, q) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{e^{2}}{2\sigma^{2}} \right)$,   (9)
where $e = p - q$. The CIM can be used to account for the number of non-zero channel coefficients, and hence we have:
$\mathrm{CIM}^{2}(\mathbf{p}, \mathbf{0}) = \frac{k(0)}{N} \sum_{i=1}^{N} \left[ 1 - \exp\left( -\frac{p_i^{2}}{2\sigma^{2}} \right) \right]$.   (10)
It also can be seen that the CIM is a nonlinear metric in the input space.
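As a small illustration of Equation (10), the following sketch (a hypothetical helper, continuing the NumPy snippets above) evaluates CIM²(p, 0); taps far below σ contribute almost nothing, while each tap much larger than σ contributes roughly k(0)/N, so the sum behaves like a smooth count of the non-zero coefficients.

```python
def cim_squared(p, sigma=0.01):
    """CIM^2(p, 0) from Equation (10): a smooth surrogate for the l0-norm of p."""
    N = p.size
    k0 = 1.0 / (sigma * np.sqrt(2.0 * np.pi))   # k(0) of the Gaussian kernel
    return (k0 / N) * np.sum(1.0 - np.exp(-p**2 / (2.0 * sigma**2)))
```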
Then, we present our proposed CIM-LMMN algorithm by using the ZA technique and CIM theory. Herein, we integrate the Gaussian kernel-based CIM into the cost function of the traditional LMMN algorithm. As a result, we can get the cost function of the proposed CIM-LMMN algorithm, which is given by:
$J_{\mathrm{CIM}}(n) = \frac{\chi}{2} J_2(n) + \frac{1-\chi}{4} J_4(n) + \lambda_{\mathrm{CIM}}\, \mathrm{CIM}^{2}(\hat{\mathbf{h}}(n), \mathbf{0})$, whose instantaneous form is $\hat{J}_{\mathrm{CIM}}(n) = \frac{\chi}{2} e^{2}(n) + \frac{1-\chi}{4} e^{4}(n) + \lambda_{\mathrm{CIM}} \frac{k(0)}{N} \sum_{i=1}^{N} \left[ 1 - \exp\left( -\frac{\hat{h}_i^{2}(n)}{2\sigma^{2}} \right) \right]$,   (11)
where $e(n) = d(n) - \hat{y}(n)$ is the instantaneous error at instant $n$, and $0 \le \chi \le 1$ controls the mixture of $J_2(n)$ and $J_4(n)$. Furthermore, we employ the stochastic gradient approximation on the instantaneous cost in Equation (11). The stochastic gradient of Equation (11), which defines the search direction, is:
$\nabla \hat{J}_{\mathrm{CIM}}(n) \triangleq \frac{\partial \hat{J}_{\mathrm{CIM}}(n)}{\partial \hat{h}_i(n)} = -e(n)\left[ \chi + (1-\chi)e^{2}(n) \right] x_i(n) + \lambda_{\mathrm{CIM}} \frac{1}{N \sigma^{3} \sqrt{2\pi}}\, \hat{h}_i(n) \exp\left( -\frac{\hat{h}_i^{2}(n)}{2\sigma^{2}} \right)$.   (12)
By introducing a step size η, the updating equation of the proposed CIM-LMMN algorithm for each coefficient can be expressed as:
$\hat{h}_i(n+1) = \hat{h}_i(n) - \eta \nabla \hat{J}_{\mathrm{CIM}}(n) = \hat{h}_i(n) + \eta\, e(n)\left[ \chi + (1-\chi)e^{2}(n) \right] x_i(n) - \rho_{\mathrm{CIM}} \frac{1}{N \sigma^{3} \sqrt{2\pi}}\, \hat{h}_i(n) \exp\left( -\frac{\hat{h}_i^{2}(n)}{2\sigma^{2}} \right)$,   (13)
where $\rho_{\mathrm{CIM}} = \eta \lambda_{\mathrm{CIM}} > 0$ is a regularization parameter used for balancing the estimation error and the sparsity penalty. In vector form, Equation (13) becomes:
$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \eta\, e(n)\left[ \chi + (1-\chi)e^{2}(n) \right] \mathbf{x}(n) - \rho_{\mathrm{CIM}} \frac{1}{N \sigma^{3} \sqrt{2\pi}}\, \hat{\mathbf{h}}(n) \odot \exp\left( -\frac{\hat{\mathbf{h}}^{2}(n)}{2\sigma^{2}} \right)$,   (14)
where the exponential and the product $\odot$ are applied element-wise.
Compared with the traditional LMMN algorithm, the iterations of the proposed CIM-LMMN algorithm contain an additional ZA term, whose strength is controlled by the parameter $\rho_{\mathrm{CIM}}$. Moreover, the computational complexity is still low, requiring only $3N$ additions, $2N$ multiplications and $N$ exponential calculations.
From the derivation of the proposed CIM-LMMN algorithm, we can see that $\mathrm{CIM}^{2}(\mathbf{p}, \mathbf{0})$ approaches the $l_0$-norm as $\sigma \to 0$, provided that $|\hat{h}_i(n)| > \sigma$ for every $\hat{h}_i(n) \neq 0$ [41]. Thus, the CIM given in (10) provides an approximation of the $l_0$-norm, which has been widely used for sparsity exploitation. By incorporating this CIM into the cost function of the traditional LMMN algorithm, a new cost function $J_{\mathrm{CIM}}(n)$ has been created. Similarly, the updating equation of the proposed CIM-LMMN algorithm is obtained by the stochastic gradient minimization method, as shown in (14). It is found that there is an additional term $\rho_{\mathrm{CIM}} \frac{1}{N \sigma^{3} \sqrt{2\pi}}\, \hat{\mathbf{h}}(n) \odot \exp\left( -\frac{\hat{\mathbf{h}}^{2}(n)}{2\sigma^{2}} \right)$, which acts as an extra zero attractor. Thus, we can conclude that the proposed CIM-LMMN algorithm can be written as:
$\hat{\mathbf{h}}(n+1) = \underbrace{\hat{\mathbf{h}}(n) + \eta\, e(n)\left[ \chi + (1-\chi)e^{2}(n) \right] \mathbf{x}(n)}_{\text{LMMN algorithm}} + \underbrace{\left(\text{CIM-based zero attractor}\right)}_{\text{yielding the CIM-LMMN algorithm}}$.   (15)
Thus, the proposed CIM-LMMN algorithm is also a zero-attracting adaptive filtering algorithm, which is implemented by integrating a CIM-based zero attractor term into the update equation of the traditional LMMN algorithm. Moreover, a zero attractor of this kind can also be realized by using the $l_p$-norm [6,22], combined $l_0$- and $l_1$-norms [4,5,10,27] or a smooth approximation of the $l_0$-norm [5,11].
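To make the update concrete, a minimal sketch of one CIM-LMMN iteration following Equation (14) is given below, continuing the NumPy snippets above; the default values mirror the parameter choices reported in the first experiment of Section 4 (η = 0.015, σ = 0.01, ρ_CIM = 1.8 × 10⁻⁴), and the function itself is only an illustrative reading of the update, not the authors' reference implementation.

```python
def cim_lmmn_update(h_hat, x, d, eta=0.015, chi=0.5, rho_cim=1.8e-4, sigma=0.01):
    """One CIM-LMMN iteration, Equation (14): LMMN step plus a CIM-based zero attractor."""
    N = h_hat.size
    e = d - h_hat @ x                                           # instantaneous error e(n)
    attractor = (rho_cim / (N * sigma**3 * np.sqrt(2.0 * np.pi))
                 * h_hat * np.exp(-h_hat**2 / (2.0 * sigma**2)))
    h_hat = h_hat + eta * e * (chi + (1.0 - chi) * e**2) * x - attractor
    return h_hat, e
```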

4. Channel Estimation Performance Investigation

On the basis of the previous research, we present the channel estimation behavior of the newly-developed CIM-LMMN algorithm over a broadband sparse multi-path wireless communication channel, with an experiment setup similar to that in [6,11,13,22,23,24,25,26,27,28,32]. To verify the effectiveness of the proposed CIM-LMMN algorithm, we also compare its performance with the traditional LMS, LMF, LMS/F and their related sparsity-aware algorithms. In the experiments, each point is averaged over 500 Monte Carlo runs for all of the adaptive filtering algorithms considered. In this paper, we use a multi-path FIR channel with 16 taps to evaluate the channel estimation behavior of the CIM-LMMN algorithm. Moreover, the number of dominant channel coefficients is defined as the sparsity level and is denoted by $K$. Herein, we investigate the sparsity levels $K = 1$, $K = 2$ and $K = 4$, similar to [4,5,6,10,11,12,13,22,23,24,25,26,27,28,32]. In fact, for a sparse channel, $K$ taps are non-zero, while the other $(N - K)$ channel taps are set to zero. In all of the simulations, the $K$ dominant channel coefficients are drawn from a random distribution, and the positions of the $K$ taps are also randomly distributed within the length of the designated sparse channel, subject to $\|\mathbf{h}\|_2^2 = 1$. A white Gaussian random signal is used as the training signal, and the noise signal $u(n)$ is assumed to be independent of $\mathbf{x}(n)$. The received signal is normalized, and the noise power is set to $1 \times 10^{-1}$ in all of the experiments. We use the mean square error (MSE) to measure the channel estimation behavior, and the MSE is defined as $\mathrm{MSE}(n) = 10 \log_{10} \left\| \mathbf{h} - \hat{\mathbf{h}}(n) \right\|_2^2$.
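The experimental setup described above can be reproduced, under the stated assumptions, with a short sketch like the following; the helper names, the seed handling and the loop length are illustrative, and the MSE curves reported in the figures would be obtained by averaging such runs over 500 Monte Carlo trials.

```python
def make_sparse_channel(N=16, K=4, rng=None):
    """Random sparse FIR channel with K non-zero taps, normalized so ||h||_2^2 = 1."""
    rng = rng or np.random.default_rng(0)
    h = np.zeros(N)
    pos = rng.choice(N, size=K, replace=False)      # random positions of the dominant taps
    h[pos] = rng.standard_normal(K)
    return h / np.linalg.norm(h)

def run_trial(h, n_iter=2000, noise_power=1e-1, rng=None):
    """Single CIM-LMMN run over a white Gaussian training signal; returns the MSE curve in dB."""
    rng = rng or np.random.default_rng(1)
    N = h.size
    h_hat = np.zeros(N)
    x_buf = np.zeros(N)                             # tapped delay line x(n)
    mse = np.empty(n_iter)
    for n in range(n_iter):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = rng.standard_normal()            # new white Gaussian training sample
        d = h @ x_buf + np.sqrt(noise_power) * rng.standard_normal()
        h_hat, _ = cim_lmmn_update(h_hat, x_buf, d)
        mse[n] = 10.0 * np.log10(np.sum((h - h_hat) ** 2))
    return mse
```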
Firstly, we investigate the convergence of the proposed CIM-LMMN algorithm. To the best of our knowledge, the previously-presented mixed LMS/F algorithm converges faster than the LMS and LMF algorithms. Thus, we compare the convergence of the CIM-LMMN algorithm with the LMS, LMS/F, LMMN, ZA-LMS/F, reweighted ZA-LMS/F (RZA-LMS/F), ZA-LMMN and reweighted ZA-LMMN (RZA-LMMN) algorithms. In this experiment, the simulation parameters are: $\mu_{\mathrm{LMS}} = 0.004$, $\mu_{\mathrm{LMSF}} = 0.01$, $\mu = 0.0065$, $\chi = 0.5$, $\mu_{\mathrm{ZALMSF}} = 0.15$, $\mu_{\mathrm{ZA}} = 0.0104$, $\mu_{\mathrm{RZALMSF}} = 0.022$, $\mu_{\mathrm{RZA}} = 0.012$, $\eta = 0.015$, $\sigma = 0.01$, $\rho_{\mathrm{ZALMSF}} = 5 \times 10^{-5}$, $\rho_{\mathrm{ZA}} = 6 \times 10^{-5}$, $\rho_{\mathrm{RZALMSF}} = 5 \times 10^{-3}$, $\rho_{\mathrm{RZA}} = 2 \times 10^{-4}$, $\rho_{\mathrm{CIM}} = 1.8 \times 10^{-4}$, where $\mu_{\mathrm{LMS}}$, $\mu_{\mathrm{LMSF}}$, $\mu_{\mathrm{ZALMSF}}$, $\mu_{\mathrm{RZALMSF}}$ and $\mu_{\mathrm{RZA}}$ are the step sizes of the LMS, LMS/F, ZA-LMS/F, RZA-LMS/F and RZA-LMMN algorithms, while $\rho_{\mathrm{ZALMSF}}$, $\rho_{\mathrm{RZALMSF}}$ and $\rho_{\mathrm{RZA}}$ are the regularization parameters of the ZA-LMS/F, RZA-LMS/F and RZA-LMMN algorithms. The results of the convergence study of the proposed CIM-LMMN algorithm are reported in Figure 1. In this simulation, the step sizes are chosen to obtain the same steady-state MSE for each algorithm. From this figure, one can see that our proposed CIM-LMMN algorithm converges fastest for the same channel estimation error floor. Compared with the previously-proposed ZA-LMMN and RZA-LMMN algorithms, the CIM-LMMN algorithm reaches a stable MSE floor in about 100 fewer iterations.
Secondly, we investigate the channel estimation performance with respect to the MSE floor to show the superior behavior of the CIM-LMMN algorithm, comparing it with the LMS, LMS/F, LMMN, ZA-LMS/F, ZA-LMMN, RZA-LMS/F and RZA-LMMN algorithms. Furthermore, we also investigate the effect of the sparsity levels $K = 1$, $K = 2$ and $K = 4$. The CIM-LMMN adaptive filtering algorithm is affected by the step size, the regularization parameter, the kernel width, $\chi$ and the sparsity level. Thus, in this experiment, the simulation parameters of the above-mentioned algorithms are set to give nearly the same initial convergence so that the MSE can be compared at different sparsity levels; these parameters are: $\mu_{\mathrm{LMS}} = 0.006$, $\mu_{\mathrm{LMSF}} = 0.0125$, $\mu = 0.0065$, $\mu_{\mathrm{ZALMSF}} = \mu_{\mathrm{RZALMSF}} = 0.012$, $\mu_{\mathrm{ZA}} = \mu_{\mathrm{RZA}} = 0.006$, $\rho_{\mathrm{ZALMSF}} = \rho_{\mathrm{ZA}} = 5 \times 10^{-5}$, $\rho_{\mathrm{RZALMSF}} = 2 \times 10^{-4}$, $\rho_{\mathrm{RZA}} = 3 \times 10^{-4}$, $\eta = 0.006$, $\rho_{\mathrm{CIM}} = 5 \times 10^{-5}$. The channel estimation performance of the proposed CIM-LMMN algorithm for $K = 1$, $K = 2$ and $K = 4$ is illustrated in Figure 2, Figure 3 and Figure 4, respectively. We can see that the proposed CIM-LMMN algorithm has the lowest MSE for $K = 1$, achieving a 3-dB gain in comparison with the RZA-LMMN algorithm. The sparsity level effects on the CIM-LMMN algorithm from $K = 2$ to $K = 4$, given in Figure 3 and Figure 4, show that our proposed CIM-LMMN algorithm is still superior to the mentioned adaptive filtering algorithms for sparse channel estimation. It is found that the CIM-LMMN algorithm has about 2.5-dB and 1-dB gains compared with the RZA-LMMN algorithm for $K = 2$ and $K = 4$, respectively.
As we know, the LMMN algorithm reduces to the traditional LMS algorithm for $\chi = 1$. Thus, we here compare the channel estimation behavior with the LMS, ZA-LMS and RZA-LMS algorithms. Furthermore, the ZA-LMMN and RZA-LMMN algorithms are included to further verify the effectiveness and the superiority of the CIM-LMMN algorithm. In this experiment, the simulation parameters for these algorithms are: $\mu_{\mathrm{LMS}} = \mu = \mu_{\mathrm{ZALMS}} = \mu_{\mathrm{ZA}} = \mu_{\mathrm{RZALMS}} = 0.004$, $\mu_{\mathrm{LMSF}} = 0.008$, $\mu_{\mathrm{RZA}} = 0.0038$, $\rho_{\mathrm{ZALMS}} = \rho_{\mathrm{RZALMS}} = 5 \times 10^{-5}$, $\rho_{\mathrm{ZA}} = \rho_{\mathrm{RZA}} = 3 \times 10^{-5}$, $\eta = 0.004$, $\rho_{\mathrm{CIM}} = 5 \times 10^{-5}$. The channel estimation behaviors for the sparsity levels $K = 1$, $K = 2$ and $K = 4$ are reported in Figure 5, Figure 6 and Figure 7, respectively. When $K = 1$, the steady-state error floor of our proposed CIM-LMMN algorithm is much lower than those of the previously-reported algorithms. Such a low MSE is significant for obtaining good channel state information, which can potentially enhance the quality of wireless communication systems in practical engineering applications. With an increment of $K$ from $K = 2$ to $K = 4$, the sparsity of the channel is reduced, and hence the MSE floor increases. However, our proposed CIM-LMMN algorithm still outperforms the previously-reported ZA-LMMN and RZA-LMMN algorithms for handling sparse channel estimation.
On the other hand, the LMMN algorithm reduces to the LMF algorithm when $\chi$ is set to zero. To quantify the robustness of the CIM-LMMN algorithm, its channel estimation is also investigated and compared with the LMF, ZA-LMF and RZA-LMF algorithms. Herein, to obtain nearly the same initial convergence rate, the parameters used in this experiment are set to: $\mu_{\mathrm{LMSF}} = 0.005$, $\mu = \mu_{\mathrm{ZA}} = 0.003$, $\mu_{\mathrm{ZALMF}} = \mu_{\mathrm{RZALMF}} = 0.0048$, $\mu_{\mathrm{RZA}} = 0.004$, $\rho_{\mathrm{ZALMF}} = \rho_{\mathrm{ZA}} = \rho_{\mathrm{RZALMF}} = 3 \times 10^{-5}$, $\rho_{\mathrm{RZA}} = 3 \times 10^{-4}$, $\eta = 0.0044$, $\rho_{\mathrm{CIM}} = 2 \times 10^{-5}$. The channel estimation performance compared with the LMF-family algorithms is shown in Figure 8, Figure 9 and Figure 10. As expected, the proposed CIM-LMMN algorithm is superior to the earlier reported LMF, LMS/F, LMMN, ZA-LMF, RZA-LMF, ZA-LMMN and RZA-LMMN algorithms. Furthermore, we can see that our proposed CIM-LMMN algorithm is better than the mentioned algorithms with respect to both the initial convergence and the steady-state performance. Compared to the RZA-LMMN algorithm, the convergence of the proposed CIM-LMMN algorithm is much faster for $K = 1$, because the CIM-LMMN algorithm integrates a CIM penalty term to account for the non-zero channel coefficients. It is worth noting that the CIM-LMMN algorithm achieves a considerable gain in comparison with the RZA-LMMN algorithm for $K = 4$ without sacrificing convergence.
From the above discussions and the previous studies [26,27], we know that the RZA-LMS/F algorithm is better than the RZA-LMS and RZA-LMF algorithms for sparse channel estimation. Moreover, the RZA-LMMN algorithm outperforms the ZA-LMMN and traditional LMMN algorithms for dealing with sparse signals, as investigated in [32]. Thus, we investigate the tracking behavior of the proposed CIM-LMMN algorithm in comparison with the RZA-LMS/F and RZA-LMMN algorithms. In this simulation, the parameters are set to obtain the desired final MSE of each algorithm with the same convergence rate at the initial stage; these parameters are: $N = 16$, $\mu_{\mathrm{RZALMSF}} = 0.065$, $\rho_{\mathrm{RZALMSF}} = 9 \times 10^{-6}$, $\mu_{\mathrm{RZA}} = 0.0065$, $\rho_{\mathrm{RZA}} = 1 \times 10^{-4}$, $\eta = 0.0065$, $\rho_{\mathrm{CIM}} = 3 \times 10^{-5}$, $\sigma = 0.01$. Herein, the signal-to-noise ratio is set to 30 dB. The simulation result is shown in Figure 11. It is found that the proposed algorithm can track the variation of the channel well, and it still outperforms the RZA-LMS/F and RZA-LMMN algorithms with respect to the MSE floor at the same initial convergence rate.
We now draw conclusions on the considered channel estimation performance. The proposed CIM-LMMN algorithm provides the fastest convergence rate and the lowest channel estimation misalignment compared with the traditional LMS/F, LMS, LMF and their related ZA algorithms. To obtain both fast convergence and low misalignment, the CIM has been employed to exploit the sparsity of the broadband multi-path wireless channel: the CIM measure accounts for the dominant channel taps and forces the non-dominant channel coefficients toward zero quickly. Moreover, the ZA strength can be controlled by $\rho_{\mathrm{CIM}}$ to provide good channel estimation performance. Thus, we can say that the CIM-LMMN algorithm is robust and effective for sparse adaptive channel estimation.

5. Conclusions

An enhanced CIM-LMMN algorithm has been proposed and applied to broadband sparse multi-path wireless communication channel estimation. The proposed CIM-LMMN algorithm is realized by incorporating a CIM penalty within the kernel framework into the cost function so that a new ZA term arises in its iterations. The channel estimation performance of the CIM-LMMN algorithm has been discussed in the context of a sparse channel, and the simulation results show that the CIM-LMMN algorithm achieves at least a 0.5-dB gain in comparison with the LMF, LMS, LMS/F and their sparse ZA algorithms. Therefore, we can say that the low-complexity CIM-LMMN algorithm is robust and effective for sparse channel estimation applications in practical engineering.

Acknowledgments

This work was partially supported by the Navy Defense Foundation of China (4010403020102), the National Natural Science Foundation of China (61571149), the Science and Technology Innovative Talents Foundation of Harbin (2013RFXXJ083, 2016RAXXJ044), the Projects for the Selected Returned Overseas Chinese Scholars of Heilongjiang Province of China and the Fundamental Research Funds for the Central Universities (HEUCFD1433, HEUCF160815 and 2662016PY123).

Author Contributions

Yingsong Li proposed the idea of the CIM-LMMN algorithm and verified the coding. Zhan Jin implemented the code and carried out some of the simulations in this paper. Yanyan Wang performed some of the simulations and researched the background. Rui Yang provided some analysis of the proposed CIM-LMMN algorithm. The authors wrote this paper together, and all have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Haykin, S. Adaptive Filter Theory, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1991.
2. Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementation, 4th ed.; Springer: Berlin/Heidelberg, Germany, 2013.
3. Sayed, A.H. Fundamentals of Adaptive Filtering, 1st ed.; Wiley: Hoboken, NJ, USA, 2003; p. 1168.
4. Chen, Y.; Gu, Y.; Hero, A.O. Sparse LMS for system identification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'09), Taipei, Taiwan, 19–24 April 2009; pp. 3125–3128.
5. Gu, Y.; Jin, J.; Mei, S. L0 norm constraint LMS algorithm for sparse system identification. IEEE Signal Process. Lett. 2009, 16, 774–777.
6. Taheri, O.; Vorobyov, S.A. Sparse channel estimation with Lp-norm and reweighted L1-norm penalized least mean squares. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2011), Prague, Czech Republic, 22–27 May 2011.
7. Tibshirani, R. Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288.
8. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
9. Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
10. Li, Y.; Wang, Y.; Jiang, T. Sparse channel estimation based on a p-norm-like constrained least mean fourth algorithm. In Proceedings of the 2015 International Conference on Wireless Communications and Signal Processing (WCSP 2015), Nanjing, China, 15–17 October 2015.
11. Gui, G.; Adachi, F. Improved least mean square algorithm with application to adaptive sparse channel estimation. EURASIP J. Wirel. Commun. Netw. 2013, 2013, 204.
12. Li, Y.; Zhang, C.; Wang, S. Low-complexity non-uniform penalized affine projection algorithm for sparse system identification. Circuits Syst. Signal Process. 2016, 35, 1611–1624.
13. Li, Y.; Li, W.; Yu, W.; Wan, J.; Li, Z. Sparse adaptive channel estimation based on lp-norm-penalized affine projection algorithm. Int. J. Antennas Propag. 2014, 2014, 434659.
14. Proakis, J.G. Digital Communications, 4th ed.; McGraw-Hill: New York, NY, USA, 2000.
15. Cotter, S.F.; Rao, B.D. Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Commun. 2002, 50, 374–377.
16. Ariyavisitakul, S.; Sollenberger, N.R.; Greenstein, L.J. Tap-selectable decision feedback equalization. In Proceedings of the 1997 IEEE International Conference on Communications, Towards the Knowledge Millennium (ICC'97), Montreal, QC, Canada, 12 June 1997.
17. Vuokko, L.; Kolmonen, V.-M.; Salo, J.; Vainikainen, P. Measurement of large-scale cluster power characteristics for geometric channel models. IEEE Trans. Antennas Propag. 2007, 55, 3361–3365.
18. Korowajczuk, L. LTE, WiMAX and WLAN Network Design, Optimization and Performance Analysis; Wiley: Hoboken, NJ, USA, 2011.
19. Schafhuber, D.; Matz, G.; Hlawatsch, F. Adaptive Wiener filters for time-varying channel estimation in wireless OFDM systems. In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), Hong Kong, China, 6–10 April 2003.
20. Duttweiler, D.L. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. 2000, 8, 508–518.
21. Deng, H.; Doroslovacki, M. Improving convergence of the PNLMS algorithm for sparse impulse response identification. IEEE Signal Process. Lett. 2005, 12, 181–184.
22. Li, Y.; Hamamura, M. An improved proportionate normalized least-mean-square algorithm for broadband multipath channel estimation. Sci. World J. 2014, 2014, 572969.
23. Li, Y.; Hamamura, M. Zero-attracting variable-step-size least mean square algorithms for adaptive sparse channel estimation. Int. J. Adapt. Control Signal Process. 2015, 29, 1189–1206.
24. Gui, G.; Adachi, F. Sparse least mean fourth algorithm for adaptive channel estimation in low signal-to-noise ratio region. Int. J. Commun. Syst. 2014, 27, 3147–3157.
25. Gui, G.; Xu, L.; Matsushita, S. Improved adaptive sparse channel estimation using mixed square/fourth error criterion. J. Frankl. Inst. 2015, 352, 4579–4594.
26. Gui, G.; Mehbodniya, A.; Adachi, F. Least mean square/fourth algorithm for adaptive sparse channel estimation. In Proceedings of the 2013 IEEE 24th International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), London, UK, 8–11 September 2013.
27. Li, Y.; Wang, Y.; Jiang, T. Norm-adaption penalized least mean square/fourth algorithm for sparse channel estimation. Signal Process. 2016, 128, 243–251.
28. Li, Y.; Wang, Y.; Jiang, T. Sparse-aware set-membership NLMS algorithms and their application for sparse channel estimation and echo cancelation. AEU Int. J. Electron. Commun. 2016, 70, 895–902.
29. Albu, F.; Gully, A.; de Lamare, R. Sparsity-aware pseudo affine projection algorithm for active noise control. In Proceedings of the 2014 Annual Summit and Conference on Asia-Pacific Signal and Information Processing Association (APSIPA), Chiang Mai, Thailand, 9–12 December 2014.
30. Chambers, J.A.; Tanrikulu, O.; Constantinides, A.G. Least mean mixed-norm adaptive filtering. Electron. Lett. 1994, 30, 1574–1575.
31. Tanrikulu, O.; Chambers, J.A. Convergence and steady-state properties of the least-mean mixed-norm (LMMN) adaptive algorithm. IEE Proc. Vis. Image Signal Process. 1996, 143, 137–142.
32. Li, Y.; Wang, Y.; Jiang, T. Sparse least mean mixed-norm adaptive filtering algorithms for sparse channel estimation application. Int. J. Commun. Syst. 2016.
33. Chen, B.; Principe, J.C. Maximum correntropy estimation is a smoothed MAP estimation. IEEE Signal Process. Lett. 2012, 19, 491–494.
34. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Principe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. 2014, 21, 880–884.
35. Wu, Z.; Peng, S.; Chen, B.; Zhao, H. Robust Hammerstein adaptive filtering under maximum correntropy criterion. Entropy 2015, 17, 7149–7166.
36. Huijse, P.; Estevez, P.A.; Zegers, P.; Principe, J.C.; Protopapas, P. Period estimation in astronomical time series using slotted correntropy. IEEE Signal Process. Lett. 2011, 18, 371–374.
37. Li, Y.; Wang, Y. Sparse SM-NLMS algorithm based on correntropy criterion. Electron. Lett. 2016, 52, 1461–1463.
38. Zhao, S.; Chen, B.; Principe, J.C. Kernel adaptive filtering with maximum correntropy criterion. In Proceedings of the 2011 International Joint Conference on Neural Networks (IJCNN), San Jose, CA, USA, 31 July–5 August 2011.
39. Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Principe, J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 2016, 64, 3376–3387.
40. Chen, B.; Wang, J.; Zhao, H.; Zheng, N.; Principe, J.C. Convergence of a fixed-point algorithm under maximum correntropy criterion. IEEE Signal Process. Lett. 2015, 22, 1723–1727.
41. Seth, S.; Principe, J.C. Compressed signal reconstruction using the correntropy induced metric. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'08), Las Vegas, NV, USA, 31 March–4 April 2008.
Figure 1. Convergence comparisons of the proposed CIM-LMMN algorithm with previously-reported sparse channel estimation algorithms.
Figure 2. Channel estimation behavior of the CIM-LMMN algorithm compared with the LMS/F algorithm for K = 1.
Figure 3. Channel estimation behavior of the CIM-LMMN algorithm compared with the LMS/F algorithm for K = 2.
Figure 4. Channel estimation behavior of the CIM-LMMN algorithm compared with the LMS/F algorithm for K = 4.
Figure 5. Channel estimation behavior of the CIM-LMMN algorithm compared with the LMS algorithm for K = 1.
Figure 6. Channel estimation behavior of the CIM-LMMN algorithm compared with the LMS algorithm for K = 2.
Figure 7. Channel estimation behavior of the CIM-LMMN algorithm compared with the LMS algorithm for K = 4.
Figure 8. Channel estimation behavior of the CIM-LMMN algorithm compared with the LMF algorithm for K = 1.
Figure 9. Channel estimation behavior of the CIM-LMMN algorithm compared with the LMF algorithm for K = 2.
Figure 10. Channel estimation behavior of the CIM-LMMN algorithm compared with the LMF algorithm for K = 4.
Figure 11. Tracking behavior of the proposed CIM-LMMN algorithm.
