Article

A Sparsity-Aware Variable Kernel Width Proportionate Affine Projection Algorithm for Identifying Sparse Systems

1 College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
2 Key Laboratory of Microwave Remote Sensing, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(10), 1218; https://doi.org/10.3390/sym11101218
Submission received: 20 August 2019 / Revised: 22 September 2019 / Accepted: 25 September 2019 / Published: 1 October 2019

Abstract

A sparsity-aware variable kernel width proportionate affine projection (AP) algorithm is devised for identifying sparse systems in impulsive noise environments. For the devised algorithm, the maximum correntropy criterion (MCC) is employed to develop a new cost-function that improves the proportionate AP (PAP) algorithm; the variable kernel width technique and the l_p-norm-like constraint are then incorporated into the cost-function, yielding the l_p-norm variable kernel width proportionate affine projection (LP-VPAP) algorithm. The devised LP-VPAP algorithm is investigated and verified in impulsive interference environments. Experimental results show that the LP-VPAP achieves faster convergence and lower steady-state misalignment than the AP, zero-attracting AP (ZA-AP), reweighted ZA-AP (RZA-AP), PAP, MCC, variable kernel width MCC (VKW-MCC), and proportionate AP MCC (PAPMCC) algorithms.

1. Introduction

Today, with the evolution of information technology, adaptive filtering (AF) algorithms are widely used in wireless communication, noise reduction, system identification (SI), and automatic control systems [1,2,3,4,5,6]. The least-mean-square (LMS) algorithm is one of the most classic and commonly used algorithms [7,8]. The normalized LMS (NLMS) was subsequently proposed to improve stability and convergence performance [9]. However, LMS-like algorithms perform poorly when the input is a strongly correlated signal. On the basis of the NLMS, the affine projection (AP) algorithm was developed by reusing the input signal [10,11]; it can be regarded as a multi-order generalization of the NLMS that reuses the sample values of the input signal at the current and previous time instants. Compared with the LMS and NLMS, the AP algorithms behave considerably better, especially for colored inputs [12].
Recently, scholars have found that sparse systems are widespread in nature [13,14,15,16,17,18]. For example, in underwater communication systems, the underwater acoustic channel exhibits strong sparsity [19], and in network communications, the network echo channel also exhibits sparsity [20]. However, the traditional LMS, NLMS, and AP algorithms do not exploit the sparse characteristic of the system's impulse response (IR), which leaves considerable room for improvement. To fully use this sparsity, [21] proposed the well-known proportionate NLMS (PNLMS) algorithm, which introduces a proportionate update technique to reasonably allocate a step size to each filter tap coefficient. Inspired by the PNLMS, the proportionate AP (PAP) algorithm combines the proportionate idea with the data-reusing principle to fully use the sparsity in the system [22]. Various proportionate-type AF algorithms were subsequently proposed and analyzed [23,24,25]. Moreover, a family of zero-attraction (ZA) algorithms, such as the ZA-LMS and its reweighted form (RZA-LMS), the ZA-AP, and the RZA-AP [26,27,28,29,30,31,32,33], has been developed for sparse SI based on compressed sensing (CS) theory [34].
In many engineering fields, noise often exhibits strongly impulsive characteristics [35]. Traditional LMS-type and AP-type algorithms, whose expected cost-functions are built on E[e^2(n)] (where e(n) is the error signal and E[·] denotes the expectation operator), perform poorly in impulsive noise environments. To handle this problem, the maximum correntropy criterion (MCC) was proposed to provide resistance to impulsive noise [36,37], and a series of MCC-based AF algorithms followed [38,39,40,41,42,43,44,45,46,47,48]. The variable kernel width MCC (VKW-MCC) uses a variable kernel width technique to enhance the identification ability of the MCC [49]. The proportionate affine projection maximum correntropy criterion (PAPMCC) algorithm combines the MCC with the data-reusing technique to enhance the robustness of the PAP [50].
In this paper, the MCC is used to build a new cost-function that improves the PAP algorithm; the variable kernel width technique and the l_p-norm-like constraint are then integrated into the cost-function to develop the l_p-norm variable kernel width proportionate affine projection (LP-VPAP) algorithm. The devised LP-VPAP algorithm is investigated and verified in impulsive noise environments. Experimental results show that the LP-VPAP converges the fastest and attains the lowest steady-state misalignment compared with the AP, ZA-AP, RZA-AP, PAP, MCC, VKW-MCC, and PAPMCC algorithms.

2. Previous Work of the MCC and AP Algorithms

2.1. The Basic MCC Algorithm

In the framework of AF, the implementation schematic diagram for SI is presented in Figure 1. The vector x(n) = [x(n), x(n-1), ..., x(n-L+1)]^T is the system input signal, the IR is modeled as h(n) = [h_0(n), ..., h_{L-1}(n)]^T with L elements, and n denotes the time index. The desired signal d(n) is
d(n) = x^T(n) h(n) + r(n), (1)
in which r(n) denotes the additive interference. The output signal is defined as
y(n) = x^T(n) ĥ(n), (2)
in which ĥ(n) stands for the estimated IR vector. The estimation error e(n) is written as
e(n) = d(n) - y(n). (3)
The MCC aims to maximize the following cost-function:
J_MCC(n) = E[exp(-e^2(n)/(2σ^2))], (4)
where σ represents the Gaussian kernel width. Removing the expectation operator and computing the gradient of Equation (4), the iterative formula of the MCC is
ĥ(n+1) = ĥ(n) + μ_MCC exp(-e^2(n)/(2σ^2)) e(n) x(n), (5)
in which μ_MCC is the step size.
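As a minimal sketch of the MCC update in Equation (5) (assuming a NumPy environment; the function name and default parameter values are illustrative, not from the paper):

```python
import numpy as np

def mcc_update(h_hat, x, d, mu=0.01, sigma=1.0):
    """One iteration of the MCC algorithm, Equation (5).

    h_hat: current estimate of the IR (length L)
    x:     current input vector x(n) (length L)
    d:     desired sample d(n)
    """
    e = d - x @ h_hat                      # estimation error, Equation (3)
    gain = np.exp(-e**2 / (2 * sigma**2))  # correntropy weight: near 0 for outliers
    return h_hat + mu * gain * e * x
```

The exponential weight is what gives the MCC its robustness: a large impulsive error drives the weight toward zero, so the outlier barely perturbs the coefficients.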

2.2. AP Algorithm

The AP algorithm reuses information from previous time instants to accelerate convergence, particularly for colored inputs. The input data of the AP algorithm are expressed as an L × P matrix
X(n) = [x(n), x(n-1), x(n-2), ..., x(n-P+1)]. (6)
Herein, P denotes the projection order. The reference signal of the AP algorithm is d(n) = [d(n), d(n-1), d(n-2), ..., d(n-P+1)]^T. Then, the output is
y(n) = X^T(n) ĥ(n) (7)
with the error vector
e(n) = d(n) - y(n). (8)
The iterative formula of the AP algorithm is
ĥ(n) = ĥ(n-1) + μ_AP X(n) [X^T(n) X(n) + δ_AP I_P]^{-1} e(n), (9)
in which μ_AP represents the overall step size, δ_AP > 0 is a small regularization constant, and I_P is a P × P identity matrix.
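A minimal sketch of the AP update in Equation (9) (NumPy assumed; names and defaults are illustrative):

```python
import numpy as np

def ap_update(h_hat, X, d, mu=0.5, delta=1e-4):
    """One iteration of the AP algorithm, Equation (9).

    X: L x P input matrix [x(n), ..., x(n-P+1)], Equation (6)
    d: length-P reference vector
    """
    L, P = X.shape
    e = d - X.T @ h_hat                 # P-dimensional error vector, Equation (8)
    G = X.T @ X + delta * np.eye(P)     # regularized P x P Gram matrix
    # Solve the small P x P system instead of forming the inverse explicitly
    return h_hat + mu * X @ np.linalg.solve(G, e)
```

Only a P × P system is solved per iteration, which is why the AP's cost grows with the projection order P rather than with the filter length L alone.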

3. The Developed LP-VPAP Algorithm

Herein, we analyze the developed LP-VPAP algorithm in detail. It is derived by incorporating the variable kernel width technique and the l_p-norm-like constraint into a novel cost-function, which can be summarized as the following constrained problem:
minimize (1/2) ||ĥ(n+1) - ĥ(n)||^2_{K^{-1}(n)} + ρ ||K^{-1}(n) ĥ(n+1)||_p^p subject to d(n) - X^T(n) ĥ(n+1) = [1_P - ξ exp(-e(n) ⊙ e(n)/(2σ^2))] ⊙ e(n), (10)
where ||ĥ(n+1)||_p^p = Σ_{i=0}^{L-1} |ĥ_i(n+1)|^p, 1_P is a P × 1 vector with all elements equal to 1, ξ denotes the step size, and ⊙ is the Hadamard product. K(n) is a gain assignment matrix which allots a different gain to each coefficient [21]:
K(n) = diag{k_0(n), k_1(n), ..., k_{L-1}(n)}. (11)
The individual gain k_l(n) is
k_l(n) = φ_l(n-1) / Σ_{i=0}^{L-1} φ_i(n-1), (12)
with
φ_l(n-1) = max{α q, |ĥ_l(n)|},  q = max{β, |ĥ_0(n)|, |ĥ_1(n)|, ..., |ĥ_{L-1}(n)|}. (13)
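The proportionate gain assignment of Equations (11)-(13) can be sketched as follows (NumPy assumed; the α and β defaults are illustrative, not the paper's settings):

```python
import numpy as np

def gain_matrix(h_hat, alpha=5 / 1024, beta=0.01):
    """Proportionate gain assignment K(n), Equations (11)-(13).

    Larger-magnitude coefficients receive proportionally larger gains;
    alpha and beta keep the update of small/inactive taps from stalling.
    """
    q = max(beta, np.max(np.abs(h_hat)))       # Equation (13)
    phi = np.maximum(alpha * q, np.abs(h_hat))
    k = phi / np.sum(phi)                      # Equation (12): normalized gains
    return np.diag(k)                          # K(n), Equation (11)
```

Note that the gains sum to one, so the proportionate matrix redistributes, rather than increases, the overall adaptation energy.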
Parameters α and β prevent the coefficient updates from stalling. Employing the Lagrange multiplier (LM) method, the cost-function of the developed LP-VPAP is
J(ĥ(n+1)) = (1/2) ||ĥ(n+1) - ĥ(n)||^2_{K^{-1}(n)} + ρ ||K^{-1}(n) ĥ(n+1)||_p^p + λ { d(n) - X^T(n) ĥ(n+1) - [1_P - ξ exp(-e(n) ⊙ e(n)/(2σ^2))] ⊙ e(n) }, (14)
in which λ = [λ_0, λ_1, ..., λ_{P-1}]. Then, the gradients of J(ĥ(n+1)) with respect to ĥ(n+1) and λ are calculated, respectively:
∂J(ĥ(n+1))/∂ĥ(n+1) = K^{-1}(n) [ĥ(n+1) - ĥ(n)] + ρ p K^{-1}(n) sgn(ĥ(n+1)) / (ε + |ĥ(n+1)|)^{1-p} - X(n) λ^T (15)
and
∂J(ĥ(n+1))/∂λ = d(n) - X^T(n) ĥ(n+1) - [1_P - ξ exp(-e(n) ⊙ e(n)/(2σ^2))] ⊙ e(n). (16)
Assume that ĥ(n+1) ≈ ĥ(n) and let
∂J(ĥ(n+1))/∂ĥ(n+1) = 0 and ∂J(ĥ(n+1))/∂λ = 0. (17)
Then, one can obtain
ĥ(n+1) = ĥ(n) + K(n) X(n) λ^T - ρ p sgn(ĥ(n)) / (ε + |ĥ(n)|)^{1-p}, (18)
where ε is a small positive constant that prevents division by zero, and
d(n) = X^T(n) ĥ(n+1) + [1_P - ξ exp(-e(n) ⊙ e(n)/(2σ^2))] ⊙ e(n). (19)
From Equations (18) and (19), we have
λ^T = ξ [X^T(n) K(n) X(n)]^{-1} exp(-e(n) ⊙ e(n)/(2σ^2)) ⊙ e(n) + ρ p [X^T(n) K(n) X(n)]^{-1} X^T(n) sgn(ĥ(n)) / (ε + |ĥ(n)|)^{1-p}. (20)
Then, the iterative formula is
ĥ(n+1) = ĥ(n) + ξ K(n) X(n) [X^T(n) K(n) X(n)]^{-1} exp(-e(n) ⊙ e(n)/(2σ^2)) ⊙ e(n) + ρ p K(n) X(n) [X^T(n) K(n) X(n)]^{-1} X^T(n) sgn(ĥ(n)) / (ε + |ĥ(n)|)^{1-p} - ρ p sgn(ĥ(n)) / (ε + |ĥ(n)|)^{1-p}. (21)
In practice, the third term in Equation (21) plays a minor role and hence can be ignored [51,52]. To enhance the behavior of correntropy-based algorithms, a variable kernel width technique is used to automatically adjust σ in Equation (4). Suppose first that there is no impulsive interference. After obtaining e(n), the kernel width that yields the maximum damping of the error along the gradient-ascent direction is obtained by solving
max_{σ(n)} J_MCC(n) = exp(-e^2(n)/(2σ^2(n))). (22)
Taking the derivative of Equation (22) with respect to e(n) and setting it to zero yields the homogeneous differential equation
dσ(n)/de(n) = σ(n)/e(n). (23)
Solving Equation (23), we get
σ(n) = θ |e(n)|, (24)
where σ(n) represents the optimal kernel width and θ denotes a positive constant. In practice, the received signal may contain impulsive interference. To gain robustness, a sliding observation window over the error magnitudes is used to optimize σ(n):
σ(n) = min(σ_0, θ ē(n)), (25)
in which σ_0 represents a kernel-width bound, and ē(n) is given by
ē(n) = χ ē(n-1) + (1 - χ) min{|e(n)|, |e(n-1)|, ..., |e(n-N_0+1)|}, (26)
in which 0 ≤ χ < 1 denotes a forgetting factor and N_0 represents the observation window length. Then, Equation (21) is changed to
ĥ(n+1) = ĥ(n) + ξ K(n) X(n) [X^T(n) K(n) X(n) + δ_PAP I_P]^{-1} exp(-e(n) ⊙ e(n)/(2σ^2(n))) ⊙ e(n) - ρ p sgn(ĥ(n)) / (ε + |ĥ(n)|)^{1-p}. (27)
Equation (27) is the final iterative formula for the developed LP-VPAP algorithm.
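The kernel-width adaptation of Equations (25) and (26) can be sketched as follows (NumPy assumed; the θ, χ, and σ_0 defaults are illustrative, not the paper's settings):

```python
import numpy as np

def update_kernel_width(e_bar_prev, recent_errors, sigma0=1.0, theta=2.0, chi=0.9):
    """Sliding-window kernel width, Equations (25)-(26).

    e_bar_prev:    previous smoothed error magnitude e_bar(n-1)
    recent_errors: the last N0 error samples e(n), ..., e(n-N0+1)
    """
    # Equation (26): forgetting-factor recursion on the smallest recent |e|,
    # so isolated impulses do not inflate the estimate
    e_bar = chi * e_bar_prev + (1 - chi) * np.min(np.abs(recent_errors))
    # Equation (25): clip the width at the kernel bound sigma0
    sigma = min(sigma0, theta * e_bar)
    return sigma, e_bar
```

Taking the minimum over the window is what makes the width robust: a single impulsive sample in the window cannot widen the kernel and weaken the outlier suppression.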
The computational complexity of the LP-VPAP is compared with that of the MCC, VKW-MCC, AP, ZA-AP, RZA-AP, PAP, and PAPMCC algorithms in terms of the total number of additions, multiplications, and divisions per iteration. The comparison is presented in Table 1. The LP-VPAP algorithm incurs only a modest increase in computational complexity over the PAPMCC algorithm.
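Putting the pieces together, one LP-VPAP iteration of Equation (27) might look like the following sketch (NumPy assumed; K(n) and σ(n) are computed beforehand, and all parameter defaults are illustrative rather than the tuned settings of the paper):

```python
import numpy as np

def lp_vpap_update(h_hat, X, d, K, sigma, xi=0.2, rho=4e-7, p=0.5,
                   eps=1e-6, delta=1e-4):
    """One LP-VPAP iteration, Equation (27).

    X: L x P input matrix, d: length-P reference vector,
    K: L x L proportionate gain matrix, sigma: current kernel width.
    """
    L, P = X.shape
    e = d - X.T @ h_hat                              # a priori error vector
    w = np.exp(-e * e / (2 * sigma**2))              # correntropy weights (elementwise)
    G = X.T @ K @ X + delta * np.eye(P)              # regularized P x P Gram matrix
    update = xi * K @ X @ np.linalg.solve(G, w * e)  # robust proportionate AP term
    # l_p-norm-like zero attractor: pulls small coefficients toward zero
    zero_attr = rho * p * np.sign(h_hat) / (eps + np.abs(h_hat))**(1 - p)
    return h_hat + update - zero_attr
```

With ρ = 0, K = I, and a very large σ, the update degenerates to the plain AP recursion, which is a convenient sanity check.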

4. Performance Analysis

Herein, several examples are presented to investigate the performance of the devised LP-VPAP within the SI framework. The background noise r(n) consists of impulsive noise i(n) and white Gaussian noise (WGN) v(n), where i(n) is modeled as a Bernoulli-Gaussian process i(n) = b(n)g(n); g(n) is another WGN and b(n) is a Bernoulli process with P(b(n) = 1) = 0.1. The signal-to-noise ratio (SNR) and signal-to-interference ratio (SIR), defined by SNR = 10 log10(δ_x^2/δ_v^2) and SIR = 10 log10(δ_x^2/δ_i^2) (where δ_x^2, δ_v^2, and δ_i^2 represent the variances of x(n), v(n), and i(n), respectively), are set to 30 dB and 10 dB, respectively. In all experiments, L = 1024 and P = 4 are selected. The regularization parameters are set to δ_AP = δ_ZA-AP = δ_RZA-AP and δ_LP-VPAP = δ_PAP = (1/L) δ_AP [53]. The kernel width of the MCC is set to 1, and the observation length of the VKW-MCC and LP-VPAP algorithms is set to 25. A network echo channel, a classical sparse channel presented in Figure 2 whose active coefficients are distributed in [257,272], is used to evaluate the proposed LP-VPAP. The behavior of the LP-VPAP is evaluated by the normalized misalignment (NM), defined as 10 log10(||h - ĥ||_2^2 / ||h||_2^2).
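The noise model and the NM metric above can be sketched as follows (NumPy assumed; the variance values and RNG seed are illustrative, not calibrated to the stated SNR/SIR):

```python
import numpy as np

rng = np.random.default_rng(0)

def impulsive_noise(n_samples, var_wgn=1e-3, var_imp=10.0, prob=0.1):
    """Background noise r(n) = v(n) + b(n)g(n): WGN plus a
    Bernoulli-Gaussian impulsive component with P(b(n) = 1) = prob."""
    v = rng.normal(0.0, np.sqrt(var_wgn), n_samples)   # WGN v(n)
    b = rng.random(n_samples) < prob                   # Bernoulli process b(n)
    g = rng.normal(0.0, np.sqrt(var_imp), n_samples)   # WGN g(n)
    return v + b * g

def normalized_misalignment(h, h_hat):
    """NM = 10 log10(||h - h_hat||^2 / ||h||^2), in dB."""
    return 10 * np.log10(np.sum((h - h_hat)**2) / np.sum(h**2))
```

In practice the two variances would be scaled so that 10 log10(δ_x^2/δ_v^2) and 10 log10(δ_x^2/δ_i^2) match the SNR and SIR targets above.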

4.1. Performance of the LP-VPAP with Different p and ρ

First, the effect of p on the behavior of the devised LP-VPAP algorithm is investigated. Colored noise (CN), obtained by filtering WGN through a first-order autoregressive system with a pole at 0.8, is used as the input. Herein, ξ = 0.225. The results in Figure 3 illustrate that when p is set to 0.5 or 0.3, the LP-VPAP algorithm achieves the fastest convergence.
Second, the effect of ρ on the convergence of the LP-VPAP is analyzed. Herein, p = 0.5 is selected. The results in Figure 4 illustrate that the NM of the LP-VPAP algorithm increases as ρ increases, while the convergence rate becomes faster. Therefore, a balance between convergence rate and NM should be taken into consideration.

4.2. Performance Comparisons of the LP-VPAP Algorithm under Different Input Signals

According to the simulations above, the devised LP-VPAP obtains better performance when p = 0.5 and ρ = 4 × 10^-7 are selected. The identification behavior of the LP-VPAP is then compared with the AP, ZA-AP, RZA-AP, PAP, MCC, VKW-MCC, and PAPMCC algorithms for WGN and CN inputs. The step sizes for the other algorithms are as follows: μ_AP = μ_ZA-AP = μ_RZA-AP = 0.025, μ_PAP = 0.005, μ_MCC = 0.0006, μ_VKW-MCC = 0.00175, and μ_PAPMCC = 0.015. The performance comparisons of the LP-VPAP for the two inputs are presented in Figure 5 and Figure 6, respectively. It can be seen that the LP-VPAP algorithm achieves the lowest NM and a fast convergence rate. When the input is CN, the convergence of the MCC and VKW-MCC becomes much slower, while the proposed LP-VPAP remains better than all the related algorithms in terms of both convergence and identification error.

4.3. Tracking Behavior of the LP-VPAP

Herein, the tracking behavior of the LP-VPAP is investigated using two different systems. The first system, whose dominant taps are distributed in [257,272], is presented in Figure 7a; the second system, whose dominant taps are distributed in [769,784] and [257,272], is presented in Figure 7b. Other parameters are consistent with those in the previous subsection. Simulation results under different input signals are presented in Figure 8 and Figure 9. It can be seen that the LP-VPAP tracks the two different sparse systems, converges the fastest, and achieves the lowest NM. When the system becomes less sparse, sparsity-aware algorithms such as the ZA-AP, RZA-AP, PAPMCC, and the proposed LP-VPAP exhibit an increase in estimation error; this is visible in Figure 8 and Figure 9 because the proposed LP-VPAP is itself a sparsity-aware algorithm. Nevertheless, the LP-VPAP algorithm still provides the lowest estimation error.

4.4. Performance Comparisons of the LP-VPAP Algorithm with a Less Sparse Echo Path

In this subsection, the performance of the LP-VPAP is investigated using a CN input with a less sparse echo path whose dominant taps are distributed in [257,384]. Herein, ρ = 5 × 10^-8 is selected. Other parameters are the same as in the previous simulations. Simulation results are presented in Figure 10. Although the steady-state error of the devised LP-VPAP increases, it remains better than the related algorithms and provides the fastest convergence speed.

5. Conclusions

In this paper, a sparsity-aware variable kernel width PAP algorithm is devised for sparse SI in impulsive noise environments. The developed LP-VPAP is realized by combining the variable kernel width technique and the l_p-norm-like constraint. The key parameters are analyzed to discuss the performance of the developed LP-VPAP. Simulation results show that the LP-VPAP speeds up convergence and improves estimation accuracy compared with the AP, ZA-AP, RZA-AP, PAP, MCC, VKW-MCC, and PAPMCC algorithms.

Author Contributions

Z.J. (Zhengxiong Jiang) wrote the draft, programmed the code, and performed the simulations. Y.L. put forward the LP-VPAP algorithm and checked the code. X.H. and Z.J. (Zhan Jin) provided useful analysis of the LP-VPAP algorithm. All authors approved the contents of this paper.

Funding

This paper is supported by the National Key Research and Development Program of China (2016YFE0111100), Key Research and Development Program of Heilongjiang (GX17A016), the Science and Technology innovative Talents Foundation of Harbin (2016RAXXJ044), the Natural Science Foundation of Beijing (4182077), Natural Science Foundation of Heilongjiang Province, China (F2017004), China Postdoctoral Science Foundation (2017M620918 and 2019T120134), the Fundamental Research Funds for the Central Universities (HEUCFG201829 and 2072019CFG0801), and Opening Fund of Acoustics Science and Technology Laboratory (SSKF2016001).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ljung, L. System Identification: Theory for the User; Prentice-Hall: Upper Saddle River, NJ, USA, 1987. [Google Scholar]
  2. Benesty, J.; Gänsler, T.; Morgan, D.R.; Sondhi, M.M.; Gay, S.L. Advances in Network and Acoustic Echo Cancellation; Springer: Berlin, Germany, 2001. [Google Scholar]
  3. Li, Y.; Wang, Y.; Jiang, T. Sparse-aware set-membership NLMS algorithms and their application for sparse channel estimation and echo cancelation. AEU-Int. J. Electron. Commun. 2016, 70, 895–902. [Google Scholar] [CrossRef]
  4. Li, Y.; Wang, Y.; Yang, R.; Ablu, F. A soft parameter function penalized normalized maximum correntropy criterion algorithm for sparse system identification. Entropy 2017, 19, 45. [Google Scholar] [CrossRef]
  5. Li, Y.; Wang, Y.; Jiang, T. Norm-adaption penalized least mean square/fourth algorithm for sparse channel estimation. Signal Process. 2016, 128, 243–251. [Google Scholar] [CrossRef]
  6. Li, Y.; Jiang, Z.; Jin, Z.; Han, X.; Yin, J. Cluster-sparse proportionate NLMS algorithm with the hybrid norm constraint. IEEE Access 2018, 6, 47794–47803. [Google Scholar] [CrossRef]
  7. Haykin, S. Adaptive Filter Theory, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1991. [Google Scholar]
  8. Widrow, B.; Stearns, S.D. Adaptive Signal Processing; Prentice-Hall: Upper Saddle River, NJ, USA, 1985. [Google Scholar]
  9. Douglas, S.C.; Meng, T.Y. Normalized data nonlinearities for LMS adaptation. IEEE Trans. Signal Process. 1994, 42, 1352–1365. [Google Scholar] [CrossRef]
  10. Ozeki, K.; Umeda, T. An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electr. Commun. Jpn. 1984, 67, 19–27. [Google Scholar] [CrossRef]
  11. Shin, H.; Sayed, A.H.; Song, W. Variable step-size NLMS and affine projection algorithms. IEEE Signal Process. Lett. 2004, 11, 132–135. [Google Scholar] [CrossRef]
  12. Li, Y.; Jiang, Z.; Osman, O.M.O.; Han, X.; Yin, J. Mixed norm constrained sparse APA algorithm for satellite and network echo channel estimation. IEEE Access 2018, 6, 65901–65908. [Google Scholar] [CrossRef]
  13. Etter, D. Identification of sparse impulse response systems using an adaptive delay filter. In Proceedings of the 1985 IEEE International Conference on Acoustics, Speech and Signal Processing, Tampa, FL, USA, 26–29 April 1985; pp. 1169–1172. [Google Scholar]
  14. Li, W.; Preisig, J.C. Estimation of rapidly time-varying sparse channels. IEEE J. Ocean. Eng. 2007, 32, 927–939. [Google Scholar] [CrossRef]
  15. Li, Y.; Wang, Y.; Jiang, T. Sparse least mean mixed-norm adaptive filtering algorithms for sparse channel estimation applications. Int. J. Commun. Syst. 2017, 30, 1–14. [Google Scholar] [CrossRef]
  16. Li, Y.; Zhang, C.; Wang, S. Low-complexity non-uniform penalized affine projection algorithm for sparse system identification. Circuits Syst. Signal Process. 2016, 35, 1–14. [Google Scholar] [CrossRef]
  17. Shi, W.; Li, Y.; Zhao, L.; Liu, X. Controllable sparse antenna array for adaptive beamforming. IEEE Access 2019, 7, 6412–6423. [Google Scholar] [CrossRef]
  18. Zhang, X.; Jiang, T.; Li, Y.; Zakharov, Y. A novel block sparse reconstruction method for DOA estimation with unknown mutual coupling. IEEE Commun. Lett. 2019. [Google Scholar] [CrossRef]
  19. Berger, C.R.; Zhou, S.; Preisig, J.C.; Willett, P. Sparse channel estimation for multicarrier underwater acoustic communication: From subspace methods to compressed sensing. IEEE Trans. Signal Process. 2010, 58, 1708–1721. [Google Scholar] [CrossRef]
  20. Gänsler, T.; Gay, S.L.; Sondhi, M.M.; Benesty, J. Double-talk robust fast converging algorithms for network echo cancellation. IEEE Trans. Speech Audio Process. 2000, 8, 656–663. [Google Scholar] [CrossRef]
  21. Duttweiler, D.L. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. 2000, 8, 508–518. [Google Scholar] [CrossRef]
  22. Gansler, T.; Benesty, J.; Gay, S.L.; Sondhi, M.M. A robust proportionate affine projection algorithm for network echo cancellation. In Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing, Istanbul, Turkey, 5–9 June 2000; pp. 793–796. [Google Scholar]
  23. Benesty, J.; Gay, S.L. An improved PNLMS algorithm. In Proceedings of the 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, Orlando, FL, USA, 13–17 May 2002; pp. 1881–1884. [Google Scholar]
  24. Jin, Z.; Li, Y.; Wang, Y. An enhanced set-membership PNLMS algorithm with a correntropy induced metric constraint for acoustic channel estimation. Entropy 2017, 19, 281. [Google Scholar] [CrossRef]
  25. Deng, H.; Doroslovački, M. Improving convergence of the PNLMS algorithm for sparse impulse response identification. IEEE Signal Process. Lett. 2005, 12, 181–184. [Google Scholar] [CrossRef]
  26. Chen, Y.; Gu, Y.; Hero, A.O. Sparse LMS for system identification. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 3125–3128. [Google Scholar]
  27. Gu, Y.; Jin, J.; Mei, S. L0 norm constraint LMS for sparse system identification. IEEE Signal Process. Lett. 2009, 16, 774–777. [Google Scholar]
  28. Su, G.; Jin, J.; Gu, Y.; Wang, J. Performance analysis of l0 norm constraint least mean square algorithm. IEEE Trans. Signal Process. 2012, 60, 2223–2235. [Google Scholar] [CrossRef]
  29. Jiang, S.; Gu, Y. Block-sparsity-induced adaptive filter for multi-clustering system identification. IEEE Trans. Signal Process. 2015, 63, 5183–5330. [Google Scholar] [CrossRef]
  30. Jin, J.; Qu, Q.; Gu, Y. Robust zero-point attraction least mean square algorithm on near sparse system identification. IET Signal Process. 2013, 7, 210–218. [Google Scholar] [CrossRef]
  31. Meng, R.; de Lamare, R.C.; Nascimento, V.H. Sparsity-aware affine projection adaptive algorithms for system identification. In Proceedings of the Sensor Signal Processing for Defence (SSPD 2011), London, UK, 27–29 September 2011; pp. 793–796. [Google Scholar]
  32. Yu, Y.; Zhao, H.; Chen, B. Sparseness-controlled proportionate affine projection sign algorithms for acoustic echo cancellation. Circuits Syst. Signal Process. 2015, 34, 3933–3948. [Google Scholar] [CrossRef]
  33. Ma, W.; Chen, B.; Qu, H.; Zhao, J. Sparse least mean p-power algorithms for channel estimation in the presence of impulsive noise. Signal Image Video Process. 2016, 10, 503–510. [Google Scholar] [CrossRef]
  34. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  35. Vega, L.R.; Rey, H.; Benesty, J.; Tressens, S. A new robust variable step-size NLMS algorithm. IEEE Trans. Signal Process. 2008, 56, 1878–1893. [Google Scholar] [CrossRef]
  36. Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Príncipe, J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 2016, 64, 3376–3387. [Google Scholar] [CrossRef]
  37. Chen, B.; Wang, J.; Zhao, H.; Zheng, N.; Príncipe, J.C. Convergence of a fixed-point algorithm under maximum correntropy criterion. IEEE Signal Process. Lett. 2015, 22, 1723–1727. [Google Scholar] [CrossRef]
  38. Ma, W.; Qu, H.; Gui, G.; Xu, L.; Zhao, J.; Chen, B. Maximum correntropy criterion based sparse adaptive filtering algorithms for robust channel estimation under non-Gaussian environments. J. Frankl. Inst.-Eng. Appl. Math. 2015, 352, 2708–2727. [Google Scholar] [CrossRef] [Green Version]
  39. Li, Y.; Jiang, Z.; Shi, W.; Han, X.; Chen, B. Blocked maximum correntropy criterion algorithm for cluster-sparse system identifications. IEEE Trans. Circuits Syst. II-Express Briefs 2019. [Google Scholar] [CrossRef]
  40. Shi, W.; Li, Y.; Wang, Y. Noise-free maximum correntropy criterion algorithm in non-Gaussian environment. IEEE Trans. Circuits Syst. II-Express Briefs 2019. [Google Scholar] [CrossRef]
  41. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Príncipe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. 2014, 21, 880–884. [Google Scholar]
  42. Chen, B.; Príncipe, J.C. Maximum correntropy estimation is a smoothed MAP estimation. IEEE Signal Process. Lett. 2012, 19, 491–494. [Google Scholar] [CrossRef]
  43. Wu, Z.; Shi, J.; Zhang, X.; Ma, W.; Chen, B. Kernel recursive maximum correntropy. Signal Process. 2015, 117, 11–16. [Google Scholar] [CrossRef]
  44. Chen, B.; Wang, X.; Lu, N.; Wang, S.; Cao, J.; Qin, J. Mixture correntropy for robust learning. Pattern Recognit. 2018, 79, 318–327. [Google Scholar] [CrossRef]
  45. Wang, F.; He, Y.; Wang, S.; Chen, B. Maximum total correntropy adaptive filtering against heavy-tailed noises. Signal Process. 2017, 141, 84–95. [Google Scholar] [CrossRef]
  46. Wang, S.; Dang, L.; Chen, B.; Duan, S.; Wang, L.; Chi, K.T. Random fourier filters under maximum correntropy criterion. IEEE Trans. Circuits Syst. I-Regul. Pap. 2018, 65, 3390–3403. [Google Scholar] [CrossRef]
  47. Qian, G.; Wang, S. Generalized complex correntropy: Application to adaptive filtering of complex data. IEEE Access 2018, 6, 19113–19120. [Google Scholar] [CrossRef]
  48. Wang, S.; Dang, L.; Wang, W.; Qian, G.; Chi, K.T. Kernel adaptive filters with feedback based on maximum correntropy. IEEE Access 2018, 6, 10540–10552. [Google Scholar] [CrossRef]
  49. Huang, F.; Zhang, J.; Zhang, S. Adaptive filtering under a variable kernel width maximum correntropy criterion. IEEE Trans. Circuits Syst. II-Express Briefs 2017, 64, 1247–1251. [Google Scholar] [CrossRef]
  50. Jiang, Z.; Li, Y.; Huang, X. A correntropy-based proportionate affine projection algorithm for estimating sparse channels with impulsive noise. Entropy 2019, 21, 555. [Google Scholar] [CrossRef]
  51. Das, R.L.; Chakraborty, M. Improving the performance of the PNLMS algorithm using l1 norm regularization. IEEE Trans. Audio Speech Lang. Process. 2016, 24, 1280–1290. [Google Scholar] [CrossRef]
  52. Li, Y.; Li, W.; Yu, W.; Wan, J.; Li, Z. Sparse adaptive channel estimation based on lp-norm-penalized affine projection algorithm. Int. J. Antennas Propag. 2014, 2014, 1–8. [Google Scholar]
  53. Benesty, J.; Paleologu, C.; Ciochina, S. On regularization in adaptive filtering. IEEE Trans. Audio Speech Lang. Process. 2011, 19, 1734–1742. [Google Scholar] [CrossRef]
Figure 1. The schematic diagram of SI.
Figure 2. The IR used in the examples.
Figure 3. Effects of p on the devised LP-VPAP.
Figure 4. Effects of ρ on the devised LP-VPAP.
Figure 5. Performance comparisons of the devised LP-VPAP. Input: WGN.
Figure 6. Performance comparisons of the devised LP-VPAP. Input: CN.
Figure 7. The impulse response used in the simulations below. (a) One-cluster system; (b) Two-cluster system.
Figure 8. Tracking performance of the devised LP-VPAP. Input: WGN.
Figure 9. Tracking performance of the devised LP-VPAP. Input: CN.
Figure 10. Performance comparisons of the LP-VPAP algorithm with a less sparse echo path.
Table 1. Computational Complexity in Each Iteration.

Algorithm | Addition | Multiplication | Division
MCC | 2L | 2L + 5 | 1
VKW-MCC | 2L + 2 | 2L + 8 | 1
AP | (2P^2 + P)L | (2P^2 + 3P)L + P^2 | 0
ZA-AP | (2P^2 + P + 1)L | (2P^2 + 3P + 1)L + P^2 | 0
RZA-AP | (2P^2 + P + 2)L | (2P^2 + 3P + 2)L + P^2 | L
PAP | 2PL^2 + (2P^2 - P + 1)L - 1 | 2PL^2 + (2P^2 + 3P + 1)L + P^2 | L
PAPMCC | 2PL^2 + (2P^2 - P + 1)L - 1 | 2PL^2 + (2P^2 + 3P + 1)L + P^2 + 2P | L + P
LP-VPAP | 2PL^2 + (2P^2 - P + 3)L + 1 | 2PL^2 + (2P^2 + 3P + 2)L + P^2 + 2P + 4 | 2L + P

Jiang, Z.; Li, Y.; Huang, X.; Jin, Z. A Sparsity-Aware Variable Kernel Width Proportionate Affine Projection Algorithm for Identifying Sparse Systems. Symmetry 2019, 11, 1218. https://doi.org/10.3390/sym11101218