Article

Robust Bias Compensation Method for Sparse Normalized Quasi-Newton Least-Mean with Variable Mixing-Norm Adaptive Filtering

1 Department of Electrical Engineering, National Ilan University, Yilan 260007, Taiwan
2 College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(9), 1310; https://doi.org/10.3390/math12091310
Submission received: 3 April 2024 / Revised: 21 April 2024 / Accepted: 23 April 2024 / Published: 25 April 2024
(This article belongs to the Special Issue Advanced Research in Data-Centric AI)

Abstract

Input noise causes an inescapable bias in the weight vectors of adaptive filters during adaptation. Moreover, impulse noise at the output of the unknown system can prevent bias compensation from converging. This paper presents a robust bias compensation method for a sparse normalized quasi-Newton least-mean (BC-SNQNLM) adaptive filtering algorithm to address these issues. We mathematically derive the bias-compensation terms in an impulsive-noise environment. Inspired by the convex combination of adaptive filters with different step sizes, we propose a novel variable mixing-norm method, BC-SNQNLM-VMN, to accelerate the convergence of our BC-SNQNLM algorithm. Simulation results confirm that the proposed method significantly outperforms comparative works in terms of normalized mean-squared deviation (NMSD) in the steady state.

1. Introduction

Adaptive filtering algorithms play a pivotal role in signal processing, encompassing tasks such as system identification, channel estimation, feedback cancellation, and noise removal [1]. While the literature commonly assumes a Gaussian distribution for system noise, real-world scenarios, including underwater acoustics [2,3,4,5], low-frequency atmospheric disturbances [6], and artificial interference [7,8,9], often exhibit sudden changes in signal or noise intensity [10]. These abrupt variations can disrupt the algorithms, acting as strong external interference or outliers [11,12].
Recently, the sparse quasi-Newton least-mean mixed-norm (SQNLMMN) algorithm [13] has emerged as a potential solution to mitigate the impact of both Gaussian and non-Gaussian noise on the convergence behavior of adaptive algorithms [14]. This algorithm introduces a novel cost function incorporating a linear combination of the $L_2$ and $L_4$ norms while promoting sparsity. Despite its promise, the SQNLMMN algorithm exhibits specific weaknesses. Firstly, it overlooks the presence of input noise at the adaptive filter inputs, leading to biased coefficient estimates [15,16]. Secondly, the fixed mixing parameter $\delta$, governing the balance between the two norms, cannot adapt dynamically; this rigidity trades off the convergence rate against the mean squared deviation (MSD) of the weight coefficients. Notably, the approach of employing a mixed step size for the least mean fourth (LMF) and normalized LMF algorithms [17] to address such trade-offs [18] differs from the concept of a variable mixing parameter [19,20].
Based on unbiasedness criteria, several methods have been reported to compensate for the biases caused by noisy inputs, such as the bias-compensated least mean square algorithm (BC-LMS) [15,21], bias-compensated normalized LMS algorithm (BC-NLMS) [22], bias-compensated proportionate normalized least mean square algorithm (BC-PNLMS) [23], bias-compensated normalized least mean fourth algorithm (BC-NLMF) [24], bias-compensated affine-projection-like algorithm (BC-APL) [25], and bias-compensated least mean mixed-norm algorithm (BC-LMMN) [26]. However, the BC-LMMN algorithm used a fixed mixing factor, which resulted in higher misadjustment. In [27], the authors proposed a bias-compensated generalized mixed-norm algorithm with the correntropy-induced metric (CIM) as the sparse penalty constraint for sparse system identification. Unlike the conventional mixed-norm approach, they mixed norms with $p = 1.1$ and $q = 1.2$ to better combat non-Gaussian as well as impulse noise, and a modified cost function that accounts for the cost caused by the input noise was used to compensate for the bias [27]. Hereinafter, we refer to it as the BC-CIM-LGMN algorithm. Yet, estimating the input noise power might be adversely affected by impulse noise present at the output of the unidentified system. The same trick of adopting the CIM as the sparse penalty constraint is applied in [23], referred to as BC-CIM-PNLMS hereinafter. On the other hand, an $L_0$-norm cost function has been used to accelerate convergence for sparse systems; in [28], the authors combined it with an improved BC-NLMS, referred to as the BC-ZA-NLMS algorithm. Unfortunately, the BC-ZA-NLMS algorithm fails to consider the impact of impulse noise. Researchers have also proposed combining the variable step-size (VSS) method with BC-LMS for the direction-of-arrival (DOA) estimation problem [29].
However, few studies have comprehensively addressed all impairments, including noisy input, impulse noise in the observations (measurements), and sparse unknown systems. Building upon the SQNLMMN algorithm, this paper introduces a robust bias compensation method for the sparse normalized quasi-Newton least-mean with variable mixing-norm (BC-SNQNLM-VMN) adaptive filtering algorithm. The key contributions of this research are as follows. Firstly, we introduce a normalized variant of the SQNLMMN algorithm and incorporate the Huber function to alleviate the impact of impulse noise. Secondly, we develop a bias compensation method to counteract the influence of noisy input on the weight coefficients of the adaptive filter. Thirdly, we introduce a convex combination approach for the mixing parameter, enabling a variable mixing parameter in the mixed-norm approach. Consequently, our proposed method can simultaneously achieve rapid convergence and low misadjustment.
The rest of this paper is organized as follows. Section 2 describes the system model we considered in this work. Section 3 briefly reviews the SQNLMMN algorithm and outlines the proposed BC-SNQNLM-VMN adaptive filtering algorithm. Section 4 validates the effectiveness of our proposed BC-SNQNLM-VMN algorithm by using computer simulations. Conclusions and future prospects are drawn in Section 5.

2. System Models

The system to be identified, whose finite impulse response is represented by the vector $\mathbf{w} \in \mathbb{R}^{M \times 1}$, is depicted in Figure 1; both input noise and observation noise are considered. The output of this system is subject to corruption by two types of noise. The observable desired signal $d(n)$ can be mathematically defined as follows:
$$ d(n) = y(n) + v(n) + \Omega(n) = \mathbf{u}^T(n)\,\mathbf{w}(n) + v(n) + \Omega(n), \qquad (1) $$
where $(\cdot)^T$ denotes transposition; $\mathbf{w}(n) = [w_0(n), w_1(n), \ldots, w_{M-1}(n)]^T$ is the weight vector, in column form, of the unknown system to be identified; and $\mathbf{u}(n) = [u(n), u(n-1), \ldots, u(n-M+1)]^T$ denotes the input regressor vector. Note that the measurement noise is assumed to consist of two components: background additive Gaussian white noise (AGWN), denoted as $v(n)$, and impulse noise, denoted as $\Omega(n)$. The AGWN $v(n)$ has zero mean and variance $\sigma_v^2$. This variance is related to the signal-to-noise ratio (SNR) as follows:
$$ \mathrm{SNR}_v = 10 \log_{10}\left(\sigma_y^2 / \sigma_v^2\right), \qquad (2) $$
where $\sigma_y^2$ represents the variance of $y(n)$. In addition, impulse noise is accounted for in the system model; two conventional models are employed in this study. The first is the Bernoulli-Gaussian (BG) model [30], defined as follows:
$$ \Omega(n) = b(n) \cdot v_\Omega(n), \qquad (3) $$
where $b(n)$ takes the value one with probability $P_r$ and zero with probability $(1 - P_r)$, and $v_\Omega(n)$ represents a Gaussian white process with zero mean and variance $\sigma_\Omega^2$. The strength of this impulse noise is quantified by $\mathrm{SNR}_\Omega$ as follows:
$$ \mathrm{SNR}_\Omega = 10 \log_{10}\left(\sigma_y^2 / \sigma_\Omega^2\right). \qquad (4) $$
Another model utilized is the alpha-stable impulse noise model [13], which can be characterized by the parameter vector $V = (\alpha_s, \beta_s, \Gamma_s, \Delta_s)$. Here, $\alpha_s \in (0, 2]$ represents the characteristic factor, $\beta_s \in [-1, 1]$ denotes the symmetry parameter, $\Gamma_s \geq 0$ stands for the dispersion parameter, and $\Delta_s$ indicates the location parameter. A reduced $\alpha_s$ value signifies a heightened presence of impulse noise.
Figure 1. Model of system identification with the proposed BC-SNQNLM-VMN algorithm.
In this paper, we consider the noisy-input case, i.e., the input of the adaptive filter, $\bar{u}(n)$, differs from that of the unknown system. We assume that AGWN input noise $\eta(n)$ with zero mean and variance $\sigma_\eta^2$ is added to the original input $u(n)$, i.e., $\bar{u}(n) = u(n) + \eta(n)$. The strength of $\eta(n)$ is determined by $\mathrm{SNR}_\eta$ as follows:
$$ \mathrm{SNR}_\eta = 10 \log_{10}\left(\sigma_u^2 / \sigma_\eta^2\right), \qquad (5) $$
where $\sigma_u^2$ denotes the variance of $u(n)$. The weights of the adaptive filter, denoted by $\hat{\mathbf{w}}(n)$, are updated iteratively through an adaptive algorithm, which computes correction terms based on $\hat{y}(n)$. These corrections rely on the error signal, expressed as:
$$ \bar{e}(n) = d(n) - \hat{y}(n) = d(n) - \bar{\mathbf{u}}^T(n)\,\hat{\mathbf{w}}(n), \qquad (6) $$
where $\bar{\mathbf{u}}(n) = [\bar{u}(n), \bar{u}(n-1), \ldots, \bar{u}(n-M+1)]^T$ denotes the input regressor vector linked to the adaptive filter.
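To make the model concrete, the following Python sketch synthesizes one realization of the signals in Figure 1, using the parameter values later adopted in Section 4.1. It is an illustrative sketch only: the function make_signals, the seed, and all variable names are ours, and scipy's scale argument is used as a stand-in for the dispersion $\Gamma_s$ of the alpha-stable model.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)

def make_signals(w, N, snr_v_db=30.0, snr_eta_db=10.0, snr_omega_db=-30.0, pr=1e-3):
    """One trial of the signals in Figure 1 (hypothetical helper)."""
    u = rng.standard_normal(N)                    # clean input u(n), standard AGWN
    y = np.convolve(u, w)[:N]                     # y(n) = u^T(n) w, FIR filtering
    sigma_y2 = y.var()
    # background AGWN v(n); variance fixed by SNR_v, Equation (2)
    v = np.sqrt(sigma_y2 / 10 ** (snr_v_db / 10)) * rng.standard_normal(N)
    # Bernoulli-Gaussian impulse noise Omega(n), Equations (3) and (4)
    sigma_om2 = sigma_y2 / 10 ** (snr_omega_db / 10)
    omega = (rng.random(N) < pr) * np.sqrt(sigma_om2) * rng.standard_normal(N)
    d = y + v + omega                             # desired signal, Equation (1)
    # noisy filter input u_bar(n) = u(n) + eta(n); eta set by SNR_eta, Equation (5)
    eta = np.sqrt(u.var() / 10 ** (snr_eta_db / 10)) * rng.standard_normal(N)
    return u + eta, d

# alpha-stable alternative for Omega(n) with V = (1.8, 0, 0.1, 0)
omega_alpha = levy_stable.rvs(1.8, 0.0, loc=0.0, scale=0.1, size=10000)
```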

3. Proposed BC-SNQNLM-VMN Adaptive Filtering Algorithm

3.1. Review of SQNLMMN Algorithm [13]

The cost function of the SQNLMMN algorithm is expressed as:
$$ J(\mathbf{w}(n)) = \frac{\delta}{2} J_2(\mathbf{w}(n)) + \frac{1-\delta}{4} J_4(\mathbf{w}(n)) + \gamma\, S(\mathbf{w}(n)), \qquad (7) $$
where $J_2 \triangleq E[e^2(n)]$ and $J_4 \triangleq E[e^4(n)]$ are the cost functions of the least mean square (LMS) and LMF algorithms, respectively; a fixed mixing parameter $0 \leq \delta \leq 1$ controls the mixture of the two cost functions; and $S(\cdot)$ denotes the sparsity-promoting term, which is regulated by a positive parameter $\gamma$. According to [13], the resulting update recursion of the SQNLMMN algorithm is as follows:
$$ \hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu_1 \mathbf{P}(n)\,\mathbf{u}(n)\,e(n) + \mu_2 \mathbf{P}(n)\,\mathbf{u}(n)\,e^3(n) - p\,\mathbf{P}(n)\,\mathbf{g}(n), \qquad (8) $$
where the step sizes $\mu_1 = \delta\mu$ and $\mu_2 = (1-\delta)\mu$ control the convergence rates of the LMS and LMF parts, respectively, with $\mu$ a common step size; $p\,\mathbf{P}(n)\,\mathbf{g}(n)$ denotes the sparsity penalty term, and $p$ is the parameter that controls the zero attraction [13]. The matrix $\mathbf{P}(n) \in \mathbb{R}^{M \times M}$, which approximates the inverse of the Hessian matrix of the cost function, can be expressed as follows:
$$ \mathbf{P}(n) = \mathbf{B}(n) - \alpha\gamma\, \mathbf{B}(n) \left(\mathbf{I} + \alpha\gamma\, \mathbf{H}(n-1)\,\mathbf{B}(n)\right)^{-1} \mathbf{H}(n-1)\,\mathbf{B}(n), \qquad (9) $$
where $\mathbf{B}(n) \in \mathbb{R}^{M \times M}$ is described as follows:
$$ \mathbf{B}(n) = \frac{1}{1-\alpha}\left[\mathbf{P}(n-1) - \frac{\mathbf{P}^T(n-1)\,\mathbf{u}(n)\,\mathbf{u}^T(n)\,\mathbf{P}(n-1)}{\frac{1-\alpha}{\alpha_1 + \alpha_2 e^2(n)} + \mathbf{u}^T(n)\,\mathbf{P}(n-1)\,\mathbf{u}(n)}\right] \qquad (10) $$
with $\alpha_1 = \delta\alpha$ and $\alpha_2 = 3(1-\delta)\alpha$. Note that $\mathbf{H}(n) \in \mathbb{R}^{M \times M}$ is the Hessian matrix of $S(\hat{\mathbf{w}}(n))$. Let $S(\cdot)$ be the $L_0$ norm, approximated as follows:
$$ S(\hat{\mathbf{w}}(n)) \approx \sum_{i=0}^{M-1} \left(1 - e^{-\beta |\hat{w}_i(n)|}\right), \qquad (11) $$
where the parameter $\beta > 0$ is used to determine the region of zero attraction [31].
The gradient of this penalty term is as follows:
$$ \mathbf{g}(n) = \left[t(\hat{w}_0(n)), t(\hat{w}_1(n)), \ldots, t(\hat{w}_{M-1}(n))\right]^T, \qquad (12) $$
where
$$ t(\hat{w}_i(n)) = \beta\, \mathrm{sgn}(\hat{w}_i(n))\, e^{-\beta |\hat{w}_i(n)|}, \quad 0 \leq i \leq M-1, \qquad (13) $$
where the operator $\mathrm{sgn}(\cdot)$ denotes the sign function. To streamline Equation (13), we utilize the first-order Taylor approximation of the exponential function in the following manner:
$$ e^{-\beta |u|} \approx \begin{cases} 1 - \beta |u|, & |u| \leq \beta^{-1} \\ 0, & \text{otherwise}. \end{cases} \qquad (14) $$
Therefore, we can approximate Equation (13) as follows:
$$ t(\hat{w}_i(n)) = \begin{cases} -\beta\left(1 + \beta \hat{w}_i(n)\right), & -\beta^{-1} \leq \hat{w}_i(n) < 0 \\ \beta\left(1 - \beta \hat{w}_i(n)\right), & 0 < \hat{w}_i(n) \leq \beta^{-1} \\ 0, & \text{otherwise}. \end{cases} \qquad (15) $$
The Hessian of Equation (11) can be derived as:
$$ \mathbf{H}(n) = \mathrm{diag}\left\{t'(\hat{w}_0(n)), t'(\hat{w}_1(n)), \ldots, t'(\hat{w}_{M-1}(n))\right\} \qquad (16) $$
with
$$ t'(\hat{w}_i(n)) = \begin{cases} -\beta^2, & |\hat{w}_i(n)| \leq \beta^{-1} \\ 0, & \text{otherwise}. \end{cases} \qquad (17) $$
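Under our reading of Equations (15)-(17), the zero-attraction gradient and the diagonal Hessian reduce to element-wise rules. The following minimal numpy sketch implements them; the function names are ours, and $\beta = 5$ follows Section 4.1.

```python
import numpy as np

def l0_gradient(w_hat, beta=5.0):
    """Zero-attraction term t(w_i) of Equation (15), applied element-wise."""
    t = np.zeros_like(w_hat)
    neg = (w_hat >= -1.0 / beta) & (w_hat < 0.0)
    pos = (w_hat > 0.0) & (w_hat <= 1.0 / beta)
    t[neg] = -beta * (1.0 + beta * w_hat[neg])
    t[pos] = beta * (1.0 - beta * w_hat[pos])
    return t

def l0_hessian(w_hat, beta=5.0):
    """Diagonal Hessian H(n) of Equation (16) with entries from Equation (17)."""
    return np.diag(np.where(np.abs(w_hat) <= 1.0 / beta, -beta ** 2, 0.0))
```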

3.2. Normalized SQNLMMN

Inspired by the design of the normalized LMS and normalized LMF [32], we propose a normalized version of the SQNLMMN algorithm by modifying Equation (8) to account for the noisy inputs $\bar{\mathbf{u}}(n)$ as follows:
$$ \hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu_1 \frac{\bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}(n)}{\|\bar{\mathbf{u}}(n)\|^2} + \mu_2 \frac{\bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}^3(n)}{\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \bar{e}^2(n)\right)} - p\,\bar{\mathbf{P}}(n)\,\mathbf{g}(n), \qquad (18) $$
where $\bar{\mathbf{u}}(n) = \mathbf{u}(n) + \boldsymbol{\eta}(n)$ denotes the noisy input regressor vector and $\boldsymbol{\eta}(n) = [\eta(n), \eta(n-1), \ldots, \eta(n-M+1)]^T$ represents the input noise vector. The noisy error signal $\bar{e}(n)$ is calculated as follows:
$$ \bar{e}(n) = d(n) - \bar{\mathbf{u}}^T(n)\,\hat{\mathbf{w}}(n) = e(n) - \boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n). \qquad (19) $$
Note that the matrix $\bar{\mathbf{P}}(n)$ is a contaminated version of $\mathbf{P}(n)$ (see Equation (9)), defined as follows:
$$ \bar{\mathbf{P}}(n) = \bar{\mathbf{B}}(n) - \alpha\gamma\, \bar{\mathbf{B}}(n) \left(\mathbf{I} + \alpha\gamma\, \mathbf{H}(n-1)\,\bar{\mathbf{B}}(n)\right)^{-1} \mathbf{H}(n-1)\,\bar{\mathbf{B}}(n), \qquad (20) $$
where $\gamma = p/\mu > 0$ governs the impact of the penalty term and $0 < \alpha \leq 0.1$ denotes a forgetting factor [13]; the matrix $\bar{\mathbf{B}}(n)$ is a contaminated version of $\mathbf{B}(n)$ (see Equation (10)), defined as follows:
$$ \bar{\mathbf{B}}(n) = \frac{1}{1-\alpha}\left[\bar{\mathbf{P}}(n-1) - \frac{\bar{\mathbf{P}}^T(n-1)\,\bar{\mathbf{u}}(n)\,\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{P}}(n-1)}{\frac{1-\alpha}{\alpha_1 + \alpha_2 \bar{e}^2(n)} + \bar{\mathbf{u}}^T(n)\,\bar{\mathbf{P}}(n-1)\,\bar{\mathbf{u}}(n)}\right]. \qquad (21) $$
Note that the difference between $\bar{e}(n)$ and $e(n)$, i.e., $-\boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n)$, results in biases during the weight-updating process.
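The pieces above assemble into a single iteration as follows. This is a sketch under our matrix-inversion-lemma reading of Equations (20) and (21), reusing l0_gradient and l0_hessian from the sketch in Section 3.1; the function name, the eps guard, and the parameter defaults (taken from Section 4.1) are ours.

```python
import numpy as np

def nsqnlmmn_step(w_hat, P_bar, u_bar, d, mu=0.5, delta=0.8,
                  alpha=0.01, gamma=2e-6, p=1e-6, beta=5.0, eps=1e-12):
    """One iteration of the normalized SQNLMMN recursion, Equations (18)-(21)."""
    M = w_hat.size
    mu1, mu2 = mu * delta, mu * (1.0 - delta)
    a1, a2 = delta * alpha, 3.0 * (1.0 - delta) * alpha
    e_bar = d - u_bar @ w_hat                      # noisy error, Equation (19)
    # B_bar(n), Equation (21)
    Ptu = P_bar.T @ u_bar
    denom = (1.0 - alpha) / (a1 + a2 * e_bar ** 2) + u_bar @ (P_bar @ u_bar)
    B_bar = (P_bar - np.outer(Ptu, Ptu) / denom) / (1.0 - alpha)
    # P_bar(n), Equation (20), in matrix-inversion-lemma form
    HB = l0_hessian(w_hat, beta) @ B_bar
    P_bar = B_bar - alpha * gamma * B_bar @ np.linalg.solve(
        np.eye(M) + alpha * gamma * HB, HB)
    # weight update, Equation (18)
    nu = u_bar @ u_bar + eps
    Pu = P_bar @ u_bar
    w_hat = (w_hat + mu1 * Pu * e_bar / nu
             + mu2 * Pu * e_bar ** 3 / (nu * (nu + e_bar ** 2))
             - p * (P_bar @ l0_gradient(w_hat, beta)))
    return w_hat, P_bar, e_bar
```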

3.3. Bias Compensation Design

To compensate for the bias of the normalized SQNLMMN algorithm, we introduce a bias-compensation vector $\mathbf{b}(n) \in \mathbb{R}^{M \times 1}$ into the weight-updating recursion and rewrite Equation (18) as follows:
$$ \hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \Delta\hat{\mathbf{w}}(n) + \mathbf{b}(n), \qquad (22) $$
with
$$ \Delta\hat{\mathbf{w}}(n) = \mu_1 \frac{\bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}(n)}{\|\bar{\mathbf{u}}(n)\|^2} + \mu_2 \frac{\bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}^3(n)}{\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \bar{e}^2(n)\right)} - p\,\bar{\mathbf{P}}(n)\,\mathbf{g}(n). \qquad (23) $$
We further define the weight estimation error vector as follows:
$$ \tilde{\mathbf{w}}(n) = \hat{\mathbf{w}}(n) - \mathbf{w}(n). \qquad (24) $$
By combining Equations (22) and (24), we then have the following recursion
$$ \tilde{\mathbf{w}}(n+1) = \tilde{\mathbf{w}}(n) + \Delta\hat{\mathbf{w}}(n) + \mathbf{b}(n). \qquad (25) $$
It has been reported that the sparsity term in Equation (23), i.e., $-p\,\bar{\mathbf{P}}(n)\,\mathbf{g}(n)$, should be ignored when deriving the bias-compensation term $\mathbf{b}(n)$; otherwise, the derived vector $\mathbf{b}(n)$ would compensate for the bias caused by both the input noise and this term [28]. Hence, the recursion for weight updating can be formulated as follows:
$$ \tilde{\mathbf{w}}(n+1) = \tilde{\mathbf{w}}(n) + \mu_1 \frac{\bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}(n)}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)} + \mu_2 \frac{\bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}^3(n)}{\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \bar{e}^2(n)\right)} + \mathbf{b}(n). \qquad (26) $$
Given the noisy input vector $\bar{\mathbf{u}}(n)$, we then derive $\mathbf{b}(n)$ based on the unbiased criterion as follows:
$$ E\left[\tilde{\mathbf{w}}(n+1) \,\middle|\, \bar{\mathbf{u}}(n)\right] = \mathbf{0} \quad \text{whenever} \quad E\left[\tilde{\mathbf{w}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] = \mathbf{0}. \qquad (27) $$
Furthermore, to simplify the analysis, two commonly used assumptions have been employed [33] as follows:
Assumption 1.
The input noise $\eta(n)$ and the background noise $v(n)$ are zero-mean AGWN, and the ratio $\rho = \sigma_v^2 / \sigma_\eta^2$ is known a priori.
Assumption 2.
The signals $\boldsymbol{\eta}(n)$, $v(n)$, $\Omega(n)$, $\mathbf{u}(n)$, and $\tilde{\mathbf{w}}(n)$ are statistically independent.
By taking the expectation of both sides of Equation (26) conditioned on $\bar{\mathbf{u}}(n)$ and assuming $E[\tilde{\mathbf{w}}(n) \,|\, \bar{\mathbf{u}}(n)] = \mathbf{0}$, we have
$$ E\left[\tilde{\mathbf{w}}(n+1) \,\middle|\, \bar{\mathbf{u}}(n)\right] = E\left[\mu_1 \frac{\bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}(n)}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)} \,\middle|\, \bar{\mathbf{u}}(n)\right] + E\left[\mu_2 \frac{\bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}^3(n)}{\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \bar{e}^2(n)\right)} \,\middle|\, \bar{\mathbf{u}}(n)\right] + E\left[\mathbf{b}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]. \qquad (28) $$
By substituting $\bar{\mathbf{u}}(n) = \mathbf{u}(n) + \boldsymbol{\eta}(n)$ and $\bar{e}(n) = e(n) - \boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n)$ into the first term on the right-hand side (RHS) of Equation (28), we have
$$ E\left[\mu_1 \frac{\bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}(n)}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)} \,\middle|\, \bar{\mathbf{u}}(n)\right] = E\left[\mu_1 \frac{\bar{\mathbf{P}}(n)\,\mathbf{u}(n)\,e(n)}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)} \,\middle|\, \bar{\mathbf{u}}(n)\right] - E\left[\mu_1 \frac{\bar{\mathbf{P}}(n)\,\boldsymbol{\eta}(n)\,\boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n)}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)} \,\middle|\, \bar{\mathbf{u}}(n)\right], \qquad (29) $$
where
$$ E\left[\mu_1 \frac{\bar{\mathbf{P}}(n)\,\mathbf{u}(n)\,e(n)}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)} \,\middle|\, \bar{\mathbf{u}}(n)\right] = \mu_1 \frac{E\left[\bar{\mathbf{P}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)}\, E\left[\mathbf{u}(n)\left(-\mathbf{u}^T(n)\,\tilde{\mathbf{w}}(n) + v(n)\right) \,\middle|\, \bar{\mathbf{u}}(n)\right] = \mu_1 \frac{E\left[\bar{\mathbf{P}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)} \left(-E\left[\mathbf{u}(n)\,\mathbf{u}^T(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] E\left[\tilde{\mathbf{w}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] + E\left[\mathbf{u}(n)\,v(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]\right) = \mathbf{0} \qquad (30) $$
and
$$ E\left[\mu_1 \frac{\bar{\mathbf{P}}(n)\,\boldsymbol{\eta}(n)\,\boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n)}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)} \,\middle|\, \bar{\mathbf{u}}(n)\right] = \mu_1 \frac{E\left[\bar{\mathbf{P}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)}\, E\left[\boldsymbol{\eta}(n)\,\boldsymbol{\eta}^T(n)\right] E\left[\hat{\mathbf{w}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] = \mu_1 \frac{\sigma_\eta^2\, E\left[\bar{\mathbf{P}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)}\, E\left[\hat{\mathbf{w}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]. \qquad (31) $$
Note that, provided the condition $\|\bar{\mathbf{u}}(n)\|^2 \gg \bar{e}^2(n)$ holds and the deviation of $\|\bar{\mathbf{u}}(n)\|^2$ is small, the second term on the RHS of Equation (28) can be approximated as follows [33]:
$$ E\left[\mu_2 \frac{\bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}^3(n)}{\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \bar{e}^2(n)\right)} \,\middle|\, \bar{\mathbf{u}}(n)\right] \approx \frac{E\left[\mu_2\, \bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}^3(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]}{E\left[\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \bar{e}^2(n)\right) \,\middle|\, \bar{\mathbf{u}}(n)\right]}. \qquad (32) $$
Thus, we can rewrite the numerator of Equation (32) as follows:
$$ \begin{aligned} E\left[\mu_2\, \bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,\bar{e}^3(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] &= E\left[\mu_2\, \bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\left(e(n) - \boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n)\right)^3 \,\middle|\, \bar{\mathbf{u}}(n)\right] \\ &= E\left[\mu_2\, \bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,e^3(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] + E\left[\mu_2\, \bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\,3 e(n)\left(\boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n)\right)^2 \,\middle|\, \bar{\mathbf{u}}(n)\right] - E\left[\mu_2\, \bar{\mathbf{P}}(n)\,\bar{\mathbf{u}}(n)\left(\boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n)\right)^3 \,\middle|\, \bar{\mathbf{u}}(n)\right] \\ &= -\sigma_\eta^4\, E\left[\mu_2\, \bar{\mathbf{P}}(n)\,\hat{\mathbf{w}}(n)\,\hat{\mathbf{w}}^T(n)\,\hat{\mathbf{w}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]. \end{aligned} \qquad (33) $$
Furthermore, we can rewrite the denominator of the RHS of Equation (32) as follows:
$$ \begin{aligned} E\left[\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \bar{e}^2(n)\right) \,\middle|\, \bar{\mathbf{u}}(n)\right] &= E\left[\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + e^2(n) - 2 e(n)\,\boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n) + \left(\boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n)\right)^2\right) \,\middle|\, \bar{\mathbf{u}}(n)\right] \\ &= \|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + E\left[e^2(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] + E\left[\left(\boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n)\right)^2 \,\middle|\, \bar{\mathbf{u}}(n)\right]\right), \end{aligned} \qquad (34) $$
where
$$ E\left[e^2(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] = E\left[\left(\mathbf{u}^T(n)\,\tilde{\mathbf{w}}(n)\right)^2 - 2\,\mathbf{u}^T(n)\,\tilde{\mathbf{w}}(n)\,v(n) + v^2(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] = \sigma_v^2 \qquad (35) $$
and
$$ E\left[\left(\boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n)\right)^2 \,\middle|\, \bar{\mathbf{u}}(n)\right] = E\left[\hat{\mathbf{w}}^T(n)\,\boldsymbol{\eta}(n)\,\boldsymbol{\eta}^T(n)\,\hat{\mathbf{w}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] = \sigma_\eta^2\, E\left[\hat{\mathbf{w}}^T(n)\,\hat{\mathbf{w}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]. \qquad (36) $$
Combining the results of Equations (29)-(36) and substituting them into Equation (28), we obtain
$$ E\left[\mathbf{b}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] = \mu_1 \frac{\sigma_\eta^2\, E\left[\bar{\mathbf{P}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)}\, E\left[\hat{\mathbf{w}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] + \mu_2 \frac{\sigma_\eta^4\, E\left[\bar{\mathbf{P}}(n)\,\hat{\mathbf{w}}(n)\,\hat{\mathbf{w}}^T(n)\,\hat{\mathbf{w}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]}{\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \sigma_v^2 + \sigma_\eta^2\, E\left[\hat{\mathbf{w}}^T(n)\,\hat{\mathbf{w}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right]\right)}. \qquad (37) $$
Using stochastic approximation [34], we derive the bias-compensation vector as follows:
$$ \mathbf{b}(n) = \mu_1 \frac{\sigma_\eta^2\, E\left[\bar{\mathbf{P}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] \hat{\mathbf{w}}(n)}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)} + \mu_2 \frac{\sigma_\eta^4\, E\left[\bar{\mathbf{P}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] \hat{\mathbf{w}}(n)\,\hat{\mathbf{w}}^T(n)\,\hat{\mathbf{w}}(n)}{\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \sigma_v^2 + \sigma_\eta^2\, \hat{\mathbf{w}}^T(n)\,\hat{\mathbf{w}}(n)\right)} \qquad (38) $$
with
$$ E\left[\bar{\mathbf{P}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] := \mathbf{P}_{bias}(n) = \mathbf{B}_{bias}(n) - \alpha\gamma\, \mathbf{B}_{bias}(n) \left(\mathbf{I} + \alpha\gamma\, \mathbf{H}(n-1)\,\mathbf{B}_{bias}(n)\right)^{-1} \mathbf{H}(n-1)\,\mathbf{B}_{bias}(n), \qquad (39) $$
where
$$ \mathbf{B}_{bias}(n) := E\left[\bar{\mathbf{B}}(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] = \frac{1}{1-\alpha}\left[\mathbf{P}_{bias}(n-1) - \frac{\mathbf{P}_{bias}^T(n-1)\,\bar{\mathbf{u}}(n)\,\bar{\mathbf{u}}^T(n)\,\mathbf{P}_{bias}(n-1)}{\frac{1-\alpha}{\alpha_1 + \alpha_2 \left(\sigma_v^2 + \sigma_\eta^2\, \hat{\mathbf{w}}^T(n)\,\hat{\mathbf{w}}(n)\right)} + \bar{\mathbf{u}}^T(n)\,\mathbf{P}_{bias}(n-1)\,\bar{\mathbf{u}}(n)}\right]. \qquad (40) $$
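In code, the conditional expectation $E[\bar{\mathbf{P}}(n) \,|\, \bar{\mathbf{u}}(n)]$ is simply the recursively maintained matrix $\mathbf{P}_{bias}(n)$ of Equations (39) and (40). A minimal sketch of Equation (38) under that substitution follows; the function name and the eps guard are ours.

```python
import numpy as np

def bias_vector(w_hat, P_bias, u_bar, sigma_eta2, sigma_v2,
                mu=0.5, delta=0.8, eps=1e-12):
    """Bias-compensation vector b(n) of Equation (38)."""
    mu1, mu2 = mu * delta, mu * (1.0 - delta)
    nu = u_bar @ u_bar + eps
    wtw = w_hat @ w_hat                            # w_hat^T(n) w_hat(n)
    return (mu1 * sigma_eta2 * (P_bias @ w_hat) / nu
            + mu2 * sigma_eta2 ** 2 * (P_bias @ w_hat) * wtw
            / (nu * (nu + sigma_v2 + sigma_eta2 * wtw)))
```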

3.4. Variable Mixing Parameter Design

For the conventional SQNLMMN algorithm, a fixed mixing parameter $\delta = 0.8$ was suggested to achieve the best performance in terms of convergence rate. However, a small mixing parameter, say $\delta = 0.2$, converges more slowly but achieves a lower steady-state misadjustment than a large $\delta$. This inspires us to use a variable mixing parameter to attain a fast convergence rate and small misadjustment simultaneously.
Figure 2 depicts the block diagram of the variable mixing parameter scheme design. Two adaptive filters are combined as follows:
$$ \hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}_1(n+1)\,\lambda_C(n+1) + \hat{\mathbf{w}}_2(n+1)\left(1 - \lambda_C(n+1)\right), \qquad (41) $$
where $\hat{\mathbf{w}}_1(n)$ and $\hat{\mathbf{w}}_2(n)$ are the fast and slow filters, respectively, i.e., $\delta_1 > \delta_2$, and $\lambda_C(n+1)$ is the smoothed combination factor. Referring to Equation (18), the adaptation recursion for each filter can be expressed as follows:
$$ \hat{\mathbf{w}}_i(n+1) = \hat{\mathbf{w}}_i(n) + \Delta\hat{\mathbf{w}}_i(n) + \kappa\, \mathbf{b}_i(n), \qquad (42) $$
with
$$ \Delta\hat{\mathbf{w}}_i(n) = \mu_{1,i} \frac{\bar{\mathbf{P}}_i(n)\,\bar{\mathbf{u}}(n)\,\bar{e}_i(n)}{\|\bar{\mathbf{u}}(n)\|^2} + \mu_{2,i} \frac{\bar{\mathbf{P}}_i(n)\,\bar{\mathbf{u}}(n)\,\bar{e}_i^3(n)}{\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \bar{e}_i^2(n)\right)} - p\,\bar{\mathbf{P}}_i(n)\,\mathbf{g}_i(n), \qquad (43) $$
where $\mathbf{g}_i(n)$ can be calculated by Equation (12) for $\hat{\mathbf{w}}_i(n)$, and
$$ \bar{\mathbf{P}}_i(n) = \bar{\mathbf{B}}_i(n) - \alpha\gamma\, \bar{\mathbf{B}}_i(n) \left(\mathbf{I} + \alpha\gamma\, \mathbf{H}_i(n-1)\,\bar{\mathbf{B}}_i(n)\right)^{-1} \mathbf{H}_i(n-1)\,\bar{\mathbf{B}}_i(n), \qquad (44) $$
where $\mathbf{H}_i(n-1)$ can be calculated by Equation (16) for $\hat{\mathbf{w}}_i(n-1)$; the scaling factor $0 \leq \kappa < 1$ in Equation (42) is used to mitigate the interaction between $\Delta\hat{\mathbf{w}}_i(n)$ and $\mathbf{b}_i(n)$; $\mu_{1,i} = \mu\,\delta_i$ and $\mu_{2,i} = \mu\,(1 - \delta_i)$; and the matrix $\bar{\mathbf{B}}_i(n)$ is a contaminated version of $\mathbf{B}(n)$ (see Equation (10)), defined as follows:
$$ \bar{\mathbf{B}}_i(n) = \frac{1}{1-\alpha}\left[\bar{\mathbf{P}}_i(n-1) - \frac{\bar{\mathbf{P}}_i^T(n-1)\,\bar{\mathbf{u}}(n)\,\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{P}}_i(n-1)}{\frac{1-\alpha}{\alpha_{1,i} + \alpha_{2,i}\, \bar{e}_i^2(n)} + \bar{\mathbf{u}}^T(n)\,\bar{\mathbf{P}}_i(n-1)\,\bar{\mathbf{u}}(n)}\right], \qquad (45) $$
where $\alpha_{1,i} = \delta_i\,\alpha$ and $\alpha_{2,i} = 3(1 - \delta_i)\,\alpha$. The bias-compensation vector $\mathbf{b}_i(n)$ associated with $\hat{\mathbf{w}}_i(n)$ can be expressed as follows:
$$ \mathbf{b}_i(n) = \mu_{1,i} \frac{\sigma_\eta^2\, E\left[\bar{\mathbf{P}}_i(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] \hat{\mathbf{w}}_i(n)}{\bar{\mathbf{u}}^T(n)\,\bar{\mathbf{u}}(n)} + \mu_{2,i} \frac{\sigma_\eta^4\, E\left[\bar{\mathbf{P}}_i(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] \hat{\mathbf{w}}_i(n)\,\hat{\mathbf{w}}_i^T(n)\,\hat{\mathbf{w}}_i(n)}{\|\bar{\mathbf{u}}(n)\|^2 \left(\|\bar{\mathbf{u}}(n)\|^2 + \sigma_v^2 + \sigma_\eta^2\, \hat{\mathbf{w}}_i^T(n)\,\hat{\mathbf{w}}_i(n)\right)} \qquad (46) $$
with
$$ E\left[\bar{\mathbf{P}}_i(n) \,\middle|\, \bar{\mathbf{u}}(n)\right] := \mathbf{P}_{bias,i}(n) = \mathbf{B}_{bias,i}(n) - \alpha\gamma\, \mathbf{B}_{bias,i}(n) \left(\mathbf{I} + \alpha\gamma\, \mathbf{H}_i(n-1)\,\mathbf{B}_{bias,i}(n)\right)^{-1} \mathbf{H}_i(n-1)\,\mathbf{B}_{bias,i}(n), \qquad (47) $$
where
$$ \mathbf{B}_{bias,i}(n) = \frac{1}{1-\alpha}\left[\mathbf{P}_{bias,i}(n-1) - \frac{\mathbf{P}_{bias,i}^T(n-1)\,\bar{\mathbf{u}}(n)\,\bar{\mathbf{u}}^T(n)\,\mathbf{P}_{bias,i}(n-1)}{\frac{1-\alpha}{\alpha_{1,i} + \alpha_{2,i}\left(\sigma_v^2 + \sigma_\eta^2\, \hat{\mathbf{w}}_i^T(n)\,\hat{\mathbf{w}}_i(n)\right)} + \bar{\mathbf{u}}^T(n)\,\mathbf{P}_{bias,i}(n-1)\,\bar{\mathbf{u}}(n)}\right]. \qquad (48) $$
The smoothed combination factor $\lambda_C$ is given by
$$ \lambda_C(n+1) = \frac{1}{C} \sum_{k=n-C+2}^{n+1} \lambda(k), \qquad (49) $$
where $C$ is the window length used to smooth the combination factor $\lambda$, which is calculated as follows [35]:
$$ \lambda(n+1) = \mathrm{sgm}\left(a(n+1)\right) = \frac{1}{1 + e^{-a(n+1)}} \qquad (50) $$
with
$$ a(n+1) = a(n) + \mu_a\, \mathrm{sgn}\left(\bar{e}(n)\right) \left(\hat{y}_1(n) - \hat{y}_2(n)\right) \lambda(n) \left(1 - \lambda(n)\right), \qquad (51) $$
where $\mu_a$ is the step size for the recursion of $a(n)$, and $\mathrm{sgm}(\cdot)$ and $\mathrm{sgn}(\cdot)$ denote the sigmoid and sign functions, respectively. Note that we confine $|a(n+1)| \leq a^+$ and check this condition every $N_0$ iterations: we force $\lambda(n+1) = 0$ when $a(n+1) \leq -a^+$ and set $\lambda(n+1) = 1$ when $a(n+1) \geq a^+$ [36].
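A compact sketch of one update of the combination factor per Equations (49)-(51) appears below; for brevity the every-$N_0$-iterations check is folded into each call, and the buffer handling and names are ours.

```python
import numpy as np

def update_mixing(a, lam_buf, e_bar, y1, y2, mu_a=5.0, a_plus=8.0):
    """One update of a(n), lambda(n), and lambda_C(n), Equations (49)-(51)."""
    lam = 1.0 / (1.0 + np.exp(-a))                 # sigmoid, Equation (50)
    # sign-error stochastic-gradient update of a(n), Equation (51)
    a += mu_a * np.sign(e_bar) * (y1 - y2) * lam * (1.0 - lam)
    a = float(np.clip(a, -a_plus, a_plus))         # confine |a(n+1)| <= a^+
    # force lambda to the endpoints at the clipping boundaries
    lam = 0.0 if a <= -a_plus else 1.0 if a >= a_plus else 1.0 / (1.0 + np.exp(-a))
    lam_buf = np.roll(lam_buf, -1)
    lam_buf[-1] = lam
    return a, lam_buf, lam_buf.mean()              # lambda_C, Equation (49)
```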

3.5. Robustness Consideration

To mitigate the impact of the impulse noise $\Omega(n)$ on the convergence of the adaptive filters $\hat{\mathbf{w}}_i(n)$ in the proposed BC-SNQNLM-VMN algorithm, we apply the modified Huber function $\psi(\cdot)$ to $\bar{e}_i(n)$ as follows [37]:
$$ \psi\left(\bar{e}_i(n)\right) = \begin{cases} \bar{e}_i(n), & -\xi_i \leq \bar{e}_i(n) \leq \xi_i \\ 0, & \text{otherwise}, \end{cases} \qquad (52) $$
where $\xi_i$ is a threshold computed as
$$ \xi_i = k_\xi\, \hat{\sigma}_{\bar{e}_i}(n), \qquad (53) $$
with $k_\xi = 2.576$, and the error power is estimated robustly via the recursion
$$ \hat{\sigma}_{\bar{e}_i}^2(n) = \lambda_\sigma\, \hat{\sigma}_{\bar{e}_i}^2(n-1) + c_1 (1 - \lambda_\sigma)\, \mathrm{med}\left(A_{\bar{e}_i}(n)\right), \qquad (54) $$
where $\mathrm{med}(\cdot)$ denotes the median operation and $A_{\bar{e}_i}(n)$ is an observation vector of $\bar{e}_i^2(n)$ with length $N_w$, defined as follows:
$$ A_{\bar{e}_i}(n) = \left[\bar{e}_i^2(n), \ldots, \bar{e}_i^2(n - N_w + 1)\right]. \qquad (55) $$
Note that we choose $c_1 = 2.13$, $N_w = 9$, and $\lambda_\sigma = 0.99$ in the computer simulations.
Furthermore, an estimate of $\sigma_\eta^2$ is required to calculate the bias-compensation vector $\mathbf{b}_i(n)$. For robustness, it has been reported that $\sigma_\eta^2$ can be estimated as follows [22,24]:
$$ \hat{\sigma}_{\eta,i}^2(n) = \begin{cases} \bar{e}_i^2(n) / \left(\|\hat{\mathbf{w}}_i(n)\|_2^2 + \rho\right), & \text{if } \bar{e}_i^2(n) \leq 2\, \|\hat{\mathbf{w}}_i(n)\|_2^2\, \|\bar{\mathbf{u}}(n)\|_2^2 \\ \hat{\sigma}_{\eta,i}^2(n-1), & \text{otherwise}, \end{cases} \qquad (56) $$
where $\rho$, the ratio of $\sigma_v^2$ to $\sigma_\eta^2$, is assumed to be available, as in [38].
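The following sketch implements the clipping rule of Equations (52)-(55) and the guarded input-noise power estimate of Equation (56) as we read them; the function names are ours, and the constants follow the values quoted above.

```python
import numpy as np

def huber_error(e_bar, sigma2_prev, e2_window, k_xi=2.576,
                lam_sigma=0.99, c1=2.13):
    """Modified Huber clipping, Equations (52)-(55)."""
    # robust recursive error-power estimate, Equation (54)
    sigma2 = lam_sigma * sigma2_prev + c1 * (1.0 - lam_sigma) * np.median(e2_window)
    xi = k_xi * np.sqrt(sigma2)                    # threshold, Equation (53)
    psi = e_bar if abs(e_bar) <= xi else 0.0       # Equation (52)
    return psi, sigma2

def input_noise_power(sigma_eta2_prev, e_bar, w_hat, u_bar, rho):
    """Guarded estimate of sigma_eta^2, Equation (56)."""
    if e_bar ** 2 <= 2.0 * (w_hat @ w_hat) * (u_bar @ u_bar):
        return e_bar ** 2 / (w_hat @ w_hat + rho)
    return sigma_eta2_prev                         # freeze under suspected impulses
```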

3.6. Computational Cost Analysis

Table 1 lists the major computational costs of the proposed BC-SNQNLM-VMN algorithm in terms of the required numbers of adders (Adds) and multipliers (Muls). Moreover, Table 2 compares the dominating computational costs with those of the comparative works; for simplicity, we focus on the dominating terms of the total numbers of required adders and multipliers. Owing to the high computational cost inherited from the original SQNLMMN algorithm, the proposed method incurs a much higher computational cost than the comparative works.

4. Simulation Results

4.1. Setup

Computer simulations evaluated the effectiveness of the proposed algorithm. The unknown sparse system $\mathbf{w}$ comprises 32 taps ($M = 32$), of which $K = 8$ are nonzero, i.e., the sparsity is 0.75. We randomly choose the positions of the nonzero taps among the $M$ taps, and their values follow a standard Gaussian distribution. The input signal is modeled as standard AGWN. The SNR of the input signal is $\mathrm{SNR}_\eta = 10$ dB (see Equation (5)), and the SNR of the observed signal is $\mathrm{SNR}_v = 30$ dB (see Equation (2)). For the BG impulse noise model, we choose the SNR of the additive impulse noise as $\mathrm{SNR}_\Omega = -30$ dB (see Equation (4)). Moreover, we designate the occurrence probability of the BG impulses as $P_r = 10^{-3}$ for weak BG noise and $P_r = 6 \times 10^{-2}$ for strong BG noise. For the alpha-stable impulse noise model, we define $V = (1.8, 0, 0.1, 0)$ for weak alpha-stable noise and $V = (1.5, 0, 0.1, 0)$ for strong alpha-stable noise [11]. Other main parameters are set as follows: $\gamma = 2 \times 10^{-6}$, $\mu = 0.5$, $p = 10^{-6}$, $\kappa = 0.8$, $\beta = 5$, $a^+ = 8$, $\mu_a = 5$, $N_0 = 2$, and $\alpha = 0.01$.
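For instance, a sparse system matching this description can be drawn as follows (a short sketch; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 32, 8
w = np.zeros(M)
support = rng.choice(M, size=K, replace=False)   # random positions of the nonzero taps
w[support] = rng.standard_normal(K)              # tap values from a standard Gaussian
```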
The fast filter $\hat{\mathbf{w}}_1(n)$ with $\delta_1 = 0.8$ and the slow filter $\hat{\mathbf{w}}_2(n)$ with $\delta_2 = 0.2$ were combined to form the filter $\hat{\mathbf{w}}(n)$ (see Equation (41)). We used a vector of length $C$ to store the $C$ consecutive values of the instantaneous $\lambda(k)$ (see Equation (49)); the initial value of each element in this vector was 1, which biases $\lambda_C$ toward 1 during the initial adjustment phase, i.e., the combined filter behaves like the fast filter. The performance metric is the normalized mean-square deviation (NMSD), calculated as follows:
$$ \mathrm{NMSD}(n) = 10 \log_{10} \frac{\|\mathbf{w} - \hat{\mathbf{w}}(n)\|^2}{\|\mathbf{w}\|^2}. \qquad (57) $$
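Equation (57) translates directly into a one-line helper (a sketch; the name is ours):

```python
import numpy as np

def nmsd_db(w, w_hat):
    """Normalized mean-square deviation of Equation (57), in dB."""
    return 10.0 * np.log10(np.sum((w - w_hat) ** 2) / np.sum(w ** 2))
```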
We employed MATLAB® R2022a on a Windows 10 64-bit operating system to conduct the simulations on a computer equipped with an Intel® Core™ i7-13700K CPU (Intel, Santa Clara, CA, USA) and 128 GB of DDR4 RAM. The NMSD learning curves and the evolution of the mixing parameter shown in the simulation results were averaged over 100 independent Monte Carlo trials. The comparative works were the BC-NLMS [22], BC-NLMF [24], BC-LMMN [26], BC-CIM-LGMN [27], and BC-CIM-PNLMS [23] algorithms.

4.2. Results

4.2.1. Baseline: No Impulse Noise

We compare the NMSD learning curves of various bias-compensated adaptive filtering algorithms in the absence of impulse noise. As shown in Figure 3, our proposed method achieves the lowest NMSD in the steady state, outperforming the comparative works by 2.5–3.5 dB. Note that $C = 1$ (see Equation (49)) implies that no smoothing was applied. The results confirm that combining two filters with a smoothed factor (see Equation (41)) exhibits smoother convergence behavior in the transient stage ($0.5 \times 10^4 \leq n \leq 0.9 \times 10^4$). Note that: (1) the step sizes are chosen so that the convergence rates of all algorithms are at the same level for a fair comparison; and (2) the NMSD loss is about 12 dB without bias compensation.

4.2.2. Evaluation of Variable Mixing Parameter Method

As shown in Figure 4, we evaluate the effectiveness of the proposed variable mixing parameter scheme. In Figure 4a,b, the additive impulse noise corresponds to weak and strong BG impulse noise, respectively. The results show that our proposed scheme for combining the mixing parameter $\delta$ worked well under both levels of additive impulse noise. Referring to Figure 4a, we can observe the variation of the mixing parameter in the weak BG impulse noise case as follows:
  • First, $\lambda_C$ stays at 1 in the early stage ($n \in [0, 0.5 \times 10^4]$), which implies that the proposed method behaves like $\delta = 0.8$.
  • Then, $\lambda_C$ decreases gradually during the transient stage ($n \in (0.5 \times 10^4, 1.25 \times 10^4]$), which implies that the proposed method gradually transitions from $\delta = 0.8$ to $\delta = 0.2$.
  • Finally, $\lambda_C$ stays around 0 in the steady-state stage ($n \in (1.25 \times 10^4, 2.5 \times 10^4]$).
Figure 4. NMSD learning curves (left) and evolution of $\lambda_C$ (right) for the BG impulse noise model: (a) weak ($P_r = 0.001$) and (b) strong ($P_r = 0.06$).
Note that because the initial values of the length-$C$ vector used to calculate $\lambda_C$ are all set to 1, we observed a slight decrease followed by an increase in $\lambda_C$ at the beginning of the adaptation process. Similar results were observed in the strong BG impulse noise case (see Figure 4b).
In Figure 5, we evaluate the impact of additive alpha-stable impulse noise in both the weak (Figure 5a) and strong (Figure 5b) cases. In this scenario, strong alpha-stable impulse noise makes the NMSD learning curves fluctuate more in the steady state. In addition, the evolution of $\lambda_C$ shows more fluctuations at the beginning of the transient stage and reaches its steady state quickly, which implies that the proposed method transitions from $\delta = 0.8$ to $\delta = 0.2$ earlier than in the weak impulse noise case. Compared with the baseline, the smoothed combination method exhibits fewer fluctuations, especially in the strong impulse noise cases. Thus, we use $C = 10^3$ in the following simulations unless otherwise stated. The results also show the impact of the noisy input on the resulting NMSD: without bias compensation, the steady-state NMSD loss is about 12 dB, the same as in the baseline. Therefore, we can confirm the robustness of our proposed method.

4.2.3. Performance Comparisons in the Presence of Impulse Noise

As shown in Figure 6, we compare our proposed method with the comparative works for the BG impulse noise case. In the weak BG impulse noise case (see Figure 6a), our proposed method achieves the lowest NMSD, improving upon the comparative works by 3 dB to 15 dB. Note that BC-CIM-PNLMS does not consider the impact of impulse noise, which results in the worst NMSD performance. Moreover, the BC-CIM-LGMN, BC-LMMN, and BC-NLMF algorithms diverged in the strong BG impulse noise case (see Figure 6b). In this case, only BC-NLMS and our proposed method still function well, and our method improves the steady-state NMSD by 3 dB.
As shown in Figure 7, we compare our proposed method with the comparative works for the alpha-stable impulse noise case. In the weak alpha-stable impulse noise case (see Figure 7a), our proposed method achieves the lowest NMSD, improving upon the comparative works by 4.5 dB to 7 dB. Note that BC-CIM-PNLMS does not consider the impact of impulse noise and exhibits spikes in its NMSD learning curve. In addition, the comparative works perform poorly in the strong alpha-stable impulse noise case (see Figure 7b); in this case, BC-CIM-PNLMS exhibits stronger spikes than in the weak case. In contrast, our proposed method shows the lowest steady-state NMSD loss (about 0.9 dB) among all the methods.

5. Conclusions and Future Prospects

Noisy input signals result in a significant NMSD loss even in the absence of impulse noise. In this paper, we have presented a robust bias compensation method for the SNQNLM algorithm. Furthermore, we have proposed a variable mixing-norm method to attain a high convergence rate and low misadjustment during adaptation. Simulation results have confirmed that our proposed BC-SNQNLM-VMN algorithm outperforms the comparative works in terms of NMSD by 3 to 15 dB for BG impulse noise and by 4.5 to 7 dB for alpha-stable impulse noise. Additionally, we outline potential pathways for overcoming the remaining challenges and broadening the applicability of our methodology:
  • The interaction between the weight-vector correction term $\Delta\hat{\mathbf{w}}_i(n)$ and the bias-compensation term $\kappa\,\mathbf{b}_i(n)$ (see Equation (42)): We have employed a constant scaling factor, $\kappa$, to mitigate this interaction. Nonetheless, devising a dynamic algorithm for adapting $\kappa$ would enhance the robustness of bias compensation methods to input noise that varies over time or to time-varying unknown systems.
  • Extension to general mixed-norm algorithms: While our study focused on the $L_2$ and $L_4$ norms, the methodology can be extended to encompass bias compensation in adaptive filtering algorithms utilizing a mix of $L_p$ and $L_q$ norms, where $p$ and $q$ are positive parameters.
  • Bias compensation for non-linear adaptive filtering systems: Addressing bias compensation becomes notably intricate in non-linear adaptive filtering systems. A future avenue of research involves developing techniques to estimate biases induced by noisy inputs in such non-linear contexts.

Author Contributions

Conceptualization, Y.-R.C., H.-E.H. and G.Q.; methodology, Y.-R.C. and H.-E.H.; software, Y.-R.C. and H.-E.H.; validation, Y.-R.C. and G.Q.; formal analysis, Y.-R.C. and H.-E.H.; investigation, H.-E.H.; resources, Y.-R.C. and H.-E.H.; data curation, Y.-R.C. and H.-E.H.; supervision, Y.-R.C.; writing—original draft preparation, Y.-R.C. and H.-E.H.; writing—review and editing, Y.-R.C., H.-E.H. and G.Q.; project administration, Y.-R.C.; funding acquisition, Y.-R.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Science and Technology Council, Taiwan (NSTC) under Grant 112-2221-E-197-022.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to thank all the reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Diniz, P.S.R. Adaptive Filtering Algorithms and Practical Implementation, 5th ed.; Springer: New York, NY, USA, 2020. [Google Scholar]
  2. Tan, G.; Yan, S.; Yang, B. Impulsive Noise Suppression and Concatenated Code for OFDM Underwater Acoustic Communications. In Proceedings of the 2022 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Xi’an, China, 25–27 October 2022; pp. 1–6. [Google Scholar] [CrossRef]
  3. Wang, J.; Li, J.; Yan, S.; Shi, W.; Yang, X.; Guo, Y.; Gulliver, T.A. A Novel Underwater Acoustic Signal Denoising Algorithm for Gaussian/Non-Gaussian Impulsive Noise. IEEE Trans. Veh. Technol. 2021, 70, 429–445. [Google Scholar] [CrossRef]
  4. Diamant, R. Robust Interference Cancellation of Chirp and CW Signals for Underwater Acoustics Applications. IEEE Access 2018, 6, 4405–4415. [Google Scholar] [CrossRef]
  5. Ge, F.X.; Zhang, Y.; Li, Z.; Zhang, R. Adaptive Bubble Pulse Cancellation From Underwater Explosive Charge Acoustic Signals. IEEE J. Oceanic Eng. 2011, 36, 447–453. [Google Scholar] [CrossRef]
  6. Fieve, S.; Portala, P.; Bertel, L. A new VLF/LF atmospheric noise model. Radio Sci. 2007, 42, 1–14. [Google Scholar] [CrossRef]
  7. Rehman, I.u.; Raza, H.; Razzaq, N.; Frnda, J.; Zaidi, T.; Abbasi, W.; Anwar, M.S. A Computationally Efficient Distributed Framework for a State Space Adaptive Filter for the Removal of PLI from Cardiac Signals. Mathematics 2023, 11, 350. [Google Scholar] [CrossRef]
  8. Peng, L.; Zang, G.; Gao, Y.; Sha, N.; Xi, C. LMS Adaptive Interference Cancellation in Physical Layer Security Communication System Based on Artificial Interference. In Proceedings of the 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 25–27 May 2018; pp. 1178–1182. [Google Scholar] [CrossRef]
  9. Chen, Y.E.; Chien, Y.R.; Tsao, H.W. Chirp-Like Jamming Mitigation for GPS Receivers Using Wavelet-Packet-Transform-Assisted Adaptive Filters. In Proceedings of the 2016 International Computer Symposium (ICS), Chiayi, Taiwan, 15–17 December 2016; pp. 458–461. [Google Scholar] [CrossRef]
  10. Yu, H.C.; Chien, Y.R.; Tsao, H.W. A study of impulsive noise immunity for wavelet-OFDM-based power line communications. In Proceedings of the 2016 International Conference On Communication Problem-Solving (ICCP), Taipei, Taiwan, 7–9 September 2016; pp. 1–2. [Google Scholar] [CrossRef]
  11. Chien, Y.R.; Wu, S.T.; Tsao, H.W.; Diniz, P.S.R. Correntropy-Based Data Selective Adaptive Filtering. IEEE Trans. Circuits Syst. I Regul. Pap. 2024, 71, 754–766. [Google Scholar] [CrossRef]
  12. Chien, Y.R.; Xu, S.S.D.; Ho, D.Y. Combined boosted variable step-size affine projection sign algorithm for environments with impulsive noise. Digit. Signal Process. 2023, 140, 104110. [Google Scholar] [CrossRef]
  13. Maleki, N.; Azghani, M. Sparse Mixed Norm Adaptive Filtering Technique. Circuits Syst. Signal Process. 2020, 39, 5758–5775. [Google Scholar] [CrossRef]
  14. Shao, M.; Nikias, C. Signal processing with fractional lower order moments: Stable processes and their applications. Proc. IEEE 1993, 81, 986–1010. [Google Scholar] [CrossRef]
  15. Huang, F.; Song, F.; Zhang, S.; So, H.C.; Yang, J. Robust Bias-Compensated LMS Algorithm: Design, Performance Analysis and Applications. IEEE Trans. Veh. Technol. 2023, 72, 13214–13228. [Google Scholar] [CrossRef]
  16. Danaee, A.; Lamare, R.C.d.; Nascimento, V.H. Distributed Quantization-Aware RLS Learning With Bias Compensation and Coarsely Quantized Signals. IEEE Trans. Signal Process. 2022, 70, 3441–3455. [Google Scholar] [CrossRef]
  17. Walach, E.; Widrow, B. The least mean fourth (LMF) adaptive algorithm and its family. IEEE Trans. Inform. Theory 1984, 30, 275–283. [Google Scholar] [CrossRef]
  18. Balasundar, C.; Sundarabalan, C.K.; Santhanam, S.N.; Sharma, J.; Guerrero, J.M. Mixed Step Size Normalized Least Mean Fourth Algorithm of DSTATCOM Integrated Electric Vehicle Charging Station. IEEE Trans. Ind. Inform. 2023, 19, 7583–7591. [Google Scholar] [CrossRef]
  19. Papoulis, E.; Stathaki, T. A normalized robust mixed-norm adaptive algorithm for system identification. IEEE Signal Process. Lett. 2004, 11, 56–59. [Google Scholar] [CrossRef]
  20. Zerguine, A. A variable-parameter normalized mixed-norm (VPNMN) adaptive algorithm. Eurasip J. Adv. Signal Process. 2012, 2012, 55. [Google Scholar] [CrossRef]
  21. Pimenta, R.M.S.; Petraglia, M.R.; Haddad, D.B. Stability analysis of the bias compensated LMS algorithm. Digital Signal Process. 2024, 147, 104395. [Google Scholar] [CrossRef]
  22. Jung, S.M.; Park, P. Normalised least-mean-square algorithm for adaptive filtering of impulsive measurement noises and noisy inputs. Electron. Lett. 2013, 49, 1270–1272. [Google Scholar] [CrossRef]
  23. Jin, Z.; Guo, L.; Li, Y. The Bias-Compensated Proportionate NLMS Algorithm With Sparse Penalty Constraint. IEEE Access 2020, 8, 4954–4962. [Google Scholar] [CrossRef]
  24. Lee, M.; Park, T.; Park, P. Bias-Compensated Normalized Least Mean Fourth Algorithm for Adaptive Filtering of Impulsive Measurement Noises and Noisy Inputs. In Proceedings of the 2019 12th Asian Control Conference (ASCC), Kitakyushu, Japan, 9–12 June 2019; pp. 220–223. [Google Scholar]
  25. Vasundhara. Robust filtering employing bias compensated M-estimate affine-projection-like algorithm. Electron. Lett. 2020, 56, 241–243. [Google Scholar] [CrossRef]
  26. Lee, M.; Park, I.S.; Park, C.e.; Lee, H.; Park, P. Bias Compensated Least Mean Mixed-norm Adaptive Filtering Algorithm Robust to Impulsive Noises. In Proceedings of the 2020 20th International Conference on Control, Automation and Systems (ICCAS), Busan, Republic of Korea, 13–16 October 2020; pp. 652–657. [Google Scholar] [CrossRef]
  27. Ma, W.; Zheng, D.; Zhang, Z. Bias-compensated generalized mixed norm algorithm via CIM constraints for sparse system identification with noisy input. In Proceedings of the 2017 36th Chinese Control Conference (CCC), Dalian, China, 26–28 July 2017; pp. 5094–5099. [Google Scholar] [CrossRef]
  28. Yoo, J.; Shin, J.; Park, P. An Improved NLMS Algorithm in Sparse Systems Against Noisy Input Signals. IEEE Trans. Circuits Syst. Express Briefs 2015, 62, 271–275. [Google Scholar] [CrossRef]
  29. Liu, C.; Zhao, H. Efficient DOA Estimation Method Using Bias-Compensated Adaptive Filtering. IEEE Trans. Veh. Technol. 2020, 69, 13087–13097. [Google Scholar] [CrossRef]
  30. Kim, S.R.; Efron, A. Adaptive robust impulse noise filtering. IEEE Trans. Signal Process. 1995, 43, 1855–1866. [Google Scholar] [CrossRef]
  31. Gu, Y.; Jin, J.; Mei, S. l0 Norm Constraint LMS Algorithm for Sparse System Identification. IEEE Signal Process. Lett. 2009, 16, 774–777. [Google Scholar] [CrossRef]
  32. Eweda, E. Global Stabilization of the Least Mean Fourth Algorithm. IEEE Trans. Signal Process. 2012, 60, 1473–1477. [Google Scholar] [CrossRef]
  33. Zheng, Z.; Liu, Z.; Zhao, H. Bias-Compensated Normalized Least-Mean Fourth Algorithm for Noisy Input. Circuits Syst. Signal Process. 2017, 36, 3864–3873. [Google Scholar] [CrossRef]
  34. Zhao, H.; Zheng, Z. Bias-compensated affine-projection-like algorithms with noisy input. Electron. Lett. 2016, 52, 712–714. [Google Scholar] [CrossRef]
  35. Arenas-Garcia, J.; Figueiras-Vidal, A.; Sayed, A. Mean-square performance of a convex combination of two adaptive filters. IEEE Trans. Signal Process. 2006, 54, 1078–1090. [Google Scholar] [CrossRef]
  36. Lu, L.; Zhao, H.; Li, K.; Chen, B. A Novel Normalized Sign Algorithm for System Identification Under Impulsive Noise Interference. Circuits Syst. Signal Process. 2015, 35, 3244–3265. [Google Scholar] [CrossRef]
  37. Zhou, Y.; Chan, S.C.; Ho, K.L. New Sequential Partial-Update Least Mean M-Estimate Algorithms for Robust Adaptive System Identification in Impulsive Noise. IEEE Trans. Ind. Electron. 2011, 58, 4455–4470. [Google Scholar] [CrossRef]
  38. Jung, S.M.; Park, P. Stabilization of a Bias-Compensated Normalized Least-Mean-Square Algorithm for Noisy Inputs. IEEE Trans. Signal Process. 2017, 65, 2949–2961. [Google Scholar] [CrossRef]
Figure 2. The design of the variable mixing parameter scheme.
Figure 3. Comparison of NMSD learning curves without impulse noise.
Figure 5. NMSD learning curves (left) and evolution of $\lambda_C$ (right) for the alpha-stable impulse noise model: (a) weak ($\alpha_s = 1.8$) and (b) strong ($\alpha_s = 1.5$).
Figure 6. NMSD learning curves under BG impulse noise: (a) weak ($P_r = 0.001$) and (b) strong ($P_r = 0.06$).
Figure 7. NMSD learning curves with AGWN input under alpha-stable impulse noise: (a) weak ($\alpha_s = 1.8$) and (b) strong ($\alpha_s = 1.5$).
Table 1. Computational analysis for the proposed method.

| No. | Operation | Adds | Muls |
| --- | --- | --- | --- |
| 1 | $\hat{y}_i(n) = \hat{\mathbf{w}}_i^T(n)\,\bar{\mathbf{u}}(n)$ | $M-1$ | $M$ |
| 2 | $\bar{e}_i(n) = d(n) - \hat{y}_i(n)$ | $1$ | - |
| 3 | $\bar{\mathbf{B}}_i(n)$ in Equation (45) | $M^3 + M^2 - M + 2$ | $M^3 + 4M^2 + M + 7$ |
| 4 | $\bar{\mathbf{P}}_i(n)$ in Equation (44) | $2M^3$ | $2M^3 + 4M^2$ |
| 5 | $\Delta\hat{\mathbf{w}}_i(n)$ in Equation (43) | $3M^2$ | $3M^2 + 4M + 7$ |
| 6 | $\mathbf{P}_{bias,i}(n)$ in Equation (47) | $2M^3 - M^2$ | $2M^3 + 4M^2 + 1$ |
| 7 | $\mathbf{B}_{bias,i}(n)$ in Equation (48) | $M^3 + 2M^2 + 3$ | $M^3 + 4M^2 + 2M + 4$ |
| 8 | $\mathbf{b}_i(n)$ in Equation (46) | $M^2 + M$ | $M^2 + 4M + 6$ |
| 9 | $\hat{\mathbf{w}}_i(n+1)$ in Equation (42) | $2M$ | $M$ |
| 10 | $\hat{\mathbf{w}}(n+1)$ in Equation (41) | $M+1$ | $2M$ |
Table 2. Comparison of dominating computational costs.

| Methods | Adds | Muls |
| --- | --- | --- |
| BC-NLMS [22] | $2M$ | $4M$ |
| BC-NLMF [24] | $2M$ | $5M$ |
| BC-LMMN [26] | $3M$ | $5M$ |
| BC-CIM-LGMN [27] | $3M$ | $M^2$ |
| BC-CIM-PNLMS [23] | $3M^2$ | $3M^2$ |
| Proposed | $6M^3$ | $6M^3$ |
