Article

Combined-Step-Size Affine Projection Andrew’s Sine Estimate for Robust Adaptive Filtering

1 Department of Automation, Jiangnan University, Wuxi 214122, China
2 Department of Electronic Engineering, Jiangnan University, Wuxi 214122, China
* Author to whom correspondence should be addressed.
Information 2024, 15(8), 482; https://doi.org/10.3390/info15080482
Submission received: 19 July 2024 / Revised: 8 August 2024 / Accepted: 13 August 2024 / Published: 14 August 2024
(This article belongs to the Special Issue Signal Processing and Machine Learning, 2nd Edition)

Abstract

Recently, an affine-projection-like M-estimate (APLM) algorithm has gained popularity for its ability to effectively handle impulsive background disturbances. Nevertheless, the APLM algorithm’s performance is negatively affected by steady-state misalignment. To address this issue while maintaining equivalent computational complexity, a robust cost function based on the Andrew’s sine estimator (ASE) is introduced and a corresponding affine-projection Andrew’s sine estimator (APASE) algorithm is proposed in this paper. To further enhance the tracking capability and accelerate the convergence rate, we develop the combined-step-size APASE (CSS-APASE) algorithm using a combination of two different step sizes. A series of simulation studies are conducted in system identification and echo cancellation scenarios, which confirms that the proposed algorithms can attain reduced misalignment compared to other currently available algorithms in cases of impulsive noise. Meanwhile, we also establish a bound on the learning rate to ensure the stability of the proposed algorithms.

1. Introduction

Adaptive filters are systems specifically designed to extract meaningful information from environments where a comprehensive understanding of the characteristics of the signal is not readily available. Through recursive operations, these filters compute the errors between input vectors and the desired outputs and utilize these errors to automatically adjust their filter coefficients. As a result, adaptive filters have found widespread application in various fields of signal processing, such as system identification, time series prediction, array signal processing, and interference cancellation [1].
The least mean square (LMS) and normalized LMS (NLMS) algorithms are the most widely studied adaptive algorithms owing to their simplicity of implementation. However, their convergence deteriorates for highly correlated inputs, and because their L2-norm cost implicitly assumes Gaussian-distributed noise, their performance also degrades under non-Gaussian disturbances [2,3,4,5,6].
The affine projection algorithm (APA) was derived to improve the convergence rate of the NLMS algorithm for colored input signals. The key idea behind the APA is that it uses multiple adjacent input vectors in each filter coefficient update, which enables it to better track changes in the signal. Unfortunately, APA-type algorithms are adversely affected by impulsive noise, which is commonly encountered in practice, owing to their reliance on L2-norm optimization strategies [7,8,9,10].
To alleviate this drawback, the affine projection sign (APS) algorithm was proposed by minimizing the L1-norm of the a posteriori error vector. However, because the APS algorithm relies solely on the sign of the error signal, it exhibits relatively large steady-state errors. Alternative cost functions with better performance characteristics have therefore been explored to reduce the steady-state error. The affine-projection-like M-estimate (APLM) algorithm, introduced in [8], minimizes the M-estimate cost function. However, the APLM algorithm still struggles to attain a low steady-state misalignment, motivating further investigation and refinement of cost-function-based adaptive filtering techniques.
In recent years, the Andrew's sine estimator (ASE) has been introduced into adaptive systems as a robust cost function. Research has demonstrated that it exhibits superior resilience and stability, even in the face of impulsive disturbances. The ASE behaves like the L2-norm estimator under low-power noise, while behaving like the L1-norm estimator in the presence of high-power noise. Owing to this characteristic, the ASE is effective at suppressing extreme values drawn from long-tailed distributions. Lu et al. [11] integrated the iterative Wiener filter (IWF) with the ASE, resulting in the IWF-ASE algorithm, which shows exceptional stability under impulsive interference and abrupt system changes. However, this high performance comes at the cost of significant computational complexity.
In this brief, an affine-projection Andrew's sine estimator (APASE) algorithm is proposed, which aims to enhance filtering performance in scenarios involving long-tailed outliers and correlated input signals. By adopting the robust ASE as the cost function, the APASE achieves improved robustness and convergence. Meanwhile, the convex combination of two adaptive filters has gained attention for its ability to harness the best features of both filters. Inspired by this, this paper also proposes a combined-step-size APASE (CSS-APASE) algorithm that employs two distinct step sizes: the smaller step size ensures reduced steady-state misalignment, while the larger step size accelerates convergence. Furthermore, to achieve globally optimal filtering performance in diverse scenarios, the auxiliary variable α(k) is chosen by minimizing the ASE. Simulation studies in system identification and echo cancellation validate the enhanced performance of the APASE and CSS-APASE algorithms.
The remainder of this brief is organized as follows: Section 2 reviews the related work. Section 3 presents the main results, including the proposed APASE and CSS-APASE algorithms and the derivation of the bound on the learning rate. Section 4 presents the simulation results, offering evidence of the efficacy of the proposed algorithms. Finally, Section 5 concludes the paper.

2. The Variables of the APA-Type Algorithms

Within a system identification model, the relation between input and output can be defined as

d(k) = u^T(k) w_o + v(k)     (1)

where d(k) denotes the desired signal and k stands for the discrete time index. The input vector of the system at time k, for an adaptive filter of length M, is u(k) = [u(k), u(k−1), …, u(k−M+1)]^T. The unknown M-dimensional vector to be identified is denoted by w_o, and v(k) represents the background noise component. The notation (·)^T signifies the transpose of a vector or matrix.
At time instant k, the estimate of w_o is denoted by w(k) = [w_0(k), w_1(k), …, w_{M−1}(k)]^T. Based on this estimate, the a priori and a posteriori errors can be expressed as

e(k) = d(k) − u^T(k) w(k)     (2)

ε(k) = d(k) − u^T(k) w(k+1)     (3)

where w(k+1) represents the estimate of w_o at time k+1.
The APA-type algorithms update the weights based on multiple input vectors, collected as

U(k) = [u(k), u(k−1), …, u(k−P+1)]     (4)

with projection order P. The desired output vector is d(k) = [d(k), d(k−1), …, d(k−P+1)]^T. Further, we define the a priori and a posteriori estimation error vectors as, respectively,

e(k) = d(k) − U^T(k) w(k)     (5)

ε(k) = d(k) − U^T(k) w(k+1)     (6)
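As a concrete illustration, the a priori error vector e(k) = d(k) − U^T(k)w(k) can be computed with NumPy as below; the function and variable names (`apa_error_vector`, `U`, `d_vec`) are our own choices, not from the paper:

```python
import numpy as np

def apa_error_vector(w, U, d_vec):
    """A priori error vector e(k) = d(k) - U^T(k) w(k).

    w:     (M,) current weight estimate w(k)
    U:     (M, P) input matrix whose columns are u(k), ..., u(k-P+1)
    d_vec: (P,) desired output vector d(k)
    """
    return d_vec - U.T @ w

# Example: M = 3 taps, projection order P = 2
w = np.array([1.0, 0.5, 0.0])
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
d_vec = np.array([2.0, 1.0])
print(apa_error_vector(w, U, d_vec))  # [1.  0.5]
```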

3. The Proposed Algorithm

To achieve better convergence performance in impulsive interference environments, an affine-projection Andrew’s sine estimator (APASE) algorithm is proposed in Section 3.1. In addition, to improve the convergence speed of the APASE algorithm, a combined-step-size APASE (CSS-APASE) algorithm is developed in Section 3.2. The learning rate bound is derived in Section 3.3.

3.1. APASE Algorithm

The ASE demonstrates outstanding performance in trimming outliers with long-tailed distributions, making it a natural choice of robust cost function. The APASE is derived by minimizing the ASE of the a posteriori error vector under a constraint:

min_{w(k+1)} Σ_{i=0}^{P−1} A(ε(k−i))
subject to ||w(k+1) − w(k)||_2^2 ≤ μ^2     (7)
where μ^2 serves as a constraint parameter that limits the magnitude of the changes made to the adaptive weight vector w(k); this constraint mitigates excessive fluctuations, enhancing stability throughout the adaptation, and μ simultaneously acts as the update (step-size) factor. A(·) denotes the robust ASE, given by [11]

A(e) = { 4 sin^2(e / (2c)),  |e| ≤ πc
       { 4,                  |e| > πc     (8)
where c > 0 is a constant that shapes the cost function. A lower value of c increases the sensitivity of the ASE but narrows the range of e values that the algorithm can handle effectively; when e exceeds this range, the ASE behaves like the L1-norm. In contrast, a higher value of c reduces the algorithm's sensitivity to small e values but maintains a broader operational range.
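To make the shape of the cost concrete, the following NumPy sketch implements the ASE cost A(e) and its derivative ψ(e); the function names `ase_cost` and `ase_score` are illustrative choices:

```python
import numpy as np

def ase_cost(e, c):
    """Andrew's sine estimator A(e): 4*sin^2(e/(2c)) for |e| <= pi*c, else the constant 4."""
    e = np.asarray(e, dtype=float)
    return np.where(np.abs(e) <= np.pi * c, 4.0 * np.sin(e / (2.0 * c)) ** 2, 4.0)

def ase_score(e, c):
    """Derivative psi(e) = dA/de: (2/c)*sin(e/c) for |e| <= pi*c, else 0 (outliers rejected)."""
    e = np.asarray(e, dtype=float)
    return np.where(np.abs(e) <= np.pi * c, (2.0 / c) * np.sin(e / c), 0.0)

c = 2.5
print(ase_cost(0.0, c))    # 0.0: no penalty at zero error
print(ase_cost(100.0, c))  # 4.0: large outliers saturate at a constant cost
print(ase_score(100.0, c)) # 0.0: outliers contribute nothing to the update
```

Note that the cost is continuous at |e| = πc (where 4 sin^2(π/2) = 4) and its derivative vanishes there, which is why impulsive samples are smoothly ignored.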
To tackle the constrained optimization problem, we utilize the method of Lagrange multipliers. The principle lies in augmenting the original objective function with the equality constraint multiplied by a Lagrange multiplier τ. Differentiating this augmented function and setting its gradient to zero yields a system of equations that encapsulates both the optimality conditions of the original problem and the constraint. In comparison to utilizing artificial neural networks or genetic algorithms for solving constrained optimization problems [12], this method is notably simple. The unconstrained function is

J[w(k+1)] = Σ_{i=0}^{P−1} A(ε(k−i)) + τ (||w(k+1) − w(k)||_2^2 − μ^2)     (9)
Evaluating the partial derivative of Equation (9) with respect to w(k+1), we obtain

∂J[w(k+1)] / ∂w(k+1) = −U(k) ψ(ε(k)) + 2τ [w(k+1) − w(k)]     (10)

where ψ(ε(k)) = [ψ(ε(k)), ψ(ε(k−1)), …, ψ(ε(k−P+1))]^T and ψ(·) = ∂A(·)/∂(·) is given by

ψ(e) = ∂A(e)/∂e = { (2/c) sin(e/c),  |e| ≤ πc
                  { 0,               |e| > πc     (11)
Setting this partial derivative to zero, we obtain

w(k+1) = w(k) + (1/(2τ)) U(k) ψ(ε(k))     (12)

Inserting Equation (12) into the constraint of Equation (7), we obtain

1/(2τ) = μ / ||U(k) ψ(ε(k))||_2     (13)

Substituting Equation (13) into Equation (12), we derive the APASE algorithm:

w(k+1) = w(k) + μ U(k) ψ(ε(k)) / ||U(k) ψ(ε(k))||_2     (14)

where, per Equation (11), the entries of ψ(ε(k)) vanish whenever |ε(k−i)| > πc.
However, the a posteriori errors ε(k−i), i = 0, 1, …, P−1, are inaccessible at time k because they involve future information. To obtain a practical APASE algorithm, the a priori errors e(k−i), i = 0, 1, …, P−1, are used to approximate the a posteriori errors, and a small regularization term δ is added to the denominator:

w(k+1) = w(k) + μ U(k) ψ(e(k)) / (||U(k) ψ(e(k))||_2 + δ)     (15)

where ψ(e(k)) = [ψ(e(k)), ψ(e(k−1)), …, ψ(e(k−P+1))]^T and 0 < δ ≪ 1 denotes the regularization coefficient.
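A minimal sketch of one APASE iteration per the final update in Equation (15); the parameter defaults (μ = 0.5, c = 2.5, δ = 10^−4) are illustrative assumptions, not prescriptions from the paper:

```python
import numpy as np

def apase_step(w, U, d_vec, mu=0.5, c=2.5, delta=1e-4):
    """One APASE update: w <- w + mu * U psi(e) / (||U psi(e)||_2 + delta)."""
    e = d_vec - U.T @ w                     # a priori error vector
    psi = np.where(np.abs(e) <= np.pi * c,  # clipped ASE derivative, Eq. (11)
                   (2.0 / c) * np.sin(e / c), 0.0)
    g = U @ psi                             # update direction U(k) psi(e(k))
    return w + mu * g / (np.linalg.norm(g) + delta)

# If the filter already matches the data, the error is zero and w is unchanged.
rng = np.random.default_rng(0)
U = rng.standard_normal((8, 4))
w = rng.standard_normal(8)
d_vec = U.T @ w
w_new = apase_step(w, U, d_vec)
print(np.allclose(w_new, w))  # True
```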

3.2. CSS-APASE Algorithm

In this subsection, to further improve the convergence rate and reduce the steady-state misalignment of the APASE algorithm, a combined-step-size APASE (CSS-APASE) algorithm is developed. Suppose the weight vector w(k) is updated with both a comparatively large step size, denoted μ_1, and a significantly smaller step size, denoted μ_2. The two update procedures can be formulated as
w_1(k+1) = w(k) + μ_1 U(k) ψ(e(k)) / (||U(k) ψ(e(k))||_2 + δ)     (16)

w_2(k+1) = w(k) + μ_2 U(k) ψ(e(k)) / (||U(k) ψ(e(k))||_2 + δ)     (17)
where 0 < μ 2 < μ 1 . Then, introducing a mixing factor λ ( k ) to combine w 1 ( k + 1 ) and w 2 ( k + 1 ) , we have
w(k+1) = λ(k+1) w_1(k+1) + [1 − λ(k+1)] w_2(k+1)     (18)
where 0 λ ( k + 1 ) 1 . Replacing the relevant terms in Equation (18) with the expressions derived from Equations (16) and (17), we obtain
w(k+1) = w(k) + [λ(k+1) μ_1 + (1 − λ(k+1)) μ_2] U(k) ψ(e(k)) / (||U(k) ψ(e(k))||_2 + δ)     (19)

where λ(k+1) μ_1 + (1 − λ(k+1)) μ_2 is the combined step size, which ranges from the larger value μ_1 down to the smaller value μ_2.
To enhance the robustness and nonlinear properties of the algorithm, the mixing factor λ ( k ) is formulated by employing a sigmoidal activation function,
λ(k+1) = sgm(α(k+1)) = 1 / (1 + e^{−α(k+1)})     (20)
where the auxiliary variable α(k) is recursively updated by minimizing the robust Andrew's sine estimator of the overall filter error,

α(k+1) = α(k) − μ_α ∂A(e(k))/∂α(k)
       = α(k) + μ_α ψ(e(k)) ∂[u^T(k) w(k)]/∂α(k)
       = α(k) + μ_α ψ(e(k)) (μ_1 − μ_2) [∂λ(k)/∂α(k)] u^T(k) U(k) ψ(e(k)) / (||U(k) ψ(e(k))||_2 + δ)
       = α(k) + μ_α ψ(e(k)) (μ_1 − μ_2) λ(k)(1 − λ(k)) u^T(k) U(k) ψ(e(k)) / (||U(k) ψ(e(k))||_2 + δ)     (21)
where μ α denotes the step size.
Given that α(k) cannot attain +∞ or −∞ in Equation (20), λ(k) can never reach 0 or 1 exactly. Consequently, the combined step size cannot achieve both the minimal steady-state error associated with μ_2 and the rapid convergence rate associated with μ_1. To address this challenge, Equation (20) is revised as follows:
λ(k+1) = { 0,                                 if α(k+1) < −ln[(C+1)/(C−1)]
         { 1,                                 if α(k+1) > ln[(C+1)/(C−1)]
         { C / (1 + e^{−α(k+1)}) − (C−1)/2,   otherwise     (22)
where C > 1 . Furthermore, the updating process of α ( k ) encounters halting when λ ( k ) reaches either 0 or 1, posing a significant challenge to the overall stability and convergence of the algorithm. To tackle it, Equation (21) is revised as
α(k+1) = α(k) + μ_α ψ(e(k)) (μ_1 − μ_2) [λ(k)(1 − λ(k)) + θ] u^T(k) U(k) ψ(e(k)) / (||U(k) ψ(e(k))||_2 + δ)     (23)
where θ > 0 denotes a constant.
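The combined-step-size logic (the hard-limited mixing factor λ(k+1) and the θ-regularized α(k) recursion) can be sketched as follows; the function names and default step sizes are our own illustrative choices:

```python
import numpy as np

def mixing_factor(alpha, C=2.0):
    """Hard-limited mixing factor lambda(k+1): 0 or 1 outside +/- ln((C+1)/(C-1))."""
    thr = np.log((C + 1.0) / (C - 1.0))
    if alpha < -thr:
        return 0.0
    if alpha > thr:
        return 1.0
    return C / (1.0 + np.exp(-alpha)) - (C - 1.0) / 2.0

def css_apase_step(w, alpha, U, d_vec, mu1=0.5, mu2=0.05,
                   mu_alpha=0.1, c=2.5, delta=1e-4, theta=1e-5, C=2.0):
    """One CSS-APASE iteration: combined-step-size weight update plus alpha recursion."""
    e = d_vec - U.T @ w
    psi = np.where(np.abs(e) <= np.pi * c, (2.0 / c) * np.sin(e / c), 0.0)
    direction = U @ psi / (np.linalg.norm(U @ psi) + delta)
    lam = mixing_factor(alpha, C)
    w_new = w + (lam * mu1 + (1.0 - lam) * mu2) * direction
    # alpha update; U[:, 0] is the newest input u(k), psi[0] = psi(e(k)),
    # and theta keeps the update alive when lam saturates at 0 or 1
    alpha_new = alpha + mu_alpha * psi[0] * (mu1 - mu2) \
        * (lam * (1.0 - lam) + theta) * (U[:, 0] @ direction)
    return w_new, alpha_new
```

At α = 0 the mixing factor is 0.5, and it clips exactly to 0 or 1 once |α| exceeds ln[(C+1)/(C−1)], matching the revised rule above.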

3.3. Stability Analysis

The choice of the learning rate is of utmost importance for ensuring the algorithm's stability: it governs the convergence behavior and dictates the steady-state misalignment. This section establishes a bound on the learning rate of the proposed algorithms. The weight update rule in Equation (15) can be reformulated as

w(k+1) = w(k) + μ U(k) B(k) f(e(k))     (24)

where B(k) = [||U(k) ψ(e(k))||_2 + δ]^{−1} is a scalar and f(e(k)) denotes the error function. Using Equation (11), f(e(k)) can be written elementwise, with the clipping region expressed through sign functions, as

f(e(k)) = [(1 − sgn(e(k) − πc))/2] [(1 + sgn(e(k) + πc))/2] (2/c) sin(e(k)/c)     (25)

which reduces to ψ(e(k)) applied entrywise.
By subtracting both sides of Equation (24) from w_o, we have

w_e(k+1) = w_e(k) − μ U(k) B(k) f(e(k))     (26)

where w_e(k) = w_o − w(k). The a priori error vector, e_a(k), is defined as

e_a(k) = U^T(k) w_e(k)     (27)
Squaring and taking expectations on both sides of Equation (26), we have

w̄_e(k+1) = w̄_e(k) − 2μ E[B(k) e_a^T(k) f(e(k))] + μ^2 E[B^2(k) f^T(e(k)) U^T(k) U(k) f(e(k))]     (28)

where w̄_e(k) = E[||w_e(k)||^2]. To ensure mean-square stability, the step size μ must satisfy

0 < μ < 2 E[B(k) e_a^T(k) f(e(k))] / E[B^2(k) f^T(e(k)) U^T(k) U(k) f(e(k))]     (29)

In practice, μ should be selected within the range (0, 1) to maintain the stability of the proposed algorithms and achieve high filtering accuracy.

4. Experiment and Analysis

In this section, a comprehensive set of computer simulations is carried out to analyze the effectiveness of the APASE and CSS-APASE algorithms in system identification and echo cancellation. Performance is quantified by the normalized mean squared deviation (NMSD) and the excess mean squared error (EMSE), defined as NMSD = 10 log_10(||w_o − w(k)||^2 / ||w_o||^2) and EMSE = 10 log_10 E{[e(k) − v(k)]^2}.
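As a reference implementation of the NMSD metric just defined (a minimal sketch; the function name is our choice):

```python
import numpy as np

def nmsd_db(w_o, w):
    """NMSD = 10*log10(||w_o - w||^2 / ||w_o||^2), in dB."""
    return 10.0 * np.log10(np.sum((w_o - w) ** 2) / np.sum(w_o ** 2))

w_o = np.array([2.0, -2.0, 2.0])
print(nmsd_db(w_o, w_o / 2))  # halving every tap gives 10*log10(1/4), about -6.02 dB
```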
The presented results are ensemble averages over more than 100 independent trials. The projection order P is fixed at 8 [13]. Based on Equation (15), we set δ = 10^{−4} ≪ 1 to avoid division by zero. Furthermore, to prevent the update of α(k) from stagnating, we set θ = 10^{−5} ≪ 1 [14]. Following multiple trials and a thorough review of the relevant literature [15], the combined-step-size parameters are set as μ_α = 0.1, α(0) = 8, λ(0) = 1, and C = 2. Moreover, in the subsequent experiments, the primary parameters μ and c are set to their optimal values under the condition that all algorithms maintain approximately the same convergence rate; these values are likewise determined through extensive experimentation.

4.1. System Identification Application

The unknown vector w_o is randomly generated with 256 taps (M = 256), and an adaptive filter of the same length is used. The input is a colored signal obtained by passing a zero-mean white Gaussian signal with variance σ_x^2 = 1 through a first-order autoregressive (AR) process with a pole at 0.95.
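Under our reading of this setup, the colored input can be generated by passing zero-mean WGN through a one-pole AR filter; the function name and seeding are illustrative:

```python
import numpy as np

def ar1_colored_input(n, pole=0.95, sigma=1.0, seed=0):
    """Colored input: zero-mean WGN filtered by x(k) = pole * x(k-1) + g(k)."""
    rng = np.random.default_rng(seed)
    g = rng.normal(0.0, sigma, n)
    x = np.empty(n)
    x[0] = g[0]
    for k in range(1, n):
        x[k] = pole * x[k - 1] + g[k]
    return x

x = ar1_colored_input(5000)
# Adjacent samples are strongly correlated; the lag-1 correlation is close to the pole.
r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(round(r1, 2))
```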
In Experiment 1, we assessed the impact of the main parameter c on the performance of the proposed CSS-APASE algorithm. To achieve this, we set c { 1.5 , 2 , 2.5 , 3 , 3.5 } . The background noise of the system was modeled as white Gaussian noise (WGN) with a signal-to-noise ratio (SNR) of 30 dB [1]. As depicted in Figure 1, when c = 1.5 , the system failed to attain an optimized final result within 100,000 iterations. Conversely, when c = 3 , further increments in c had minimal effects on the algorithm’s performance. However, at c = 2.5 , the algorithm achieved a favorable balance between convergence speed and tracking capability. Although a slower convergence rate was observed compared to higher values of c, this setting significantly mitigated the degree of misalignment. In comparison to lower c values, it substantially enhanced tracking speed. Consequently, we opted to use c = 2.5 in system identification.
In Experiment 2, a component of WGN with an SNR of 30 dB was superimposed on the output signal. Figure 2 presents the learning curves for the various algorithms considered. As evident in the plot, the APA algorithm exhibited the smallest NMSD among all the algorithms evaluated. However, this excellent performance came with a high computational cost. In contrast, the proposed APASE algorithm achieved a slightly higher NMSD, specifically 2 dB more than the APA, but it boasted a significant reduction in computational complexity. This trade-off makes APASE a more practical choice in scenarios where real-time or low-latency performance is crucial. Furthermore, the proposed CSS-APASE algorithm pushed the boundaries even further, achieving an NMSD that was even closer to the APA, demonstrating superior performance while maintaining a lower computational load. Additionally, it is noteworthy that the APASE algorithm significantly outperformed both the APS and APLM algorithms in terms of the NMSD.
In Experiment 3, the system output was corrupted by impulsive noise in addition to the aforementioned Gaussian noise component. The impulsive noise was modeled as a Bernoulli-Gaussian process η(k) = z(k) A(k), where A(k) is zero-mean white Gaussian noise with variance σ_A^2 and z(k) is a Bernoulli random variable with probabilities P{z(k) = 1} = p_r and P{z(k) = 0} = 1 − p_r. For this experiment, we set p_r = 0.01, and σ_A^2 was set to ten times the expected squared system output, E{[u^T(k) w_o]^2} [13]. Figure 3 presents the results of this experiment. As can be seen, the APASE algorithm exhibits a significantly lower NMSD than the APA, APS, and APLM algorithms. Furthermore, the CSS-APASE algorithm effectively combines the rapid convergence of the μ_1 filter with the small steady-state misalignment of the μ_2 filter, making it the more efficient of the proposed algorithms.
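The Bernoulli-Gaussian impulsive-noise model η(k) = z(k)A(k) described above can be sketched as follows; the function name, seeding, and defaults are our own:

```python
import numpy as np

def impulsive_noise(n, pr=0.01, var_A=10.0, seed=0):
    """eta(k) = z(k) * A(k): Bernoulli gate z(k) with P{z=1} = pr times zero-mean WGN A(k)."""
    rng = np.random.default_rng(seed)
    z = (rng.random(n) < pr).astype(float)
    A = rng.normal(0.0, np.sqrt(var_A), n)
    return z * A

eta = impulsive_noise(10_000)
print(np.count_nonzero(eta))  # roughly pr * n = 100 impulses
```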
In Experiment 4, the performance of the proposed algorithms was evaluated in the presence of impulsive noise and uniformly distributed noise. The uniformly distributed noise ranged over the interval ( 1 , 1 ) with an SNR of 20 dB. Figure 4 presents the experimental results, clearly showing that the APASE and CSS-APASE algorithms outperformed the other algorithms in various noise environments, demonstrating their robust performance.

4.2. Echo Cancellation Application

In Experiment 5, the acoustically measured echo path was characterized by an impulse response that consisted of M = 256 taps. In addition, a genuine speech signal served as the colored input. These two elements are individually presented in Figure 5 for clarity.
Figure 6 displays the NMSD curves of these algorithms. It is clearly observed that the convergence performance of the APASE algorithm is much better than that of the APA, APS, and APLM algorithms in terms of both convergence rate and steady-state misalignment. In addition, compared to the APASE algorithm, the CSS-APASE algorithm shows superior performance.
In Experiment 6, we replicated Experiment 5 but applied it to the AR(1)-related signal [16]. As can be seen in Figure 7, both traditional APA and APSA algorithms exhibit significant misalignment under such conditions. Meanwhile, while the performance of the APLM algorithm shows a relative improvement, the proposed algorithms, APASE and CSS-APASE, exhibit superior performance compared to APLM.

5. Conclusions

In this article, we introduce an affine-projection Andrew's sine estimator (APASE) algorithm designed to enhance convergence performance for colored input signals. By minimizing the Andrew's sine estimator of the a posteriori error vector under a constraint on the filter coefficients, we derive a novel algorithm that exhibits remarkable robustness against impulsive noise. Moreover, to further optimize the performance of the APASE algorithm, we develop the CSS-APASE algorithm. The proposed algorithms demonstrate superior robustness, faster convergence, and reduced steady-state misalignment in various applications, including system identification and echo cancellation, outperforming the APS and APLM algorithms as well as the traditional APA. Additionally, we establish a bound on the learning rate to ensure the stability of the proposed algorithms, providing an added layer of reliability for practical deployment.
In future work, the potential of the APASE could be further exploited in choosing a suitable projection order. Several preliminary studies have been conducted, showing that appropriate use of the projection order can better balance steady-state misalignment and the convergence rate [17,18,19]. With this background, we aim to introduce a novel approach that automatically adjusts the projection order, thereby transcending the conventional paradigm of treating it as a static parameter. Additionally, we intend to evaluate the performance of both the APASE and CSS-APASE algorithms within realistic experimental frameworks, focusing on applications such as acoustic echo cancellation and communications.

Author Contributions

Conceptualization, W.W.; Software, Y.W.; Methodology, Y.W.; Formal analysis, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
APA: Affine Projection Algorithm
APASE: Affine-Projection Andrew's Sine Estimator
APLM: Affine-Projection-Like M-Estimate
APS: Affine Projection Sign
AR: Autoregressive
ASE: Andrew's Sine Estimator
CSS: Combined Step Size
CSS-APASE: Combined-Step-Size Affine-Projection Andrew's Sine Estimate
EMSE: Excess Mean Squared Error
IWF: Iterative Wiener Filter
IWF-ASE: Iterative Wiener Filter Andrew's Sine Estimator
LMS: Least Mean Squares
NLMS: Normalized Least Mean Squares
NMSD: Normalized Mean Squared Deviation
SNR: Signal-to-Noise Ratio
WGN: White Gaussian Noise

References

  1. Wang, W.; Sun, Q. Robust Adaptive Estimation of Graph Signals Based on Welsch Loss. Symmetry 2022, 14, 426. [Google Scholar] [CrossRef]
  2. Wang, S.; Wang, W.; Xiong, K.; Iu, H.H.; Chi, K.T. Logarithmic hyperbolic cosine adaptive filter and its performance analysis. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 2512–2524. [Google Scholar] [CrossRef]
  3. Sayed, A.H. Fundamentals of Adaptive Filtering; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  4. Huang, X.; Li, Y.; Han, X.; Tu, H. Lawson-norm-based adaptive filter for channel estimation and in-car echo cancellation. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 2376–2380. [Google Scholar] [CrossRef]
  5. Haykin, S.; Widrow, B. Least-Mean-Square Adaptive Filters; Wiley Online Library: Hoboken, NJ, USA, 2003. [Google Scholar]
  6. Li, Y.; Wang, Y.; Jiang, T. Sparse-aware set-membership NLMS algorithms and their application for sparse channel estimation and echo cancelation. AEU-Int. J. Electron. Commun. 2016, 70, 895–902. [Google Scholar] [CrossRef]
  7. Ozeki, K.; Umeda, T. An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron. Commun. Jpn. Part I: Commun. 1984, 67, 19–27. [Google Scholar] [CrossRef]
  8. Song, P.; Zhao, H. Affine-Projection-Like M-Estimate Adaptive Filter for Robust Filtering in Impulse Noise. IEEE Trans. Circuits Syst. II Express Briefs 2019, 66, 2087–2091. [Google Scholar] [CrossRef]
  9. Zhang, Q.; Wang, S.; Lin, D.; Chen, S. Robust Affine Projection Tanh Algorithm and Its Performance Analysis. Signal Process. 2023, 202, 108749. [Google Scholar] [CrossRef]
  10. Li, G.; Wang, G.; Dai, Y.; Sun, Q.; Yang, X.; Zhang, H. Affine projection mixed-norm algorithms for robust filtering. Signal Process. 2021, 187, 108153. [Google Scholar] [CrossRef]
  11. Lu, L.; Yu, Y.; Zheng, Z.; Zhu, G.; Yang, X. Robust Andrew’s sine estimate adaptive filtering. arXiv 2023, arXiv:2303.16404. [Google Scholar]
  12. Tsoulos, I.G.; Stavrou, V.; Mastorakis, N.E.; Tsalikakis, D. GenConstraint: A programming tool for constraint optimization problems. SoftwareX 2019, 10, 100355. [Google Scholar] [CrossRef]
  13. Kumar, K.; Karthik, M.L.N.S.; Bhattacharjee, S.S.; George, V.N. Affine Projection Champernowne Algorithm for Robust Adaptive Filtering. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 1947–1951. [Google Scholar] [CrossRef]
  14. Yu, Y.; Zhao, H. Novel combination schemes of individual weighting factors sign subband adaptive filter algorithm. Int. J. Adapt. Control Signal Process. 2017, 31, 1193–1204. [Google Scholar] [CrossRef]
  15. Huang, F.; Zhang, J.; Zhang, S. Combined-Step-Size Affine Projection Sign Algorithm for Robust Adaptive Filtering in Impulsive Interference Environments. IEEE Trans. Circuits Syst. II Express Briefs 2016, 63, 493–497. [Google Scholar] [CrossRef]
  16. Zhao, H.; Yu, Y.; Gao, S.; Zeng, X.; He, Z. A new normalized LMAT algorithm and its performance analysis. Signal Process. 2014, 105, 399–409. [Google Scholar] [CrossRef]
  17. Zheng, Z.; Liu, Z. Steady-State Mean-Square Performance Analysis of the Affine Projection Sign Algorithm. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 2244–2248. [Google Scholar] [CrossRef]
  18. Zhao, J.; Ni, X.; Li, Q.; Tang, L.; Zhang, H. Evolving Order Based Affine Projection Sign Algorithm For Enhanced Adaptive Filtering. IEEE Signal Process. Lett. 2024, 31, 1530–1534. [Google Scholar] [CrossRef]
  19. Yukawa, M.; Yamada, I. Pairwise optimal weight realization—Acceleration technique for set-theoretic adaptive parallel subgradient projection algorithm. IEEE Trans. Signal Process. 2006, 54, 4557–4571. [Google Scholar] [CrossRef]
Figure 1. Comparison of different c values in system identification: (a) NMSD; (b) EMSE.
Figure 2. Comparison of APA, APSA, APLM, APASE, and CSS-APASE algorithms in system identification with white Gaussian noise: (a) NMSD; (b) EMSE.
Figure 3. Comparison of APA, APSA, APLM, APASE, and CSS-APASE algorithms with both impulsive noise and white Gaussian noise: (a) NMSD; (b) EMSE.
Figure 4. Comparison of APA, APSA, APLM, APASE, and CSS-APASE algorithms with both impulsive noise and uniformly distributed noise: (a) NMSD; (b) EMSE.
Figure 5. (a) Impulse response of echo path. (b) Speech input signal.
Figure 6. Comparison of the NMSD (dB) of APA, APSA, APLM, APASE, and CSS-APASE algorithms in echo cancellation for speech input signal.
Figure 7. Comparison of the NMSD (dB) of APA, APSA, APLM, APASE, and CSS-APASE algorithms in echo cancellation for AR(1)-related signal.

