Article

An Alternative Approach to Obtain a New Gain in Step-Size of LMS Filters Dealing with Periodic Signals

by Pedro Ramos Lorente *, Raúl Martín Ferrer, Fernando Arranz Martínez and Guillermo Palacios-Navarro
Department of Electronic Engineering and Communications, University of Zaragoza, 44003 Teruel, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(12), 5618; https://doi.org/10.3390/app11125618
Submission received: 6 May 2021 / Revised: 14 June 2021 / Accepted: 15 June 2021 / Published: 17 June 2021
(This article belongs to the Special Issue Latest Advances in Active Noise Control)

Abstract: Partial updates (PU) of adaptive filters have been successfully applied in different contexts to lower the computational costs of many control systems. In a PU adaptive algorithm, only a fraction of the coefficients is updated per iteration. In particular, this idea has proved to be a valid strategy in the active control of periodic noise consisting of a sum of harmonics. The convergence analysis carried out here is based on the periodic nature of the input signal, which makes it possible to formulate the adaptive process with a matrix-based approach: the periodic least-mean-square (P-LMS) algorithm. In this paper, we obtain the upper bound that limits the step-size parameter of the sequential PU P-LMS algorithm and compare it to the bound of the full-update P-LMS algorithm. Thus, the limiting value for the step-size parameter is expressed in terms of the step-size gain of the PU algorithm. This gain in step-size is the quotient between the upper bounds ensuring convergence in the following two scenarios: first, when PU are carried out and, second, when every coefficient is updated during every cycle. The step-size gain gives the factor by which the step-size can be multiplied so as to compensate for the convergence speed reduction of the sequential PU algorithm, which is an inherently slower strategy. Results are compared with previous results based on the standard sequential PU LMS formulation. Frequency-dependent notches in the step-size gain are not present with the matrix-based formulation of the P-LMS. Simulated results confirm the expected behavior.

1. Introduction

The least-mean-square (LMS) algorithm [1,2,3] is an adaptive algorithm in which the computation of the gradient vector is simplified by means of an appropriate modification of the goal function. The LMS algorithm is widely used in different applications due to its computational simplicity. Very different fields of knowledge, such as underwater communications [4] or ultrawide bandwidth systems [5], make use of the LMS algorithm to optimize an objective function by iteratively minimizing the error signal. Apart from the classical performance analysis of the LMS algorithm, one may find recent relevant references on the stochastic analysis of the LMS algorithm for non-Gaussian [6], white Gaussian [7], and colored Gaussian [8] cyclostationary input signals.
In this paper, we analyze the convergence process of the LMS filter under the assumption of a deterministic periodic input. The periodic nature of the reference (also referred to as regressor or input) signal $x(n)$ and the training signal $d(n)$ allows us to use the matrix approach to the LMS algorithm proposed by Parra et al. [9], referred to as the periodic least-mean-square (P-LMS) algorithm in the following. The referenced paper is based on a previous work [10] in which a matrix-based approach was proposed to analyze the stability of adaptive algorithms.
Active noise control (ANC) is a well-known strategy widely used to attenuate acoustic disturbances by means of controllable secondary loudspeakers. The output of these secondary sound sources is arranged so as to destructively interfere with the acoustic noise from the primary source. The basic idea behind ANC systems has been studied over the last several decades [11,12,13,14], and it still is a research field with continuous dissemination of interesting results [15,16,17,18,19].
The efficiency of ANC systems has been demonstrated by many researchers, who have published comprehensive references on the topic [20,21,22]. Active techniques, therefore, are generally considered a valid strategy for overcoming the performance of passive systems in certain cases.
Many applications of ANC have to deal with periodic disturbances consisting of several harmonics [23,24]. The reader might take into account that narrowband active noise control systems may attenuate a great variety of noises, e.g., those generated by compressors, turbines, engines, or fans. In [25], Jafari et al. proposed a robust adaptive strategy for the rejection of periodic components of a disturbance and analyzed its performance and stability properties.
One may find in the literature numerous examples of adaptive algorithms derived from the conventional LMS aimed at improving its performance by dealing with the step-size parameter. For instance, Han-sol et al. [26] proposed a variable step-size LMS strategy that achieves a fast convergence rate and low misadjustment by improving the updating stage.
In addition, system identification is a common task in many application fields. If systems have to be estimated using sparse computation due to computational complexity constraints, partial updates (PU) of the coefficients turn out to be an optimal option. Thus, algorithms exploiting sparseness in the coefficient update domain, where most of the weights are small and only a few are relevant, are often based on variations of the LMS adaptive algorithm [27,28].
In this sense, gain in step-size [29] is a parameter defined in the context of periodic primary noise attenuation when PU of the weights of the filter are used to lower the computational cost of an adaptive algorithm [30,31,32].
The existence of a gain in step-size (that depends on the frequency, the number of coefficients of the filter M, and the decimating factor N) was already proved—theoretically and experimentally, in previous works [29] when sequential partial updates (Seq PU) were applied to a filter controlled by the LMS algorithm.
In this paper, we want to verify whether there is also a ratio between the step-size bounds when N > 1 (Seq. PU P-LMS) and N = 1 (P-LMS) when the LMS is applied via the matrix-based approach proposed by Parra et al. [9] for the case of periodic input signals, which we refer to as the P-LMS. In this latter case, the convergence analysis is not based on the eigenvalues of the autocorrelation matrix of the input signal, but on the so-called stability matrix.

2. Materials and Methods

In this second section, we show an overview of the algorithmic proposals. Theoretical basis and convergence analysis are provided.

2.1. Matricial Formulation of the Periodic LMS Algorithm

We begin this subsection by recalling the matrix-based algorithmic proposal of Parra et al. [9] before reducing its computational complexity in Section 2.2 by means of PU.
Let $\mathbf{w}(n)$ be an LMS filter with periodic reference $x(n)$ and training signal $d(n)$, both of period P.
Defining the corresponding M-length vectors, we have

$$\mathbf{w}(n) = [\,w_0(n)\;\; w_1(n)\;\; \cdots\;\; w_{M-1}(n)\,]^T \quad (1)$$

$$\mathbf{x}(n) = [\,x(n)\;\; x(n-1)\;\; \cdots\;\; x(n-M+1)\,]^T \quad (2)$$
Here, the standard LMS updating process of the coefficients is given by the well-known recursion

$$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\, e(n)\, \mathbf{x}(n) \quad (3)$$

with $\mu$ being the step-size parameter and $e(n)$ being the error. The error is defined as the difference between the training signal $d(n)$ (often referred to as the desired response) and the output of the filter:

$$e(n) = d(n) - \mathbf{w}^T(n)\, \mathbf{x}(n). \quad (4)$$
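To make this recursion concrete, the following minimal NumPy sketch runs the full-update LMS of Equations (3) and (4) on a noise-free system-identification task; the unknown system `h` and all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, mu, steps = 8, 0.01, 4000

h = rng.standard_normal(M)            # hypothetical unknown system generating d(n)
w = np.zeros(M)                       # adaptive filter w(n), Eq. (1)
x = rng.standard_normal(steps + M)    # reference signal x(n)

for n in range(M, steps + M):
    xv = x[n - np.arange(M)]          # x(n) = [x(n), x(n-1), ..., x(n-M+1)]^T, Eq. (2)
    d = h @ xv                        # training (desired) signal d(n)
    e = d - w @ xv                    # error, Eq. (4)
    w = w + mu * e * xv               # coefficient update, Eq. (3)

print(np.linalg.norm(h - w))          # small residual after convergence
```

With a white reference and a step-size well below the stability bound, the weight vector converges to `h` and the residual misalignment is negligible.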
Iterating Equation (3) over the interval $\{l,\, l+1,\, \ldots,\, n-1,\, n\}$, we have

$$\mathbf{w}(n) = \mathbf{w}(l) + \mu \sum_{j=l}^{n-1} e(j)\, \mathbf{x}(j) \quad (5)$$

and

$$d(n) = e(n) + \mathbf{w}^T(n)\,\mathbf{x}(n) = e(n) + \left[ \mathbf{w}(l) + \mu \sum_{j=l}^{n-1} e(j)\, \mathbf{x}(j) \right]^T \mathbf{x}(n) \quad (6)$$
Due to the P-periodicity of the input signals $x(n)$ and $d(n)$, one may consider complete P-periods of the signals separately. In particular, during the $k$-th period ($k = 0, 1, 2, \ldots$), we have:
  • the vector from the training signal: $\mathbf{d}_k = [\,d(kP)\;\; \cdots\;\; d((k+1)P-1)\,]^T$
  • the vector from the error: $\mathbf{e}_k = [\,e(kP)\;\; \cdots\;\; e((k+1)P-1)\,]^T$
  • and, finally, the estimated response vector: $\mathbf{r}_k = \mathbf{d}_k - \mathbf{e}_k$
Additionally, because $d(n)$ is periodic, the training signal vector remains invariant during all periods, that is, $\mathbf{d}_k = \mathbf{d}_0 = \mathbf{d},\; \forall k$.
Substituting $l = kP$ into Equation (6), at every period we have the following matrix identity

$$\mathbf{d}_k = \mathbf{d} = \mathbf{X}\, \mathbf{w}(kP) + \mathbf{Q}\, \mathbf{e}_k \quad (7)$$

and we also have

$$\mathbf{w}(n) = \mathbf{w}(kP) + \mu\, \mathbf{X}_{n-kP}^T\, \mathbf{e}_k \quad (8)$$

which is valid in the interval $kP \le n \le (k+1)P$.
$\mathbf{X}$ is a P-by-M matrix given by:

$$\mathbf{X} = \begin{bmatrix} x(0) & x(P-1) & \cdots & x(P-M+1) \\ x(1) & x(0) & \cdots & x(P-M+2) \\ \vdots & \vdots & \ddots & \vdots \\ x(P-1) & x(P-2) & \cdots & x(P-M) \end{bmatrix} \quad (9)$$

For each value of $k$, the related matrix $\mathbf{X}_{n-kP}$ is the P-by-M matrix whose first $n-kP$ rows are equal to the first $n-kP$ rows of $\mathbf{X}$, whereas the remaining ones are null.
The square matrix $\mathbf{Q}$ is defined as

$$\mathbf{Q} = \mathbf{I} + \mu\, \mathbf{T} \quad (10)$$

where $\mathbf{I}$ is the P-order identity matrix and $\mathbf{T}$ is the (strictly) lower triangular matrix given by

$$\mathbf{T} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \mathbf{x}^T(0)\,\mathbf{x}(1) & 0 & \cdots & 0 \\ \mathbf{x}^T(0)\,\mathbf{x}(2) & \mathbf{x}^T(1)\,\mathbf{x}(2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{x}^T(0)\,\mathbf{x}(P-1) & \mathbf{x}^T(1)\,\mathbf{x}(P-1) & \cdots & 0 \end{bmatrix} \quad (11)$$

$\mathbf{Q}^{-1}$ is the sensitivity matrix. From Equation (7) we can obtain $\mathbf{e}_k$ as

$$\mathbf{e}_k = \mathbf{Q}^{-1}\left( \mathbf{d} - \mathbf{X}\, \mathbf{w}(kP) \right) \quad (12)$$

and substituting into Equation (8), particularized for $n = (k+1)P$, gives that a block of P iterations updates the LMS filter according to [9]

$$\mathbf{w}((k+1)P) = \mathbf{w}(kP) - \mu\, \mathbf{G}\, \mathbf{w}(kP) + \mu\, \mathbf{t} \quad (13)$$

where

$$\mathbf{G} = \mathbf{X}^T \mathbf{Q}^{-1} \mathbf{X} \quad (14)$$

and

$$\mathbf{t} = \mathbf{X}^T \mathbf{Q}^{-1} \mathbf{d} \quad (15)$$
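As a numerical sanity check of this block formulation, the sketch below builds $\mathbf{X}$, $\mathbf{T}$, $\mathbf{Q}$, $\mathbf{G}$ and $\mathbf{t}$ for an arbitrary single-tone periodic pair $x(n)$, $d(n)$ (all signal choices and parameter values are illustrative) and verifies that one block update of Equation (13) reproduces P sample-by-sample iterations of Equation (3):

```python
import numpy as np

P, M, mu = 16, 4, 0.01
n = np.arange(P)
base_x = np.cos(2 * np.pi * n / P)                # P-periodic reference x(n)
base_d = 2 * np.cos(2 * np.pi * n / P + 0.3)      # P-periodic training signal d(n)

def xvec(m):
    """Input vector x(m) = [x(m), x(m-1), ..., x(m-M+1)]^T (periodic extension)."""
    return base_x[(m - np.arange(M)) % P]

X = np.stack([xvec(i) for i in range(P)])         # Eq. (9): row i is x^T(i)

T = np.zeros((P, P))                              # Eq. (11): strictly lower triangular
for i in range(P):
    for j in range(i):
        T[i, j] = xvec(j) @ xvec(i)

Q = np.eye(P) + mu * T                            # Eq. (10)
G = X.T @ np.linalg.solve(Q, X)                   # Eq. (14)
t = X.T @ np.linalg.solve(Q, base_d)              # Eq. (15)

w0 = np.ones(M)
w_block = w0 - mu * G @ w0 + mu * t               # one block update, Eq. (13)

w = w0.copy()                                     # P plain LMS steps, Eq. (3)
for i in range(P):
    e = base_d[i] - w @ xvec(i)
    w = w + mu * e * xvec(i)

print(np.allclose(w, w_block))                    # → True
```

The agreement is exact (up to floating point), since Equations (13)–(15) are an algebraic repackaging of P consecutive LMS updates for periodic signals.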

2.2. Sequential Partial Updates (PU) Applied to the Periodic LMS Algorithm

The sequential partial updates (Seq. PU) LMS algorithm updates a subset of M/N coefficients per iteration, out of a total of M weights, according to Equation (16):

$$w_l(n+1) = \begin{cases} w_l(n) + \mu\, x(n-l+1)\, e(n) & \text{if } (n-l+1) \bmod N = 0 \\ w_l(n) & \text{otherwise} \end{cases} \quad (16)$$

for $1 \le l \le M$, where $w_l(n)$ represents the $l$-th coefficient of the filter, N the decimation factor of the PU strategy, $\mu$ the step-size parameter of the algorithm, $x(n)$ the input (often named regressor) signal, and $e(n)$ the error signal. Thus, the computational complexity of the Seq. PU algorithm is reduced directly as N increases [30].
The Seq. PU strategy expressed by Equation (16) is summarized in Figure 1, which shows the weights to be updated at every cycle, as well as the related samples of the regressor signal $x(n)$. In this scheme, we assume that the first update is carried out at the first cycle, and the current value of the input signal is $x(n)$. From Equation (16) and Figure 1, one can see that the current value $x(n)$ is used to update the first N taps of the filter during the upcoming N cycles. In general, in a full-update adaptive algorithm, it is necessary to renew the input vector at every cycle with a new sample of $x(n)$. Nevertheless, according to Figure 1, the Seq. PU LMS adaptive algorithm makes use of every N-th position of the input vector. Hence, it is enough to obtain a new sample in just one out of N iterations.
Then, one can consider that the whole filter of M coefficients is made up of N logical subfilters of M/N coefficients. These logical subfilters come from sampling the weights of the original M-length filter uniformly, with a sampling factor of N positions. To illustrate this idea, the weights of the first logical subfilter are marked with a circle in Figure 1. The weights placed at the same relative position in every logical subfilter are updated with the same value of the regressor signal $x(n)$, which is renewed in just 1 out of N cycles. Then, after N cycles, a new value of $x(n)$ is sampled and used to renew the first coefficient of each subfilter, whereas the oldest value of $x(n)$ is shifted out of its active range. Summarizing, during N consecutive cycles, N (M/N)-length logical subfilters are updated with the same regressor vector, which is an N-decimated version of $x(n)$. Therefore, we analyze the convergence of the M-length filter on the basis of the parallel convergence of N (M/N)-length subfilters updated by an N-decimated input $x(n)$.
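A compact sketch of this sequential update follows Equation (16) directly: at iteration n, only the taps $w_l$ satisfying $(n-l+1) \bmod N = 0$ are refreshed. The identification setup (system `h`, white regressor) is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, mu, steps = 8, 2, 0.01, 4000

h = rng.standard_normal(M)                 # hypothetical unknown system
w = np.zeros(M)
x = rng.standard_normal(steps + M)

err0 = np.linalg.norm(h - w)               # initial misalignment
for n in range(M, steps + M):
    xv = x[n - np.arange(M)]               # x(n), x(n-1), ..., x(n-M+1)
    e = h @ xv - w @ xv                    # e(n) = d(n) - y(n)
    for l in range(1, M + 1):              # Eq. (16): only M/N taps updated per cycle
        if (n - l + 1) % N == 0:
            w[l - 1] += mu * x[n - l + 1] * e

print(np.linalg.norm(h - w) < err0)        # misalignment shrinks, only more slowly
```

Compared with the full-update loop, the inner condition skips (N−1)/N of the multiply–accumulate updates per cycle, which is exactly the computational saving discussed above, at the price of a convergence rate reduced roughly by N.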
In order to use this PU strategy for the P-LMS algorithm, one should decimate the input signal and use shorter versions of the matrices involved in the updating process, because the active number of coefficients per iteration results in a filter of $M'$ coefficients:

$$M' = \frac{M}{N} \quad (17)$$

If $x(n)$ is the P-periodic signal, then the N-decimated signal $x_d(n)$

$$x_d(n) = x(Nn) \quad (18)$$

is also periodic, of period $P'$, with

$$P' = \begin{cases} \dfrac{P}{N} & \text{if } P/N \text{ is integer} \\[4pt] P & \text{if } P/N \text{ is not integer} \end{cases} \quad (19)$$

$\mathbf{X}_d$ is a $P'$-by-$M'$ matrix, obtained by substituting in Equation (9) $x(n)$ by its N-decimated version $x_d(n)$, the period P by $P'$, and the number of coefficients M by $M'$.
The square matrix $\mathbf{Q}_d$ is defined as

$$\mathbf{Q}_d = \mathbf{I}_d + \mu\, \mathbf{T}_d \quad (20)$$

with $\mathbf{I}_d$ being the $P'$-order identity matrix and $\mathbf{T}_d$ the lower triangular matrix obtained by substituting in Equation (11) $x(n)$ by its N-decimated version $x_d(n)$ and the period P by $P'$. At every iteration, a logical (M/N)-length subfilter is updated from the whole set of taps according to the following sampling process of the coefficients

    for index = 1 to N
        w_d = w(index : N : end)
    end                                  (21)
The M/N coefficients that form the iteration-dependent logical subfilter are shown in Table 1.
The entire filter is applied to obtain the current error

$$\mathbf{e}_k = \mathbf{Q}^{-1}\left( \mathbf{d} - \mathbf{X}\, \mathbf{w}(kP) \right) \quad (22)$$

but only one out of every N coefficients is updated according to

$$\mathbf{w}_d((k+1)P') = \mathbf{w}_d(kP') - \mu\, \mathbf{G}_d\, \mathbf{w}_d(kP') + \mu\, \mathbf{t}_d \quad (23)$$

where

$$\mathbf{G}_d = \mathbf{X}_d^T \mathbf{Q}_d^{-1} \mathbf{X}_d \quad (24)$$

and

$$\mathbf{t}_d = \mathbf{X}_d^T \mathbf{Q}_d^{-1} \mathbf{d}_d \quad (25)$$

where the N-decimated version $\mathbf{d}_d$ of the training signal $\mathbf{d}$ is sampled as

    for index = 1 to N
        d_d = d(index : N : end)
    end                                  (26)
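In NumPy terms, the decimation of Equations (17)–(19) and the phase-wise sampling loops of Equations (21) and (26) reduce to strided slicing. The concrete values of P, M and N below are arbitrary, and `d` is a placeholder training-signal period:

```python
import numpy as np

P, M, N = 16, 8, 2
x = np.cos(2 * np.pi * np.arange(P) / P)       # P-periodic reference
w = np.zeros(M)                                # full set of taps
d = np.ones(P)                                 # placeholder training-signal period

x_d = x[::N]                                   # x_d(n) = x(Nn), Eq. (18)
M_prime = M // N                               # Eq. (17)
P_prime = P // N if P % N == 0 else P          # Eq. (19)

# Phase-wise sampling of coefficients (Eq. (21)) and training signal (Eq. (26))
subfilters = [w[index::N] for index in range(N)]
d_phases = [d[index::N] for index in range(N)]

print(len(x_d) == P_prime and all(len(s) == M_prime for s in subfilters))
```

Each slice `w[index::N]` is a view onto the original taps, so updating a logical subfilter updates the corresponding entries of the full filter in place.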

2.3. Gain in Step-Size of the Periodic LMS Algorithm with Sequential Partial Updates

The approach followed in this part of the paper is in accordance with that of Parra et al. [9], where the authors define a sensitivity matrix built from a matrix arrangement of one period of the regressor/input signal. From the sensitivity matrix one may obtain the stability matrix $\mathbf{S}$ [10],

$$\mathbf{S} = \mathbf{I} - \mu\, \mathbf{G} \quad (27)$$

which controls the convergence of the LMS.
A simple criterion for the convergence of the filter is

$$0 < \mu < \frac{2\cos\theta}{\rho} \quad (28)$$

where $\lambda_M = \rho\, e^{j\theta}$ is the eigenvalue of the matrix $\mathbf{G}$ that minimizes the quotient $\cos\theta/\rho$, with $\mathbf{G}$ related to the stability matrix $\mathbf{S}$ according to Equation (27).
Here, we have applied sequential PU of the coefficients of the M-length adaptive filter to lower the computational complexity. In so doing, the decimating factor N of the sequential PU strategy reduces the number of operations because only M/N coefficients are updated per cycle.
Then, the ratio between the bounds on the step-size parameter when N > 1 (Seq. PU P-LMS) and N = 1 (P-LMS), which we name the gain in step-size, is given by

$$G_\mu = \frac{\text{bound}(\mu)_{\text{SeqPU P-LMS}}}{\text{bound}(\mu)_{\text{P-LMS}}} = \frac{\min\left( \dfrac{2\cos\theta_{\text{SeqPU P-LMS}}}{\rho_{\text{SeqPU P-LMS}}} \right)}{\min\left( \dfrac{2\cos\theta_{\text{P-LMS}}}{\rho_{\text{P-LMS}}} \right)}. \quad (29)$$

With the subindex P-LMS, we refer to the full-update periodic LMS proposed by Parra et al. [9], whereas the SeqPU P-LMS subindex denotes the sequential PU strategy proposed in Section 2.2.
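One way this bound and ratio could be evaluated numerically is sketched below: $\mathbf{G}$ is built as in Section 2.1 for an arbitrary single tone, and Equation (28) is taken as the minimum of $2\cos\theta/\rho$ over the significant eigenvalues of $\mathbf{G}$. All parameter values are illustrative, the eigenvalue-magnitude threshold is an implementation choice (for a pure tone, $\mathbf{G}$ is rank-deficient), and note that $\mathbf{G}$ itself depends mildly on $\mu$ through $\mathbf{Q}$, so the bound is evaluated at the $\mu$ used to build $\mathbf{Q}$:

```python
import numpy as np

def build_G(base_x, M, mu):
    """G = X^T Q^{-1} X for a periodic reference (Eqs. (9)-(11), (14))."""
    P = len(base_x)
    xvec = lambda m: base_x[(m - np.arange(M)) % P]
    X = np.stack([xvec(i) for i in range(P)])
    T = np.zeros((P, P))
    for i in range(P):
        for j in range(i):
            T[i, j] = xvec(j) @ xvec(i)
    return X.T @ np.linalg.solve(np.eye(P) + mu * T, X)

def stepsize_bound(G, tol=1e-8):
    """min over significant eigenvalues of 2*cos(theta)/rho, Eq. (28)."""
    lam = np.linalg.eigvals(G)
    lam = lam[np.abs(lam) > tol * np.abs(lam).max()]   # drop numerically null modes
    return np.min(2 * np.cos(np.angle(lam)) / np.abs(lam))

P, M, N, mu = 32, 8, 2, 1e-4
x = np.cos(2 * np.pi * np.arange(P) / P)               # one period of a single tone

bound_full = stepsize_bound(build_G(x, M, mu))           # P-LMS
bound_pu = stepsize_bound(build_G(x[::N], M // N, mu))   # Seq. PU P-LMS
gain = bound_pu / bound_full                             # Eq. (29)
print(round(gain, 2))                                    # close to N**2 for this tone
```

For this particular tone the computed ratio comes out close to $N^2$, consistent with the asymptotic behavior reported in Section 3.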

2.4. Gain in Step-Size of Standard Version of the Sequential PU LMS Algorithm

In previous works [29], we carried out a similar analysis on the basis of the eigenvalues of the autocorrelation matrix of the input signal. As we will see, the shape of the gain in step-size differs markedly between the two cases (LMS vs. P-LMS).

Eigenvalues of the Autocorrelation Matrix of a P-Periodic Signal Composed of K Pure Harmonics

Let us assume that the input signal $x(n)$ of an adaptive filter is defined as follows:

$$x(n) = \sum_{k=1}^{K} C_k \cos(2\pi k f_0 n + \varphi_k) \quad (30)$$

where $f_0$ is the fundamental frequency normalized by the sampling rate and $\varphi_k$ ($k = 1, \ldots, K$) are the initial random phases. Phases are uniformly distributed from 0 to 2π radians and mutually independent. Finally, $C_k$ ($k = 1, \ldots, K$) are the amplitudes of the harmonics. The autocorrelation function of the input signal $x(n)$ can be expressed as

$$r_{xx}(\tau) = \sum_{k=1}^{K} \frac{C_k^2}{2} \cos(2\pi k f_0 \tau) \quad (31)$$

Therefore, the autocorrelation matrix of the input vector $\mathbf{x}(n)$ can be expressed as the sum of K matrices $\mathbf{R}_k$ of size M × M as follows:

$$\mathbf{R} = \sum_{k=1}^{K} C_k^2\, \mathbf{R}_k \quad (32)$$

where

$$\mathbf{R}_k = \frac{1}{2} \begin{bmatrix} 1 & \cos(2\pi k f_0) & \cdots & \cos(2\pi k (M-1) f_0) \\ \cos(2\pi k f_0) & 1 & \cdots & \cos(2\pi k (M-2) f_0) \\ \vdots & \vdots & \ddots & \vdots \\ \cos(2\pi k (M-1) f_0) & \cos(2\pi k (M-2) f_0) & \cdots & 1 \end{bmatrix} \quad (33)$$

The largest eigenvalue $\lambda_{k,\max}(k f_0)$ of each matrix $\mathbf{R}_k$ is given by [33]

$$\lambda_{k,\max}(k f_0) = \max\left\{ \frac{1}{4}\left[ M \pm \frac{\sin(M\, 2\pi k f_0)}{\sin(2\pi k f_0)} \right] \right\} \quad (34)$$
where the subscript k refers to the index of the submatrix R k .
Let us consider a matrix made up of a sum of matrices of the same dimensions. According to the triangle inequality ([34], Appendix E), the largest eigenvalue of the sum is bounded by the sum of the largest eigenvalues of the summands. As a result, the largest eigenvalue of $\mathbf{R}$, which we refer to as $\lambda_{tot,\max}$, is bounded according to

$$\lambda_{tot,\max} \le \sum_{k=1}^{K} C_k^2\, \lambda_{k,\max}(k f_0) = \sum_{k=1}^{K} C_k^2 \max\left\{ \frac{1}{4}\left[ M \pm \frac{\sin(M\, 2\pi k f_0)}{\sin(2\pi k f_0)} \right] \right\} \quad (35)$$
where the subscript tot is used to refer to the autocorrelation matrix R [29].
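The closed-form eigenvalue of Equation (34) can be checked numerically against a directly constructed $\mathbf{R}_k$; the single unit-amplitude tone and the values of M and $f_0$ below are chosen arbitrarily:

```python
import numpy as np

def lam_max_closed(M, f0):
    """Largest eigenvalue of R_k for a unit tone, Eq. (34)."""
    s = np.sin(M * 2 * np.pi * f0) / np.sin(2 * np.pi * f0)
    return max(0.25 * (M + s), 0.25 * (M - s))

M, f0 = 16, 0.07
tau = np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
R_k = 0.5 * np.cos(2 * np.pi * f0 * tau)          # Toeplitz matrix of Eq. (33)

print(np.isclose(np.linalg.eigvalsh(R_k).max(), lam_max_closed(M, f0)))  # → True
```

The agreement is exact because a pure-tone autocorrelation matrix has rank two, and its two nonzero eigenvalues are precisely $(M \pm \sin(M\,2\pi f_0)/\sin(2\pi f_0))/4$.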
As far as the Seq. PU LMS adaptive algorithm is concerned, the convergence requirements that the entire filter has to meet can be translated into the joint convergence of N logical (M/N)-length subfilters updated by an N-decimated input signal $x(Nn)$ [29]. Adjusting the above approach to the case of the sequential PU LMS, where the length of the autocorrelation matrix is M/N and the sampling rate is divided by N, we deal with K matrices $\mathbf{R}_k^N$ of dimension (M/N) × (M/N) obtained by substituting in Equation (33) the number of coefficients M by M/N.
Thus, the largest eigenvalue $\lambda_{k,\max}^N(k f_0)$ of each matrix $\mathbf{R}_k^N$ can be expressed as follows:

$$\lambda_{k,\max}^N(k f_0) = \max\left\{ \frac{1}{4}\left[ \frac{M}{N} \pm \frac{\sin\!\left( \frac{M}{N}\, 2\pi k N f_0 \right)}{\sin(2\pi k N f_0)} \right] \right\} \quad (36)$$

Considering the triangle inequality, the largest eigenvalue $\lambda_{tot,\max}^N$ of the (M/N) × (M/N) matrix $\mathbf{R}^N = \sum_{k=1}^{K} C_k^2\, \mathbf{R}_k^N$ is bounded by

$$\lambda_{tot,\max}^N \le \sum_{k=1}^{K} C_k^2\, \lambda_{k,\max}^N(k f_0) = \sum_{k=1}^{K} C_k^2 \max\left\{ \frac{1}{4}\left[ \frac{M}{N} \pm \frac{\sin\!\left( \frac{M}{N}\, 2\pi k N f_0 \right)}{\sin(2\pi k N f_0)} \right] \right\} \quad (37)$$
It should be noticed that for N = 1 the sequential PU LMS algorithm reduces to the conventional full-update LMS algorithm and Equations (36) and (37) reduce to Equations (34) and (35), respectively.
The quotient between the limits of the step-sizes μ in two different cases (N > 1, sequential PU LMS and N = 1, conventional LMS), defines the step-size gain Gµ as the factor by which one can multiply the step-size meeting convergence requirements [29].
$$G_\mu(K, f_0, M, N) = \frac{\text{bound}(\mu)_{\text{SeqPU LMS}}}{\text{bound}(\mu)_{\text{LMS}}} = \frac{2/\lambda_{tot,\max}^N}{2/\lambda_{tot,\max}} = \frac{\displaystyle\sum_{k=1}^{K} C_k^2\, \lambda_{k,\max}(k f_0)}{\displaystyle\sum_{k=1}^{K} C_k^2\, \lambda_{k,\max}^N(k f_0)} = \frac{\displaystyle\sum_{k=1}^{K} C_k^2 \max\left\{ \frac{1}{4}\left[ M \pm \frac{\sin(M\, 2\pi k f_0)}{\sin(2\pi k f_0)} \right] \right\}}{\displaystyle\sum_{k=1}^{K} C_k^2 \max\left\{ \frac{1}{4}\left[ \frac{M}{N} \pm \frac{\sin\!\left( \frac{M}{N}\, 2\pi k N f_0 \right)}{\sin(2\pi k N f_0)} \right] \right\}}. \quad (38)$$

The gain in step-size given by Equation (38) depends on the length of the filter M and on the decimation factor N. In order to visualize this double dependence, we set the number of harmonics of the input signal to K = 1. In so doing, the gain in step-size yields

$$G_\mu(1, f_0, M, N) = \frac{\text{bound}(\mu)_{\text{SeqPU LMS}}}{\text{bound}(\mu)_{\text{LMS}}} = \frac{\max\left\{ \frac{1}{4}\left[ M \pm \frac{\sin(M\, 2\pi f_0)}{\sin(2\pi f_0)} \right] \right\}}{\max\left\{ \frac{1}{4}\left[ \frac{M}{N} \pm \frac{\sin\!\left( \frac{M}{N}\, 2\pi N f_0 \right)}{\sin(2\pi N f_0)} \right] \right\}} \quad (39)$$
The step-size gain for a pure tone, when different decimation factors N and different filter lengths M are considered, is shown in Figure 2 and Figure 3 [29]. According to Figure 2 and Figure 3, one can infer that the step-size can be increased by a factor of up to N if certain frequencies are not present in the input signal. These frequencies are those that exhibit notches in the gain in step-size. The location of these critical frequencies, as well as the width and the number of the notches, can be analyzed as a function of the length of the adaptive filter M, the decimating factor N, and the sampling rate $F_s$. Thus, the step-size parameter can be multiplied by a factor of N in order to ensure that the sequential PU LMS algorithm converges as fast as the full-update LMS algorithm. To afford that increase in convergence rate, the undesired disturbance must be free of significant components placed at the frequency notches in the step-size gain.
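This behavior can be reproduced by evaluating Equation (39) over a frequency grid. The values M = 160 and N = 2 below mirror one of the configurations discussed here, and the grid deliberately stops short of $N f_0 = 0.5$, where $\sin(2\pi N f_0)$ vanishes and the expression becomes an indeterminate form at the notch center:

```python
import numpy as np

def lam_max(M, f0):
    """max{(1/4)[M ± sin(M 2π f0)/sin(2π f0)]}, Eqs. (34)/(36)."""
    s = np.sin(M * 2 * np.pi * f0) / np.sin(2 * np.pi * f0)
    return 0.25 * (M + np.abs(s))

def gain_single_tone(f0, M, N):
    """G_mu(1, f0, M, N), Eq. (39): ratio of the two eigenvalue bounds."""
    return lam_max(M, f0) / lam_max(M // N, N * f0)

M, N = 160, 2
f = np.linspace(0.01, 0.24, 470)               # normalized frequencies, N*f0 < 0.5
g = gain_single_tone(f, M, N)                  # gain curve with notches

print(round(gain_single_tone(0.2, M, N), 3))   # → 2.0: away from notches, gain = N
```

Plotting `g` against `f` reproduces the notch pattern of Figure 2 and Figure 3: the curve sits near N except at the critical frequencies, where it dips sharply.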

3. Results

3.1. Dependence on the Frequency of the Gain in Step-Size for the Periodic LMS Algorithm with Sequential Partial Updates

In order to determine the dependence of the gain in step-size, given by Equation (29), on (a) the frequency, (b) the decimating factor N, and (c) the length of the filter M, we carried out the two experiments described in the upcoming subsubsections.

3.1.1. Gain in Step-Size vs. Frequency for a Fixed Value of the Decimating Factor ( N = 2 ) and Variable Length of the Filter M

In this experiment, we worked with a single-tone discrete-time signal. The digital frequency was set by normalizing the analog frequency by a sampling frequency $F_s = 8000$ samples/s:

$$x(n) = A \cos\left( 2\pi \frac{F_0}{F_s}\, n \right). \quad (40)$$

Because we want to deal with periodic signals (one may consider that a discrete-time sinusoidal signal only shows a periodic, in samples, behavior under certain conditions), we have considered a range of values for the period P from 512 down to 8 samples (in steps of 8 samples), corresponding, respectively, to analog frequencies $F_0$ from 15.6 to 1000 Hz. The length of the filter has been set to M = 40, 80, 160, 240, 320 coefficients. The decimating factor of the sequential partial-updates strategy has been set to N = 2.
Thus, from the signal $x(n)$ and its decimated version $x_d(n) = x(Nn)$, we have derived the matrices $\mathbf{G}$ and $\mathbf{G}_d$ from Equations (14) and (24), respectively. These matrices are necessary to implement the full-update and partial-update adaptive strategies described in Section 2.1 and Section 2.2, respectively. Then, the ratio between the bounds on the step-size given by Equation (29) has been obtained and drawn. In so doing, we have the gain in step-size affordable for every frequency, shown in Figure 4.

3.1.2. Gain in Step-Size vs. Frequency for a Fixed Value of the Length of the Filter ( M = 160 ) Coefficients and Variable Decimating Factor N

In this experiment, whose results appear in Figure 5, we repeat the main idea of the previous subsection but set the length of the filter to M = 160 coefficients, whereas the decimating factor varies as N = 1 ,   2 ,   4 ,   8 .
The signal used is the same single tone given by Equation (40), sampled at $F_s = 8000$ samples/s. The range of values for the period P varies from 512 down to 32 samples (in steps of 8 samples), corresponding, respectively, to analog frequencies $F_0$ from 15.6 to 250 Hz.
From Figure 4 and Figure 5, one may infer that the gain in step-size tends asymptotically toward the squared decimating factor $N^2$ as frequency increases. Moreover, the gain in step-size shown in these figures does not present frequency-dependent notches with the matrix-based formulation of the P-LMS.

3.2. Algorithms Comparison

This section is devoted to a comparison of the learning curves of four different strategies dealing with the active noise control of periodic disturbances:
  • Algorithm 1: standard LMS algorithm
  • Algorithm 2: LMS algorithm with sequential PU, applying, optionally, the gain in step-size.
  • Algorithm 3: Periodic LMS algorithm, based on a matrix formulation.
  • Algorithm 4: Periodic LMS algorithm, based on a matrix formulation. Sequential PU (and gain in step-size) are applied again as in alternative 2.
Figure 6 shows the block diagram of the noise control problem addressed with the four adaptive strategies. The sampling frequency is set to 8000 samples/s. We execute 1920 iterations of the adaptive algorithms, and the weights of the filters are reset to zero after 800 iterations so as to evaluate a second convergence process. The step-size chosen for the experiment is set near the maximum value that ensures convergence.
In the experiment, we deal with two different versions of the desired signal $d(n)$. In both cases, referred to by subindexes 1 and 2, the desired signal consists of three different harmonics:

$$d_1(n) = 2\cos\left(2\pi \tfrac{1}{8}\, n\right) + 3\cos\left(2\pi \tfrac{1}{4}\, n\right) + 4\cos\left(2\pi \tfrac{3}{8}\, n\right) + awgn(n) \quad (41)$$

$$d_2(n) = 2\cos\left(2\pi \tfrac{3}{16}\, n\right) + 3\cos\left(2\pi \tfrac{5}{16}\, n\right) + 4\cos\left(2\pi \tfrac{7}{16}\, n\right) + awgn(n) \quad (42)$$

where $awgn(n)$ is an additive white Gaussian noise that makes $d(n)$ have a signal-to-noise ratio of 45 dB.
The reference signal $x(n)$ is also considered in this example in two different versions,

$$x_1(n) = \cos\left(2\pi \tfrac{1}{8}\, n\right) + \cos\left(2\pi \tfrac{1}{4}\, n\right) + \cos\left(2\pi \tfrac{3}{8}\, n\right) \quad (43)$$

$$x_2(n) = \cos\left(2\pi \tfrac{3}{16}\, n\right) + \cos\left(2\pi \tfrac{5}{16}\, n\right) + \cos\left(2\pi \tfrac{7}{16}\, n\right) \quad (44)$$

being in both cases a set of three tones of unit amplitude.
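For reproducibility, the two signal versions of Equations (41)–(44) can be generated as follows; the exact scaling of the AWGN for a 45 dB SNR is an implementation detail not spelled out in the text, so the version here is one reasonable choice:

```python
import numpy as np

n = np.arange(1920)                              # iterations used in the experiment
rng = np.random.default_rng(0)

def tones(amps, freqs):
    """Sum of cosines amps[i] * cos(2*pi*freqs[i]*n)."""
    return sum(a * np.cos(2 * np.pi * f * n) for a, f in zip(amps, freqs))

x1 = tones([1, 1, 1], [1/8, 1/4, 3/8])           # Eq. (43)
x2 = tones([1, 1, 1], [3/16, 5/16, 7/16])        # Eq. (44)

d1_clean = tones([2, 3, 4], [1/8, 1/4, 3/8])     # Eq. (41), noiseless part
snr_db = 45
noise_std = np.sqrt(np.mean(d1_clean**2) / 10**(snr_db / 10))
d1 = d1_clean + noise_std * rng.standard_normal(n.size)

snr_est = 10 * np.log10(np.mean(d1_clean**2) / np.mean((d1 - d1_clean)**2))
print(round(snr_est))                            # ≈ 45
```

The second desired signal $d_2(n)$ of Equation (42) is obtained the same way by swapping in the frequencies 3/16, 5/16 and 7/16.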
The two versions of the reference and desired signals are designed to explain the behavior of the adaptive algorithm with regard to the gain in step-size in the second and fourth alternatives. When the second algorithm (LMS with Seq. PU) is applied, one may infer from Figure 7 that the gain in step-size can be taken at its full strength $G_\mu = N$ for the second version of the reference signal, $x_2(n)$ (marked with green circles), but not for the first version, $x_1(n)$ (marked with red circles), because some or all of its harmonics are located in notches of the gain. More precisely, if the decimating factor N is set to 2, the second harmonic of $x_1(n)$ appears at a frequency affected by a notch (around 0.25 in normalized frequency). If the decimating factor N is set to 4, all three harmonics of $x_1(n)$ appear at frequencies where notches in the gain in step-size are present.
Nevertheless, if the fourth algorithm (periodic P-LMS with Seq. PU) is applied, we see from Figure 8 that the gain in step-size does not present notches as long as frequencies are above the very low frequency range. Then, in this fourth adaptive strategy, both versions of the reference signal x 1 n and x 2 n are expected to provide similar results.
In the comparison of the four algorithms presented below (Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14), the performance measure used is the instantaneous squared error in logarithmic scale (dB). All simulation results are obtained by averaging 50 independent runs.
First, we work with x 1 n and d 1 n , that is, the input signals whose frequencies are marked in red in Figure 7 and Figure 8.
In the first example, we compare the four adaptive algorithms previously listed, making $N = G_\mu = 1$. As we apply neither PU nor gain in step-size, the second and fourth alternatives are identical, respectively, to the first and third algorithms. In Figure 9, we confirm that the learning curves of the four adaptive strategies are similar in terms of convergence rate.
In the next step, we apply PU by setting a decimating factor N = 2 . If we do not apply gain in step-size to compensate for the inherent reduction in convergence rate, that is, we make G μ = 1 , the second and fourth learning curves should reduce the convergence rate by a factor of 2, as we can confirm in Figure 10.
Finally, we apply PU by setting N = 2 to reduce computational costs, but we also make $G_\mu = 2$ to compensate for the reduction in convergence rate. As expected (and as we can confirm in Figure 11), the second strategy, denoted (b), diverges because the gain in step-size cannot be applied at full strength $G_\mu = N$ due to the notch that appears at the second harmonic of $x_1(n)$ (see Figure 7). On the other hand, the fourth strategy (d) works properly because the theoretical limit of the gain in step-size is well above the value used.
Now, let us carry out the second version of the experiment, dealing with $x_2(n)$ and $d_2(n)$. Here, the input signals contain harmonics at the frequencies marked in green in Figure 7 and Figure 8.
In Figure 12, the comparison of the four adaptive algorithms is carried out by making N = G μ = 1 . We obtain results similar to those shown in Figure 9 with the first version of the experiment.
In Figure 13, we show the results achieved with a decimating factor N = 2 and a gain in step-size set to G μ = 1 ; as expected, the second and fourth learning curves reduce the convergence rate by a factor of 2, as happened in Figure 10.
The difference of the two versions of the experiments arises if we apply PU by setting N = 2 and we also make G μ = 2 to compensate for the reduction in convergence rate. Here, both PU strategies—denoted as (b) and (d)—in Figure 14 converge and succeed in the compensation of the inherent reduction of convergence rate due to PU.

4. Discussion

First of all, it is important to note that active noise control of periodic disturbances is a common task that many control systems have to deal with. As already said, narrowband active noise control is a topic of great current importance because the attenuation of noises from engines, compressors, turbines, fans, or propellers is a problem addressed worldwide by many researchers.
The gain in step-size is a concept addressed in previous works in the context of the sequential PU LMS algorithm leading to the convergence of a standard FIR filter. The main idea behind this topic is that the inherent reduction of convergence speed due to partial updates can be compensated for by increasing the step-size parameter µ, because the µ bound seems to be increased by a factor of up to the decimating factor N. Nevertheless, the gain in step-size is not a constant of value N, but exhibits notches whose number, width, and location can be predicted and, consequently, avoided.
Then, we have considered the alternative matrix-based approach to the LMS formulation proposed by Parra et al. [9], which we refer to as the periodic LMS (P-LMS). In order to reduce its computational complexity by a decimating factor N, sequential PU have been proposed in the framework of the P-LMS algorithm. As expected, PU reduce the number of operations per cycle but, as a drawback, the convergence rate slows down proportionally to N.
On this point, we have looked into the gain in step-size when sequential PU are considered in the P-LMS. We conducted a study of the theoretical framework of the adaptive strategy as well as several experiments to conclude that the gain in step-size does not exhibit notches. The frequency-dependent shape of the gain in step-size of the P-LMS tends to N 2 for high frequencies but shows a lower value in the low frequency range. Nevertheless, there is no evidence of the presence of the notches that were visible when the standard (and not the periodic LMS) was considered.
To sum up, the results show that the reduction in computational costs associated with PU need not be achieved at the expense of a reduction in convergence rate. This statement is justified by the existence of a gain in step-size when PU are considered and, more importantly, by the matrix-based formulation of the LMS algorithm that we have referred to as P-LMS. This gain in step-size does not show notches in the frequency domain, as it does in the case of the standard LMS formulation, but only a poor behavior in the low-frequency range. Thus, paying attention to the low-frequency components, one may take full advantage of the application of the step-size gain.

Author Contributions

Conceptualization, P.R.L. and R.M.F.; methodology, P.R.L. and G.P.-N.; software, P.R.L., R.M.F., and F.A.M.; validation, P.R.L. and G.P.-N.; formal analysis, P.R.L. and R.M.F.; investigation, P.R.L. and R.M.F.; resources, P.R.L., R.M.F., and F.A.M.; data curation, P.R.L., R.M.F., and F.A.M.; writing—original draft preparation, P.R.L.; writing—review and editing, P.R.L. and G.P.-N.; visualization, R.M.F. and F.A.M.; supervision, P.R.L. and G.P.-N.; project administration, P.R.L. and G.P.-N.; funding acquisition, P.R.L. and G.P.-N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Widrow, B. Adaptive Filters. In Aspects of Networks and Systems Theory; Holt, Rinehart and Winston: New York, NY, USA, 1970; pp. 563–587.
2. Widrow, B.; McCool, J.M.; Larimore, M.G.; Johnson, C.R. Stationary and Nonstationary Learning Characteristics of the LMS Adaptive Filter. Proc. IEEE 1976, 64, 1151–1162.
3. Widrow, B.; Stearns, S.D. Adaptive Signal Processing; Prentice-Hall Signal Processing Series; Prentice-Hall: Englewood Cliffs, NJ, USA, 1985; ISBN 978-0-13-004029-9.
4. Sang, E.-F.; Yeh, H.-G. Modified LMS Algorithms for High Speed Underwater Acoustic Signal Processing. In Proceedings of the OCEANS ’93, Victoria, BC, Canada, 18–21 October 1993; Volume 2, pp. II259–II264.
5. Ahmed, Q.Z.; Liu, W.; Yang, L.-L. Least Mean Square Aided Adaptive Detection in Hybrid Direct-Sequence Time-Hopping Ultrawide Bandwidth Systems. In Proceedings of the VTC Spring 2008—IEEE Vehicular Technology Conference, Singapore, 11–14 May 2008; pp. 1062–1066.
6. Shlezinger, N.; Todros, K. Performance Analysis of LMS Filters with Non-Gaussian Cyclostationary Signals. Signal Process. 2019, 154, 260–271.
7. Bershad, N.J.; Eweda, E.; Bermudez, J.C.M. Stochastic Analysis of the Diffusion LMS Algorithm for Cyclostationary White Gaussian Inputs. Signal Process. 2021, 185, 108081.
8. Bermudez, J.C.M.; Bershad, N.J.; Eweda, E. Stochastic Analysis of the LMS Algorithm for Cyclostationary Colored Gaussian Inputs. Signal Process. 2019, 160, 127–136.
9. Parra, I.E.; Hernandez, W.; Fernandez, E. On the Convergence of LMS Filters under Periodic Signals. Digit. Signal Process. 2013, 23, 808–816.
10. Hernandez, W.; Dominguez, M.E.; Sansigre, G. Analysis of the Error Signal of the LMS Algorithm. IEEE Signal Process. Lett. 2010, 17, 229–232.
11. Lueg, P. Process of Silencing Sound Oscillations. U.S. Patent US2043416A, 9 June 1936.
12. Widrow, B.; Glover, J.R.; McCool, J.M.; Kaunitz, J.; Williams, C.S.; Hearn, R.H.; Zeidler, J.R.; Dong, E., Jr.; Goodlin, R.C. Adaptive Noise Cancelling: Principles and Applications. Proc. IEEE 1975, 63, 1692–1716.
13. Warnaka, G.E. Active Attenuation of Noise: The State of the Art. Noise Control Eng. 1982, 18, 100–110.
14. Guicking, D. On the Invention of Active Noise Control by Paul Lueg. J. Acoust. Soc. Am. 1990, 87, 2251.
15. Wang, Z.; Xiao, Y.; Ma, L.; Khorasani, K.; Ma, Y. Multi-Frequency Narrowband Active Noise Control with Online Feedback-Path Modeling Using IIR Adaptive Notch Filters. In Proceedings of the 2019 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), Bangkok, Thailand, 11–14 November 2019; pp. 293–296.
16. Zhu, W.; Luo, L.; Sun, J.; Christensen, M.G. A New Variable Step Size Algorithm Based Hybrid Active Noise Control System for Gaussian Noise with Impulsive Interference. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020; pp. 1072–1076.
17. Kim, D.W.; Lee, M.; Park, P. A Robust Active Noise Control System with Stepsize Scaler in Impulsive Noise Environments. In Proceedings of the 2019 Chinese Control Conference (CCC), Guangzhou, China, 27–30 July 2019; pp. 3358–3362.
18. Yuan, J.; Li, J.; Zhang, A.; Zhang, X.; Ran, J. Active Noise Control System Based on the Improved Equation Error Model. Acoustics 2021, 3, 24.
19. Jia, Z.; Zheng, X.; Zhou, Q.; Hao, Z.; Qiu, Y. A Hybrid Active Noise Control System for the Attenuation of Road Noise Inside a Vehicle Cabin. Sensors 2020, 20, 7190.
20. Nelson, P.A.; Elliott, S.J. Active Control of Sound; Elsevier Science: Amsterdam, The Netherlands, 1992; ISBN 978-0-12-515425-3.
21. Kuo, S.M.; Morgan, D.R. Active Noise Control Systems: Algorithms and DSP Implementations; Wiley Series in Telecommunications and Signal Processing; Wiley: New York, NY, USA, 1996; ISBN 978-0-471-13424-4.
22. Hansen, C.; Snyder, S.; Qiu, X.; Brooks, L.; Moreau, D. Active Control of Noise and Vibration; CRC Press: Boca Raton, FL, USA, 2012; ISBN 978-1-4822-3400-8.
23. Chaplin, G.B.B. Waveform Synthesis—The Essex Solution to Cancelling Periodic Noise and Vibration. J. Acoust. Soc. Am. 1983, 74, S25.
24. Chaplin, G.B.B.; Smith, R.A.; Bramer, T.P.C. Method and Apparatus for Reducing Repetitive Noise Entering the Ear. J. Acoust. Soc. Am. 1987, 82, 2166.
25. Jafari, S.; Ioannou, P.; Fitzpatrick, B.; Wang, Y. Robustness and Performance of Adaptive Suppression of Unknown Periodic Disturbances. IEEE Trans. Autom. Control 2015, 60, 2166–2171.
26. Lee, H.-S.; Kim, S.-E.; Lee, J.-W.; Song, W.-J. A Variable Step-Size Diffusion LMS Algorithm for Distributed Estimation. IEEE Trans. Signal Process. 2015, 63, 1808–1820.
27. Turan, C.; Salman, M.S.; Eleyan, A. A Block LMS-Type Algorithm with a Function Controlled Variable Step-Size for Sparse System Identification. In Proceedings of the 2015 57th International Symposium ELMAR, Zadar, Croatia, 28–30 September 2015; pp. 1–4.
28. Turan, C.; Salman, M.S.; Haddad, H. A Transform Domain Sparse LMS-Type Algorithm for Highly Correlated Biomedical Signals in Sparse System Identification. In Proceedings of the 2015 IEEE 35th International Conference on Electronics and Nanotechnology (ELNANO), Kyiv, Ukraine, 22–24 April 2015; pp. 413–416.
29. Ramos, P.; Torrubia, R.; López, A.; Salinas, A.; Masgrau, E. Step Size Bound of the Sequential Partial Update LMS Algorithm with Periodic Input Signals. EURASIP J. Audio Speech Music Process. 2007, 2007, 1–15.
30. Douglas, S.C. Adaptive Filters Employing Partial Updates. IEEE Trans. Circuits Syst. II 1997, 44, 209–216.
31. Godavarti, M.; Hero, A.O. Partial Update LMS Algorithms. IEEE Trans. Signal Process. 2005, 53, 2382–2399.
32. Ramos Lorente, P.; Martín Ferrer, R.; Arranz Martínez, F.; Palacios-Navarro, G. Modified Filtered-X Hierarchical LMS Algorithm with Sequential Partial Updates for Active Noise Control. Appl. Sci. 2021, 11, 344.
33. Kuo, S.M.; Tahernezhadi, M.; Hao, W. Convergence Analysis of Narrow-Band Active Noise Control System. IEEE Trans. Circuits Syst. II Analog Digit. Signal Process. 1999, 46, 220–223.
34. Haykin, S.S. Adaptive Filter Theory, 5th ed.; Pearson: Upper Saddle River, NJ, USA, 2014; ISBN 978-0-13-267145-3.
Figure 1. Scheme of the Seq. PU adaptive algorithm, indicating the coefficients to be updated in each cycle as well as the related samples of the regressor signal.
Figure 2. Gain in step-size for decimation factors, N = 1, 2, 4, and 8 for a single tone. The number of coefficients of the filter is set to M = 256.
Figure 3. Gain in step-size with filter lengths, M = 8, 32, and 128 for a single tone. The decimating factor is set to N = 2.
Figure 4. Gain in step-size for a single tone when using the matrix-based version (from Parra et al.) of the periodic LMS algorithm with seq. partial updates. Decimation factor is set to N = 2. The length of the filter M varies from 40 to 320 taps.
Figure 5. Gain in step-size for a single tone when using the matrix-based version (from Parra et al.) of the periodic LMS algorithm with seq. partial updates. The length of the filter is set to 160 taps. The decimation factor takes values 1, 2, 4 and 8.
Figure 6. Block diagram of the noise attenuation experiment used to compare different adaptive algorithms.
Figure 7. Gain in step-size for a single tone when using sequential PU with the LMS algorithm. The length of the filter is set to 64 taps. The decimation factor takes values 2 and 4. Circles mark frequencies where reference and desired signals have significant harmonics; red circles mark components of x₁(n) and green circles mark components of x₂(n).
Figure 8. Gain in step-size for a single tone when using sequential PU with the periodic LMS algorithm (matrix-based). The filter length is set to 64 taps. The decimation factor takes values 2 and 4. Apart from the very low frequency range, the step-size gain can be approximated by N². Circles mark frequencies where reference and desired signals have significant harmonics; red circles mark components of x₁(n) and green circles mark components of x₂(n).
Figure 9. Learning curves comparison. Version 1 of the experiment: desired signal d₁(n) and reference signal x₁(n). (a) LMS, (b) LMS with Seq. PU; N = 1 and Gμ = 1, (c) P-LMS, (d) P-LMS with Seq. PU; N = 1 and Gμ = 1. Average of 50 runs. The performance measure used is the instantaneous squared error in logarithmic scale (dB).
Figure 10. Learning curves comparison. Version 1 of the experiment: desired signal d₁(n) and reference signal x₁(n). (a) LMS, (b) LMS with Seq. PU; N = 2 and Gμ = 1, (c) P-LMS, (d) P-LMS with Seq. PU; N = 2 and Gμ = 1. Average of 50 runs. The performance measure used is the instantaneous squared error in logarithmic scale (dB).
Figure 11. Learning curves comparison. Version 1 of the experiment: desired signal d₁(n) and reference signal x₁(n). (a) LMS, (b) LMS with Seq. PU; N = 2 and Gμ = 2, (c) P-LMS, (d) P-LMS with Seq. PU; N = 2 and Gμ = 2. Average of 50 runs. The performance measure used is the instantaneous squared error in logarithmic scale (dB).
Figure 12. Learning curves comparison. Version 2 of the experiment: desired signal d₂(n) and reference signal x₂(n). (a) LMS, (b) LMS with Seq. PU; N = 1 and Gμ = 1, (c) P-LMS, (d) P-LMS with Seq. PU; N = 1 and Gμ = 1. Average of 50 runs. The performance measure used is the instantaneous squared error in logarithmic scale (dB).
Figure 13. Learning curves comparison. Version 2 of the experiment: desired signal d₂(n) and reference signal x₂(n). (a) LMS, (b) LMS with Seq. PU; N = 2 and Gμ = 1, (c) P-LMS, (d) P-LMS with Seq. PU; N = 2 and Gμ = 1. Average of 50 runs. The performance measure used is the instantaneous squared error in logarithmic scale (dB).
Figure 14. Learning curves comparison. Version 2 of the experiment: desired signal d₂(n) and reference signal x₂(n). (a) LMS, (b) LMS with Seq. PU; N = 2 and Gμ = 2, (c) P-LMS, (d) P-LMS with Seq. PU; N = 2 and Gμ = 2. Average of 50 runs. The performance measure used is the instantaneous squared error in logarithmic scale (dB).
Table 1. Logical M/N-length subfilter updated at every iteration.

Iteration | Index | Coefficients from w to be updated
--------- | ----- | ---------------------------------
1 | 1 | w_d = {w_1, w_{N+1}, w_{2N+1}, …, w_{M−m}}
2 | 2 | w_d = {w_2, w_{N+2}, w_{2N+2}, …, w_{M−m+1}}
3 | 3 | w_d = {w_3, w_{N+3}, w_{2N+3}, …, w_{M−m+2}}
⋮ | ⋮ | ⋮
N − 1 | N − 1 | w_d = {w_{N−1}, w_{2N−1}, w_{3N−1}, …, w_{M−m+N−2}}
N | N | w_d = {w_N, w_{2N}, w_{3N}, …, w_{M−m+N−1}}
N + 1 | 1 | w_d = {w_1, w_{N+1}, w_{2N+1}, …, w_{M−m}}
⋮ | ⋮ | ⋮
2N − 1 | N − 1 | w_d = {w_{N−1}, w_{2N−1}, w_{3N−1}, …, w_{M−m+N−2}}
2N | N | w_d = {w_N, w_{2N}, w_{3N}, …, w_{M−m+N−1}}
2N + 1 | 1 | w_d = {w_1, w_{N+1}, w_{2N+1}, …, w_{M−m}}
⋮ | ⋮ | ⋮
kN − 1 | N − 1 | w_d = {w_{N−1}, w_{2N−1}, w_{3N−1}, …, w_{M−m+N−2}}
kN | N | w_d = {w_N, w_{2N}, w_{3N}, …, w_{M−m+N−1}}
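The cyclic schedule of Table 1 can also be generated programmatically. The sketch below is a hypothetical helper (not code from the paper) using zero-based indices, so coefficient w_1 corresponds to index 0:

```python
def pu_schedule(M, N, iterations):
    """Return, for each iteration k = 1..iterations, the zero-based
    indices of the M/N-length logical subfilter updated at that
    iteration: block d = (k - 1) % N updates d, d + N, d + 2N, ..."""
    return [list(range((k - 1) % N, M, N)) for k in range(1, iterations + 1)]
```

Every N consecutive iterations the N disjoint blocks together cover all M coefficients, and the schedule then repeats, matching the cyclic structure of the table.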
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Ramos Lorente, P.; Martín Ferrer, R.; Arranz Martínez, F.; Palacios-Navarro, G. An Alternative Approach to Obtain a New Gain in Step-Size of LMS Filters Dealing with Periodic Signals. Appl. Sci. 2021, 11, 5618. https://doi.org/10.3390/app11125618