Communication

A Novel NLMS Algorithm for System Identification

1 Department of Automobile and IT Convergence, Kookmin University, Seoul 02707, Republic of Korea
2 Department of IT Convergence Engineering, Kumoh National Institute of Technology, 61 Daehak-ro (Yangho-dong), Gumi 39177, Republic of Korea
3 Department of Electronic Engineering, Kumoh National Institute of Technology, 61 Daehak-ro (Yangho-dong), Gumi 39177, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2023, 12(14), 3159; https://doi.org/10.3390/electronics12143159
Submission received: 16 June 2023 / Revised: 17 July 2023 / Accepted: 19 July 2023 / Published: 20 July 2023
(This article belongs to the Special Issue Intelligence Control and Applications of Intelligence Robotics)

Abstract
In this paper, we propose a novel normalized least mean squares (NLMS) algorithm for system identification applications. Our approach involves analyzing the mean squared deviation performance of the NLMS algorithm using a random walk model to select two optimal parameters, the step size and regularization parameters, for the rapid convergence of the colored input signals. We verified that the proposed algorithm exhibited faster convergence than existing algorithms, even in scenarios of sudden system changes.

1. Introduction

The applications of system identification techniques in robotics are wide-ranging and diverse. They encompass areas such as robot localization and mapping, sensor calibration, motion planning, and control system design [1,2,3,4]. System identification enables robots to learn and adapt to their environment, allowing them to adjust their behaviors and responses autonomously based on observed data.
Normalized least mean squares (NLMS) algorithms are widely employed in system identification applications because of their robustness, computational simplicity, and ease of implementation [5,6]. For NLMS algorithms, the convergence speed and steady-state error performance are most important. Faster convergence enables adaptive filters to adapt quickly to changes in the input signals or system conditions. Additionally, smaller steady-state errors ensure the accurate estimation or overall reconstruction of the desired signals. The performance of NLMS algorithms is directly affected by two key parameters: the step size and regularization parameters. The appropriate adjustment of these parameters is crucial for optimizing the performance of an algorithm. Several algorithms have been proposed to address this challenge, such as the variable step size (VSS) and variable regularization (VR) algorithms. These algorithms aim to enhance the convergence behaviors and steady-state misalignment [7,8,9,10,11,12,13,14]. Because many of these algorithms were developed under the assumption of a time-invariant system to simplify the algorithm design, they cannot perform optimally when the system undergoes significant changes.
To address this limitation, researchers developed the joint-optimization NLMS (JO-NLMS) algorithm using a time-varying system model [15]. This algorithm controlled the step size and regularization parameter to minimize system misalignment using a joint-optimization scheme. However, the JO-NLMS algorithm only considered white Gaussian input signals, which could lead to performance degradation when faced with colored input signals.
This paper proposes a novel NLMS algorithm for the fast estimation of unknown system coefficients with colored input signals. In the application of system identification, the primary aim is to estimate unknown system (or plant) coefficients rather than solely minimizing the errors. The mean square deviation (MSD) performance of the NLMS algorithm could provide guidelines for selecting the optimal parameters, such as the step size and regularization parameters. Therefore, we analyzed the MSD of the conventional NLMS algorithm with a fixed step size and regularization parameter. Additionally, the optimal parameters were derived by minimizing the MSD of the NLMS at each iteration.
The remainder of this paper is organized as follows. Section 2 reviews the conventional NLMS algorithm. Section 3 presents an MSD analysis of the NLMS algorithm using the random walk model. Section 4 presents the optimal parameters derived by the MSD analysis. Section 5 presents the computer simulation results to verify the convergence and misalignment performance of the proposed algorithm for white and colored inputs. Lastly, Section 6 concludes the paper.

2. Review of the Conventional NLMS Algorithm

We focused on the adaptive identification of a linear system, assumed to be a finite impulse response (FIR) filter, based on given input–output data. Figure 1 shows the structure of an adaptive filter for system identification.
In the system identification problem, the unknown system coefficients and the input vector at the discrete-time index n are given by
$$\mathbf{w}_o = [w_0, w_1, \ldots, w_{M-1}]^T, \tag{1}$$
$$\mathbf{u}(n) = [u(n), u(n-1), \ldots, u(n-M+1)]^T, \tag{2}$$
respectively, where $M$ is the filter length and $(\cdot)^T$ denotes transposition. $u(n)$ is a random input sequence, and the vector $\mathbf{u}(n)$ collects a total of $M$ input samples, from the current input sample back to the past $M-1$ input samples. The desired signal $d(n)$ is
$$d(n) = \mathbf{u}^T(n)\,\mathbf{w}_o + v(n), \tag{3}$$
where $v(n)$ is the measurement noise, which follows a zero-mean white Gaussian distribution with variance $\sigma_v^2$. The standard NLMS algorithm estimates $\mathbf{w}_o$ at index $n$, denoted $\hat{\mathbf{w}}(n)$, using the update equation
$$\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mu\,\frac{\mathbf{u}(n)\,e(n)}{\mathbf{u}^T(n)\mathbf{u}(n) + \delta}, \tag{4}$$
where $\mu$ is the step size, $\delta$ is the regularization parameter, and $e(n) = d(n) - \mathbf{u}^T(n)\hat{\mathbf{w}}(n-1)$ is the output error signal.
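As a concrete illustration, the update (4) can be sketched in a few lines of NumPy (a minimal sketch; the function and variable names are ours, not from the paper):

```python
import numpy as np

def nlms_step(w_hat, u, d, mu=1.0, delta=1e-6):
    """One NLMS update, Equation (4)."""
    e = d - u @ w_hat                             # output error e(n)
    w_hat = w_hat + mu * u * e / (u @ u + delta)  # normalized gradient step
    return w_hat, e

# Identify an unknown FIR system from noisy input-output data
rng = np.random.default_rng(0)
M = 8
w_o = rng.standard_normal(M)                 # unknown system coefficients
w_hat = np.zeros(M)
x = rng.standard_normal(5000)                # white Gaussian input sequence
for n in range(M, len(x)):
    u = x[n:n - M:-1]                        # u(n) = [u(n), ..., u(n-M+1)]
    d = u @ w_o + 0.01 * rng.standard_normal()
    w_hat, _ = nlms_step(w_hat, u, d)
```

With $\mu = 1$ the filter converges quickly, and the residual misalignment is set by the noise level; trading off these two behaviors is exactly what the variable-parameter schemes discussed above address.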

3. MSD Analysis of the NLMS Algorithm Using the Random Walk Model

We considered that the unknown system vector $\mathbf{w}_o(n)$ at index $n$ followed the model [15,16,17]
$$\mathbf{w}_o(n) = \mathbf{w}_o(n-1) + \mathbf{q}(n), \tag{5}$$
where $\mathbf{q}(n) = [q(n), q(n-1), \ldots, q(n-M+1)]^T$ is a zero-mean white Gaussian noise vector with variance $\sigma_q^2$. The covariance matrix of $\mathbf{q}(n)$ was assumed to be $E[\mathbf{q}(n)\mathbf{q}^T(n)] = \sigma_q^2\,\mathbf{I}_M$, where $E(\cdot)$ is the mathematical expectation and $\mathbf{I}_M$ is the $M \times M$ identity matrix. Substituting the random walk model, the error signal could be written as
$$\begin{aligned} e(n) &= d(n) - \mathbf{u}^T(n)\hat{\mathbf{w}}(n-1) \\ &= \mathbf{u}^T(n)\mathbf{w}_o(n) + v(n) - \mathbf{u}^T(n)\hat{\mathbf{w}}(n-1) \\ &= \mathbf{u}^T(n)\mathbf{w}_o(n-1) + \mathbf{u}^T(n)\mathbf{q}(n) + v(n) - \mathbf{u}^T(n)\hat{\mathbf{w}}(n-1) \\ &= \mathbf{u}^T(n)\tilde{\mathbf{w}}(n-1) + \mathbf{u}^T(n)\mathbf{q}(n) + v(n), \end{aligned} \tag{6}$$
where $\tilde{\mathbf{w}}(n) \triangleq \mathbf{w}_o(n) - \hat{\mathbf{w}}(n)$. Therefore, we could obtain the recursion of $\tilde{\mathbf{w}}(n)$ by applying (4) and (6) as follows:
$$\begin{aligned} \tilde{\mathbf{w}}(n) &= \tilde{\mathbf{w}}(n-1) - \mu\,\frac{\mathbf{u}(n)\,e(n)}{\mathbf{u}^T(n)\mathbf{u}(n)+\delta} + \mathbf{q}(n) \\ &= \left[\mathbf{I}_M - \frac{\mu\,\mathbf{u}(n)\mathbf{u}^T(n)}{\mathbf{u}^T(n)\mathbf{u}(n)+\delta}\right]\tilde{\mathbf{w}}(n-1) - \frac{\mu\,\mathbf{u}(n)\,v(n)}{\mathbf{u}^T(n)\mathbf{u}(n)+\delta} - \frac{\mu\,\mathbf{u}(n)\mathbf{u}^T(n)\,\mathbf{q}(n)}{\mathbf{u}^T(n)\mathbf{u}(n)+\delta} + \mathbf{q}(n). \end{aligned} \tag{7}$$
We can define the covariance matrix of $\tilde{\mathbf{w}}(n)$ through a conditional covariance matrix given the set of input signals $\mathcal{U}_n \triangleq \{\mathbf{u}(j)\,|\,0 \le j \le n\}$ as follows [18]:
$$\mathbf{P}(n) \triangleq E\big[\tilde{\mathbf{w}}(n)\tilde{\mathbf{w}}^T(n)\big] = E\Big\{E\big[\tilde{\mathbf{w}}(n)\tilde{\mathbf{w}}^T(n)\,\big|\,\mathcal{U}_n\big]\Big\}, \tag{8}$$
$$\bar{\mathbf{P}}(n) \triangleq E\big[\tilde{\mathbf{w}}(n)\tilde{\mathbf{w}}^T(n)\,\big|\,\mathcal{U}_n\big]. \tag{9}$$
Hence, the MSD is defined as
$$\mathrm{MSD}(n) \triangleq E\big[\tilde{\mathbf{w}}^T(n)\tilde{\mathbf{w}}(n)\big] = \mathrm{Tr}\big[\mathbf{P}(n)\big]. \tag{10}$$
To obtain the MSD of the NLMS algorithm at index n, we considered the following three assumptions [17,19,20,21,22,23]:
Assumption 1.
The signals $v(n)$, $\mathbf{u}(n)$, $\mathbf{q}(n)$, and $\tilde{\mathbf{w}}(n-1)$ are statistically independent.
Assumption 2.
The input signal vector $\mathbf{u}(n)$ has a zero mean, is independent and identically distributed, and has the covariance matrix
$$\mathbf{R} = E\big[\mathbf{u}(n)\mathbf{u}^T(n)\big] = \mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^T, \tag{11}$$
where $\boldsymbol{\Lambda} = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_M)$ contains the eigenvalues of the covariance matrix $\mathbf{R}$, and $\mathbf{V} = [\boldsymbol{\nu}_1\ \boldsymbol{\nu}_2\ \cdots\ \boldsymbol{\nu}_M]$ contains the corresponding orthonormal eigenvectors ($\mathbf{V}^T\mathbf{V} = \mathbf{I}_M$).
Assumption 3.
The input signal vector $\mathbf{u}(n)$ can be decomposed into three independent and identically distributed random variables $s$, $r$, and $\mathbf{V}$ such that
$$\mathbf{u}(n) = s\,r\,\mathbf{V}, \quad P(s = \pm 1) = \tfrac{1}{2}, \quad r \sim \|\mathbf{u}(n)\|, \quad P(\mathbf{V} = \boldsymbol{\nu}_j) = p_j = \frac{\lambda_j}{\mathrm{Tr}(\mathbf{R})}, \quad j = 1, \ldots, M. \tag{12}$$
Here, $r \sim \|\mathbf{u}(n)\|$ indicates that $r$ and the norm of $\mathbf{u}(n)$ have the same distribution. We note that $\sum_{j=1}^{M} p_j = 1$.
Under Assumption 1, the conditional covariance matrix of $\tilde{\mathbf{w}}(n)$ was propagated as follows:
$$\begin{aligned} \bar{\mathbf{P}}(n) ={}& \left[\mathbf{I}_M - \frac{\mu\,\mathbf{u}(n)\mathbf{u}^T(n)}{\mathbf{u}^T(n)\mathbf{u}(n)+\delta}\right]\bar{\mathbf{P}}(n-1)\left[\mathbf{I}_M - \frac{\mu\,\mathbf{u}(n)\mathbf{u}^T(n)}{\mathbf{u}^T(n)\mathbf{u}(n)+\delta}\right]^T \\ &+ \frac{\mu^2\sigma_v^2\,\mathbf{u}(n)\mathbf{u}^T(n)}{\big[\mathbf{u}^T(n)\mathbf{u}(n)+\delta\big]^2} + \frac{\mu^2\sigma_q^2\,\mathbf{u}(n)\mathbf{u}^T(n)\mathbf{u}(n)\mathbf{u}^T(n)}{\big[\mathbf{u}^T(n)\mathbf{u}(n)+\delta\big]^2} - \frac{2\mu\sigma_q^2\,\mathbf{u}(n)\mathbf{u}^T(n)}{\mathbf{u}^T(n)\mathbf{u}(n)+\delta} + \sigma_q^2\,\mathbf{I}_M, \end{aligned} \tag{13}$$
where $E[v^2(n)] = \sigma_v^2$ and $E[\mathbf{q}(n)\mathbf{q}^T(n)] = \sigma_q^2\mathbf{I}_M$ have been applied. Therefore, applying Assumptions 2 and 3, the propagation of $\mathbf{P}(n)$ could be obtained as
$$\begin{aligned} \mathbf{P}(n) ={}& E\!\left\{\left[\mathbf{I}_M - \frac{\mu r^2\,\mathbf{V}\mathbf{V}^T}{r^2+\delta}\right]\mathbf{P}(n-1)\left[\mathbf{I}_M - \frac{\mu r^2\,\mathbf{V}\mathbf{V}^T}{r^2+\delta}\right]^T\right\} \\ &+ E\!\left[\frac{\mu^2\sigma_v^2\,r^2\,\mathbf{V}\mathbf{V}^T}{(r^2+\delta)^2}\right] + E\!\left[\frac{\mu^2\sigma_q^2\,r^4\,\mathbf{V}\mathbf{V}^T\mathbf{V}\mathbf{V}^T}{(r^2+\delta)^2}\right] - E\!\left[\frac{2\mu\sigma_q^2\,r^2\,\mathbf{V}\mathbf{V}^T}{r^2+\delta}\right] + \sigma_q^2\,\mathbf{I}_M. \end{aligned} \tag{14}$$
We defined $\pi_k(n) \triangleq \boldsymbol{\nu}_k^T\,\mathbf{P}(n)\,\boldsymbol{\nu}_k$ by multiplying both sides of $\mathbf{P}(n)$ by $\boldsymbol{\nu}_k^T$ and $\boldsymbol{\nu}_k$, where $k = 1, 2, \ldots, M$. From (14), we obtained the recursion of $\pi_k(n)$ as follows:
$$\begin{aligned} \pi_k(n) ={}& \sum_{j=1}^{M} p_j\,E\!\left\{\boldsymbol{\nu}_k^T\!\left[\mathbf{I}_M - \frac{\mu r^2\,\boldsymbol{\nu}_j\boldsymbol{\nu}_j^T}{r^2+\delta}\right]\mathbf{P}(n-1)\left[\mathbf{I}_M - \frac{\mu r^2\,\boldsymbol{\nu}_j\boldsymbol{\nu}_j^T}{r^2+\delta}\right]^T\!\boldsymbol{\nu}_k\right\} \\ &+ \sum_{j=1}^{M} p_j\,E\!\left[\frac{\mu^2\sigma_v^2 r^2\,\boldsymbol{\nu}_k^T\boldsymbol{\nu}_j\boldsymbol{\nu}_j^T\boldsymbol{\nu}_k}{(r^2+\delta)^2}\right] + \sum_{j=1}^{M} p_j\,E\!\left[\frac{\mu^2\sigma_q^2 r^4\,\boldsymbol{\nu}_k^T\boldsymbol{\nu}_j\boldsymbol{\nu}_j^T\boldsymbol{\nu}_j\boldsymbol{\nu}_j^T\boldsymbol{\nu}_k}{(r^2+\delta)^2}\right] \\ &- \sum_{j=1}^{M} p_j\,E\!\left[\frac{2\mu\sigma_q^2 r^2\,\boldsymbol{\nu}_k^T\boldsymbol{\nu}_j\boldsymbol{\nu}_j^T\boldsymbol{\nu}_k}{r^2+\delta}\right] + \sum_{j=1}^{M} p_j\,\sigma_q^2 \\ ={}& \Big[1 + p_k\,\alpha r^2\big(\alpha r^2 - 2\big)\Big]\Big[\pi_k(n-1) + \sigma_q^2\Big] + p_k\,\sigma_v^2\,\alpha^2 r^2, \end{aligned} \tag{15}$$
where $\alpha \triangleq \mu/(r^2+\delta)$. To calculate the MSD from $\pi_k(n)$, we could evaluate the following equation:
$$\mathrm{Tr}\big[\mathbf{P}(n)\big] = \mathrm{Tr}\big[\mathbf{V}^T\mathbf{P}(n)\mathbf{V}\big] = \sum_{k=1}^{M}\pi_k(n). \tag{16}$$
If Equation (15) was stacked along the diagonal, we obtained
$$\boldsymbol{\Pi}(n) = \left[\mathbf{I}_M + \frac{\alpha r^2(\alpha r^2 - 2)}{\mathrm{Tr}(\mathbf{R})}\,\boldsymbol{\Lambda}\right]\Big[\boldsymbol{\Pi}(n-1) + \sigma_q^2\,\mathbf{I}_M\Big] + \frac{\sigma_v^2\,\alpha^2 r^2}{\mathrm{Tr}(\mathbf{R})}\,\boldsymbol{\Lambda}, \tag{17}$$
where
$$\boldsymbol{\Pi}(n) = \mathrm{diag}\big(\pi_1(n), \ldots, \pi_M(n)\big). \tag{18}$$
Finally, we obtained
$$\begin{aligned} \mathrm{Tr}\big[\mathbf{P}(n)\big] ={}& \mathrm{Tr}\big[\mathbf{P}(n-1)\big] + \frac{\alpha r^2(\alpha r^2 - 2)}{\mathrm{Tr}(\mathbf{R})}\,\mathrm{Tr}\big[\boldsymbol{\Lambda}\boldsymbol{\Pi}(n-1)\big] + \sigma_q^2\,\frac{\alpha r^2(\alpha r^2 - 2)}{\mathrm{Tr}(\mathbf{R})}\,\mathrm{Tr}(\boldsymbol{\Lambda}) + M\sigma_q^2 + \frac{\sigma_v^2\,\alpha^2 r^2}{\mathrm{Tr}(\mathbf{R})}\,\mathrm{Tr}(\boldsymbol{\Lambda}) \\ ={}& \left[1 + \frac{\alpha r^2(\alpha r^2 - 2)}{\bar{M}}\right]\mathrm{Tr}\big[\mathbf{P}(n-1)\big] + \left[1 + \frac{\alpha r^2(\alpha r^2 - 2)}{M}\right] M\sigma_q^2 + \sigma_v^2\,\alpha^2 r^2, \end{aligned} \tag{19}$$
where $\bar{M} \ge M$ is the virtual filter length that accounts for colored input signals [11,12]. When the input signal is white Gaussian, the matrix $\boldsymbol{\Lambda}/\mathrm{Tr}(\mathbf{R})$ is approximately equal to $\frac{1}{M}\mathbf{I}_M$; for colored input signals, however, $\boldsymbol{\Lambda}/\mathrm{Tr}(\mathbf{R}) \neq \frac{1}{M}\mathbf{I}_M$.

4. Proposed NLMS Algorithm

The optimal parameters of the NLMS algorithm that most rapidly minimized the MSD could be obtained using
$$\frac{\partial\,\mathrm{MSD}(n)}{\partial\mu(n)} = \frac{\partial\,\mathrm{MSD}(n)}{\partial\alpha(n)} \times \frac{\partial\alpha(n)}{\partial\mu(n)} = 0, \tag{20}$$
$$\frac{\partial\,\mathrm{MSD}(n)}{\partial\delta(n)} = \frac{\partial\,\mathrm{MSD}(n)}{\partial\alpha(n)} \times \frac{\partial\alpha(n)}{\partial\delta(n)} = 0, \tag{21}$$
which is known as the joint-optimization approach [15,17,24]. Both Equations (20) and (21) produced the same result, as follows:
$$\alpha^*(n) \triangleq \frac{\mu^*(n)}{r^2 + \delta^*(n)} = \frac{\mathrm{MSD}(n-1) + \bar{M}\sigma_q^2}{r^2\big[\mathrm{MSD}(n-1) + \bar{M}\sigma_q^2\big] + \bar{M}\sigma_v^2}, \tag{22}$$
where $\mu^*(n)$ and $\delta^*(n)$ represent the optimal values of the step size and regularization parameter, respectively. From these two optimal parameters, the novel NLMS algorithm was derived as follows:
$$\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \alpha^*(n)\,\mathbf{u}(n)\,e(n), \tag{23}$$
where
$$\alpha^*(n) \triangleq \frac{g(n)}{r^2\,g(n) + \bar{M}\sigma_v^2}, \tag{24}$$
$$g(n) \triangleq \mathrm{MSD}(n-1) + \bar{M}\sigma_q^2, \tag{25}$$
$$\mathrm{MSD}(n) = \left[1 - \frac{\alpha^*(n)\,r^2}{\bar{M}}\right] g(n) + \sigma_q^2\big(M - \bar{M}\big). \tag{26}$$

4.1. Steady-State Value of the Proposed NLMS Algorithm

Substituting (24) and (25) into (26), the recursion of the MSD was obtained as
$$\mathrm{MSD}(n) = \left[1 - \frac{r^2\,\mathrm{MSD}(n-1) + r^2\bar{M}\sigma_q^2}{r^2\bar{M}\,\mathrm{MSD}(n-1) + r^2\bar{M}^2\sigma_q^2 + \bar{M}^2\sigma_v^2}\right]\Big[\mathrm{MSD}(n-1) + \bar{M}\sigma_q^2\Big] + \sigma_q^2\big(M - \bar{M}\big). \tag{27}$$
When the recursion of the MSD was stable, $\lim_{n\to\infty}\mathrm{MSD}(n) = \lim_{n\to\infty}\mathrm{MSD}(n-1) = \mathrm{MSD}(\infty)$. From (27), therefore, we obtained the following quadratic equation in $\mathrm{MSD}(\infty)$:
$$r^2\Big[\mathrm{MSD}(\infty) + \bar{M}\sigma_q^2\Big]^2 - r^2 M\bar{M}\sigma_q^2\Big[\mathrm{MSD}(\infty) + \bar{M}\sigma_q^2\Big] - M\bar{M}^2\sigma_q^2\sigma_v^2 = 0. \tag{28}$$
Through the solution of the quadratic equation, the steady-state MSD of the proposed NLMS algorithm could be obtained as
$$\mathrm{MSD}(\infty) = \frac{\bar{M}\sigma_q^2}{2}\left[(M-2) + M\sqrt{1 + \frac{4\sigma_v^2}{r^2 M\sigma_q^2}}\right]. \tag{29}$$
As can be seen from Equation (29), both $\sigma_q^2$ and $\sigma_v^2$ have a significant impact on the steady state. These findings are consistent with the results obtained for the JO-NLMS algorithm, indicating a strong similarity between the two algorithms.
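As a quick sanity check, the closed form (29) can be substituted back into the quadratic (28); the parameter values below are hypothetical, chosen only for illustration:

```python
import math

# Hypothetical parameters for illustration
M = 64                 # filter length
M_bar = M              # virtual filter length (white input, kappa = 1)
r2 = float(M)          # r^2 = M * sigma_u^2 with a unit-variance input
sigma_q2 = 1e-6        # random-walk variance
sigma_v2 = 1e-2        # measurement-noise variance

# Steady-state MSD, Equation (29)
msd_inf = (M_bar * sigma_q2 / 2) * (
    (M - 2) + M * math.sqrt(1 + 4 * sigma_v2 / (r2 * M * sigma_q2))
)

# Substitute g = MSD(inf) + M_bar * sigma_q2 into the quadratic (28)
g = msd_inf + M_bar * sigma_q2
residual = r2 * g**2 - r2 * M * M_bar * sigma_q2 * g - M * M_bar**2 * sigma_q2 * sigma_v2
# The residual is zero up to floating-point round-off
```

Larger $\sigma_v^2$ or $\sigma_q^2$ raises `msd_inf`, which is the dependence on the noise variances discussed above.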

4.2. Practical Consideration

The proposed NLMS algorithm requires three primary parameters: $r^2$, $\sigma_v^2$, and $\sigma_q^2$. These must be either predetermined or estimated. The first parameter, $r^2$, could be easily estimated using the variance of the input signal when $M \gg 1$, as follows:
$$r^2 \approx M\hat{\sigma}_u^2(n) = \mathbf{u}^T(n)\mathbf{u}(n). \tag{30}$$
The second parameter, the measurement-noise variance $\sigma_v^2$, is crucial because it significantly impacts the performance of the proposed algorithm. This parameter can be readily measured or estimated. For example, in the case of speech signals, the measurement-noise variance can be estimated during an initial silence period, and real-time methods for estimating the noise variance have also been proposed [7,9,25,26]. While various methods exist for estimating this parameter, a discussion of specific estimation algorithms exceeds the scope of our study; we therefore assumed that the measurement-noise variance was available a priori for the purpose of our algorithm analysis.
The third parameter, $\sigma_q^2$, could be estimated as follows [15,24]:
$$\hat{\sigma}_q^2(n) \triangleq \frac{\|\hat{\mathbf{w}}(n) - \hat{\mathbf{w}}(n-1)\|^2}{M}. \tag{31}$$
The proposed NLMS algorithm is summarized in Algorithm 1.
Algorithm 1 Proposed NLMS algorithm summary.
Initialization:
    $\hat{\mathbf{w}}(0) = \mathbf{0}$
    $\mathrm{MSD}(0) = 1$
    $\hat{\sigma}_q^2(0) = 0$
Parameters:
    $\sigma_v^2$, known or estimated
    $\bar{M} = \kappa \times M$, $\kappa = 1$ (for white inputs) or $\kappa > 1$ (for colored inputs)
For each index $n$:
    $e(n) = d(n) - \mathbf{u}^T(n)\hat{\mathbf{w}}(n-1)$
    $g(n) = \mathrm{MSD}(n-1) + \bar{M}\hat{\sigma}_q^2(n-1)$
    $r^2 = M\hat{\sigma}_u^2(n) = \mathbf{u}^T(n)\mathbf{u}(n)$
    $\alpha^*(n) = g(n)\,/\,\big[r^2 g(n) + \bar{M}\sigma_v^2\big]$
    $\mathbf{x}(n) = \alpha^*(n)\,\mathbf{u}(n)\,e(n)$
    $\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{x}(n)$
    $\mathrm{MSD}(n) = \big[1 - \alpha^*(n)\,M\hat{\sigma}_u^2(n)/\bar{M}\big]\,g(n) + \hat{\sigma}_q^2(n-1)\big(M - \bar{M}\big)$
    $\hat{\sigma}_q^2(n) = \mathbf{x}^T(n)\mathbf{x}(n)/M$
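A direct NumPy transcription of Algorithm 1 might look as follows (a sketch under our naming; as in the paper, $\sigma_v^2$ is assumed known):

```python
import numpy as np

def proposed_nlms(u_sig, d_sig, M, sigma_v2, kappa=1.0):
    """Algorithm 1: NLMS with jointly optimized step size and regularization.
    kappa sets the virtual filter length M_bar = kappa * M."""
    M_bar = kappa * M
    w_hat = np.zeros(M)
    msd = 1.0                                      # MSD(0) = 1
    sq2 = 0.0                                      # sigma_q^2 estimate (0 at start)
    for n in range(M, len(u_sig)):
        u = u_sig[n:n - M:-1]                      # u(n) = [u(n), ..., u(n-M+1)]
        e = d_sig[n] - u @ w_hat                   # error e(n)
        g = msd + M_bar * sq2                      # g(n), uses previous estimate
        r2 = u @ u                                 # r^2 = M * sigma_u_hat^2(n)
        alpha = g / (r2 * g + M_bar * sigma_v2)    # optimal alpha*(n)
        x = alpha * u * e
        w_hat = w_hat + x                          # coefficient update
        msd = (1.0 - alpha * r2 / M_bar) * g + sq2 * (M - M_bar)  # MSD recursion
        sq2 = (x @ x) / M                          # sigma_q^2 estimate update
    return w_hat

# Identify a random unknown system with a white input at roughly 30 dB SNR
rng = np.random.default_rng(1)
M = 32
w_o = rng.standard_normal(M)
u_sig = rng.standard_normal(20000)
y = np.convolve(u_sig, w_o)[:len(u_sig)]           # noiseless system output
sigma_v2 = 1e-3 * np.mean(y**2)
d_sig = y + np.sqrt(sigma_v2) * rng.standard_normal(len(u_sig))
w_hat = proposed_nlms(u_sig, d_sig, M, sigma_v2, kappa=1.0)
```

Note that `sq2` used in `g` and the MSD recursion is the estimate from the previous iteration, matching $\hat{\sigma}_q^2(n-1)$ in Algorithm 1.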

5. Simulation Results

We conducted computer simulations of a system identification problem to assess the MSD behaviors of various algorithms. In our experiments, we employed two types of unknown system coefficients w o . To assess the general system identification performance of the white and colored inputs, we used randomly generated unknown system coefficients with unit variance. Additionally, we used the acoustic impulse response of a room to compare the performance of the acoustic echo cancellation for speech inputs. The adaptive filter and unknown system were assumed to have the same filter length, which was set to M = 64 or 512 during the simulations. To generate the colored input signals, we filtered white Gaussian noise using two transfer functions:
$$G_1(z) = \frac{1}{1 - 0.95z^{-1}}, \tag{32}$$
$$G_2(z) = \frac{1}{1 - 1.6z^{-1} + 0.81z^{-2}}, \tag{33}$$
referred to as AR 1 and AR 2, respectively. The signal-to-noise ratio (SNR) was set to 20 or 30 dB, and measurement noise was added to the output signal $y(n)$. The SNR was defined as
$$\mathrm{SNR} \triangleq 10\log_{10}\frac{E\Big[\big(\mathbf{u}^T(n)\mathbf{w}_o\big)^2\Big]}{E\big[v^2(n)\big]}. \tag{34}$$
Furthermore, we assumed knowledge of the noise variance for all algorithms. The performance metric used was the normalized mean squared deviation (NMSD), which is defined as
$$\mathrm{NMSD}(n) \triangleq 10\log_{10}\frac{E\big[\tilde{\mathbf{w}}^T(n)\tilde{\mathbf{w}}(n)\big]}{\mathbf{w}_o^T(n)\mathbf{w}_o(n)}. \tag{35}$$
The simulation results were averaged over 30 independent trials.
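The setup above can be reproduced with a short script; the helpers `ar_filter` and `nmsd_db` are our illustrative names, and the all-pole recursions implement $G_1(z)$ and $G_2(z)$:

```python
import numpy as np

def ar_filter(white, a):
    """All-pole filter x(n) = w(n) + sum_k a[k-1] * x(n-k),
    i.e., G(z) = 1 / (1 - a1 z^-1 - a2 z^-2 - ...)."""
    out = np.zeros_like(white)
    for n in range(len(white)):
        acc = white[n]
        for k, ak in enumerate(a, start=1):
            if n >= k:
                acc += ak * out[n - k]
        out[n] = acc
    return out

def nmsd_db(w_o, w_hat):
    """Normalized mean squared deviation in dB."""
    err = w_o - w_hat
    return 10 * np.log10((err @ err) / (w_o @ w_o))

rng = np.random.default_rng(2)
N = 50000
u_ar1 = ar_filter(rng.standard_normal(N), [0.95])        # G1(z): AR 1 input
u_ar2 = ar_filter(rng.standard_normal(N), [1.6, -0.81])  # G2(z): AR 2 input

# Add measurement noise to the system output at a target SNR of 20 dB
M = 64
w_o = rng.standard_normal(M)
y = np.convolve(u_ar1, w_o)[:N]                          # noiseless output u^T(n) w_o
sigma_v2 = np.mean(y**2) / 10**(20.0 / 10)               # from the SNR definition
d = y + np.sqrt(sigma_v2) * rng.standard_normal(N)
```

The AR 2 poles sit at radius $\sqrt{0.81} = 0.9$, so the filter is stable but produces a strongly colored spectrum, which is what stresses the NLMS convergence speed.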

5.1. Performance Comparison with White and Colored Input Signals

Figure 2 shows the NMSD learning curves of the standard NLMS algorithm with various step sizes, the joint-optimization step size and regularization parameter normalized sub-band adaptive filter (JOSR-NSAF) algorithm [24], the JO-NLMS algorithm, and the proposed algorithm for white input signals, which were generated as a zero-mean Gaussian random sequence.
The standard NLMS algorithm was used as a baseline for comparison with the proposed algorithm. The step size of the NLMS algorithm was set to either 1 or 0.01: in this simulation environment, the NLMS algorithm converged fastest with a step size of 1, whereas its steady-state error was similar to that of the other algorithms with a step size of 0.01. The virtual filter length $\bar{M} = \kappa M$ was set with $\kappa = 1$ for the proposed algorithm.
The experimental results indicated that the NMSD performances of the proposed NLMS algorithm, JOSR-NSAF, and JO-NLMS were very similar. The proposed NLMS algorithm for white inputs was slightly slower than the standard NLMS algorithm with a step size of 0.01. However, it was confirmed that it had a sufficiently small error within a short period of time.
Although JOSR-NSAF, JO-NLMS, and the proposed algorithm presented similar performance, the computational complexity of each algorithm was different. In particular, JOSR-NSAF required additional operations for N filter banks with filter length L to use N sub-bands in order to reduce the dynamic range of the input signal. Therefore, it required more computation than the NLMS-based algorithms. Table 1 shows the computational complexity of JO-NLMS, JOSR-NSAF, and the proposed algorithm per iteration.
Figure 3 shows the NMSD learning curves of the standard NLMS algorithm with various step sizes ($\mu = 1$ and $\mu = 0.01$), JOSR-NSAF, JO-NLMS, and the proposed NLMS for the colored input signals generated by AR 1. The virtual filter length $\bar{M} = \kappa M$ was set with $\kappa = 2$ for the proposed algorithm. Compared with the JO-NLMS algorithm, the proposed algorithm exhibited a slightly higher steady-state error but faster convergence in terms of MSD performance for AR 1 input signals.
Figure 4 shows the NMSD learning curves of the standard NLMS algorithm with various step sizes ($\mu = 1$ and $\mu = 0.04$), JOSR-NSAF, JO-NLMS, and the proposed NLMS for the colored input signals generated by AR 2. The virtual filter length $\bar{M} = \kappa M$ was set with $\kappa = 20$ for the proposed algorithm. Similar to the results for AR 1 input signals, the proposed algorithm showed fast convergence in terms of MSD performance for AR 2 input signals.
Figure 5 shows the optimal parameter values selected by JO-NLMS and the proposed NLMS algorithm for AR 2 inputs. The parameter value of JO-NLMS decreased rapidly, even though the MSD value was not yet sufficiently small. In contrast, the optimal parameter value of the proposed algorithm did not decrease as rapidly, owing to the value of $\bar{M}$, and the algorithm could therefore maintain a fast convergence speed.

5.2. Performance Comparison with Speech Input Signals

We conducted experiments using real speech signals and acoustic echo paths to verify the system identification performance of the algorithms for speech signals. The acoustic echo paths and speech signals used are shown in Figure 6. The acoustic echo path was of length M = 512 .
Figure 7 shows the NMSD learning curves of the standard NLMS algorithm with various step sizes ($\mu = 1$ and $\mu = 0.01$), JOSR-NSAF, JO-NLMS, and the proposed NLMS for speech signals. The virtual filter length $\bar{M} = \kappa M$ was set with $\kappa = 20$ for the proposed algorithm. The experimental results confirmed that the proposed algorithm outperformed the JO-NLMS algorithm in terms of convergence speed and steady-state error for speech inputs and acoustic echo paths.

6. Conclusions

In this paper, we proposed a new NLMS algorithm for system identification that exhibited a high convergence speed and low steady-state error, even for colored inputs. To achieve this, we utilized a method for interpreting the MSD performance of the NLMS algorithm under a random walk model to select the optimal step size and regularization parameters. Specifically, we employed a virtual filter length to achieve fast convergence for colored inputs. Additionally, we experimentally demonstrated that our algorithm performed well without requiring the additional reset algorithms commonly used in existing VSS and VR algorithms to adapt to significant system changes. The simulations revealed that the proposed algorithm achieved fast convergence even in scenarios of sudden system changes.

Author Contributions

Conceptualization and formal analysis, J.Y.; investigation and validation, B.Y.P.; methodology and software, W.I.L.; software and writing, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3071734). This research was also supported by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center Support Program (IITP-2023-2020-0-01612) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhao, G.; Zhang, P.; Ma, G.; Xiao, W. System identification of the nonlinear residual errors of an industrial robot using massive measurements. Robot. Comput.-Integr. Manuf. 2019, 59, 104–114.
  2. Akanyeti, O.; Nehmzow, U.; Billings, S.A. Robot training using system identification. Robot. Auton. Syst. 2008, 56, 1027–1041.
  3. Bähnemann, R.; Burri, M.; Galceran, E.; Siegwart, R.; Nieto, J. Sampling-based motion planning for active multirotor system identification. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 3931–3938.
  4. Worley, R.; Yu, Y.; Anderson, S. Acoustic echo-localization for pipe inspection robots. In Proceedings of the 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Karlsruhe, Germany, 14–16 September 2020; pp. 160–165.
  5. Haykin, S. Adaptive Filter Theory, 4th ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002.
  6. Sayed, A.H. Fundamentals of Adaptive Filtering; Wiley: New York, NY, USA, 2003.
  7. Paleologu, C.; Ciochina, S.; Benesty, J. Variable step-size NLMS algorithm for under-modeling acoustic echo cancellation. IEEE Signal Process. Lett. 2008, 15, 5–8.
  8. Huang, H.C.; Lee, J. A new variable step-size NLMS algorithm and its performance analysis. IEEE Trans. Signal Process. 2011, 60, 2055–2060.
  9. Benesty, J.; Rey, H.; Vega, L.R.; Tressens, S. A nonparametric VSS NLMS algorithm. IEEE Signal Process. Lett. 2006, 13, 581–584.
  10. Song, I.; Park, P. A normalized least-mean-square algorithm based on variable-step-size recursion with innovative input data. IEEE Signal Process. Lett. 2012, 19, 817–820.
  11. Lee, M.; Park, T.; Park, P. Variable step-size ℓ0-norm constraint NLMS algorithms based on novel mean square deviation analyses. IEEE Trans. Signal Process. 2022, 70, 5926–5939.
  12. Park, P.; Chang, M.; Kong, N. Scheduled-stepsize NLMS algorithm. IEEE Signal Process. Lett. 2009, 16, 1055–1058.
  13. Rey, H.; Vega, L.R.; Tressens, S.; Benesty, J. Variable explicit regularization in affine projection algorithm: Robustness issues and optimal choice. IEEE Trans. Signal Process. 2007, 55, 2096–2109.
  14. Choi, Y.S.; Shin, H.C.; Song, W.J. Robust regularization for normalized LMS algorithms. IEEE Trans. Circuits Syst. II Express Briefs 2006, 53, 627–631.
  15. Ciochină, S.; Paleologu, C.; Benesty, J. An optimized NLMS algorithm for system identification. Signal Process. 2016, 118, 115–121.
  16. Lee, C.H.; Park, P. Scheduled-step-size affine projection algorithm. IEEE Trans. Circuits Syst. I Regul. Pap. 2012, 59, 2034–2043.
  17. Shin, J.; Park, B.Y.; Lee, W.I.; Yoo, J.; Cho, J. A novel normalized subband adaptive filter algorithm based on the joint-optimization scheme. IEEE Access 2022, 10, 9868–9876.
  18. Park, P.; Lee, C.H.; Ko, J.W. Mean-square deviation analysis of affine projection algorithm. IEEE Trans. Signal Process. 2011, 59, 5789–5799.
  19. Shin, J.; Yoo, J.; Park, P. Adaptive regularisation for normalised subband adaptive filter: Mean-square performance analysis approach. IET Signal Process. 2018, 12, 1146–1153.
  20. Yin, W.; Mehr, A.S. Stochastic analysis of the normalized subband adaptive filter algorithm. IEEE Trans. Circuits Syst. I Regul. Pap. 2010, 58, 1020–1033.
  21. Slock, D.T. On the convergence behavior of the LMS and the normalized LMS algorithms. IEEE Trans. Signal Process. 1993, 41, 2811–2825.
  22. Sankaran, S.G.; Beex, A.L. Convergence behavior of affine projection algorithms. IEEE Trans. Signal Process. 2000, 48, 1086–1096.
  23. Zhi, Y.; Yang, Y.; Zheng, X.; Zhang, J.; Wang, Z. Statistical tracking behavior of affine projection algorithm for unity step size. Appl. Math. Comput. 2016, 283, 22–28.
  24. Yu, Y.; Zhao, H. A joint-optimization NSAF algorithm based on the first-order Markov model. Signal Image Video Process. 2017, 11, 509–516.
  25. Ni, J.; Li, F. A variable step-size matrix normalized subband adaptive filter. IEEE Trans. Audio Speech Lang. Process. 2009, 18, 1290–1299.
  26. Iqbal, M.A.; Grant, S.L. Novel variable step size NLMS algorithms for echo cancellation. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 241–244.
Figure 1. Structure of an adaptive filter for system identification.
Figure 2. NMSD learning curves of the standard NLMS algorithm with various step sizes, JOSR-NSAF, JO-NLMS, and the proposed NLMS algorithm for white Gaussian inputs. The unknown system coefficients changed at iteration $1.25 \times 10^6$. (a) SNR = 20 dB, (b) SNR = 30 dB.
Figure 3. Simulation learning curves of the standard NLMS algorithm with various step sizes, JOSR-NSAF, JO-NLMS, and the proposed NLMS for AR 1 inputs. Unknown system coefficients changed at iteration $1.25 \times 10^6$. (a) SNR = 20 dB, (b) SNR = 30 dB.
Figure 4. Simulation learning curves of the standard NLMS algorithm with various step sizes, JO-NLMS, and the proposed NLMS algorithm for AR 2 inputs. Unknown system coefficients changed at iteration $1.25 \times 10^6$. (a) SNR = 20 dB, (b) SNR = 30 dB.
Figure 5. Selected optimal parameter $\mu^*(n)/\big(r^2+\delta^*(n)\big)$ of JO-NLMS and the proposed NLMS for AR 2 inputs. (a) SNR = 20 dB, (b) SNR = 30 dB.
Figure 6. (a) Acoustic echo path, (b) speech input.
Figure 7. Simulation learning curves of the standard NLMS algorithm with various step sizes, JOSR-NSAF, JO-NLMS, and the proposed NLMS for speech inputs. Unknown system coefficients changed at iteration $6 \times 10^5$. (a) SNR = 20 dB, (b) SNR = 30 dB.
Table 1. The comparison of computational complexity.

Algorithm                   Multiplications                  Divisions
JO-NLMS                     4M + 7                           3
JOSR-NSAF                   3M + (M+1)/N + (N+1)L + 5        2
Proposed NLMS algorithm     4M + 7                           3
Yoo, J.; Park, B.Y.; Lee, W.I.; Shin, J. A Novel NLMS Algorithm for System Identification. Electronics 2023, 12, 3159. https://doi.org/10.3390/electronics12143159