Article

Wiener Filter Using the Conjugate Gradient Method and a Third-Order Tensor Decomposition

by Jacob Benesty 1, Constantin Paleologu 2,*, Cristian-Lucian Stanciu 2, Ruxandra-Liana Costea 3, Laura-Maria Dogariu 2 and Silviu Ciochină 2

1 INRS-EMT, University of Quebec, Montreal, QC H5A 1K6, Canada
2 Department of Telecommunications, National University of Science and Technology Politehnica Bucharest, 060042 Bucharest, Romania
3 Department of Electrical Engineering, National University of Science and Technology Politehnica Bucharest, 060042 Bucharest, Romania
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(6), 2430; https://doi.org/10.3390/app14062430
Submission received: 27 November 2023 / Revised: 12 March 2024 / Accepted: 12 March 2024 / Published: 13 March 2024

Abstract: In linear system identification problems, the Wiener filter represents a popular tool and stands as an important benchmark. Nevertheless, it faces significant challenges when identifying long-length impulse responses. In order to address the related shortcomings, the solution presented in this paper is based on a third-order tensor decomposition technique, while the resulting sets of Wiener–Hopf equations are solved with the conjugate gradient (CG) method. Due to the decomposition-based approach, the number of coefficients (i.e., the parameter space of the filter) is greatly reduced, which results in operating with smaller data structures within the algorithm. As a result, improved robustness and accuracy can be achieved, especially in harsh scenarios (e.g., limited/incomplete sets of data and/or noisy conditions). In addition, the CG-based solution avoids matrix inversion operations, together with the related numerical and complexity issues. The simulation results, obtained in a network echo cancellation scenario, support the performance gain. In this context, the proposed iterative Wiener filter outperforms the conventional benchmark and also some previously developed counterparts that use matrix inversion or second-order tensor decompositions.

1. Introduction

Linear system identification problems arise in various applications [1,2], including echo cancellation, active noise control, interference reduction, and channel modeling, among others. A benchmark technique for addressing such problems is the well-known Wiener filter, which basically relies on solving a linear system (namely the Wiener–Hopf equations) using a set of statistics. The Wiener–Hopf equations involve estimates of the covariance matrix of the input signal and the cross-correlation vector between the input and reference sequences. The problem is formulated following an optimization criterion that minimizes the mean-squared error (MSE), where the error is defined as the difference between the reference sequence and the output signal. The resulting optimal filter also represents an important basis for the development of other related tools for system identification problems, such as adaptive filtering algorithms [3,4].
There are some inherent limitations associated with the conventional Wiener filter solution, which is obtained by directly solving (using matrix inversion) the Wiener–Hopf equations. First, the accuracy of the solution is highly influenced by the accuracy of the statistics’ estimates. On the other hand, obtaining a reliable set of these estimates requires a large amount of data, i.e., much larger than the filter length. This could represent a significant shortcoming when dealing with a limited (or incomplete) amount of data and/or a long-length filter. Second, the external noise (that is part of the reference signal) biases the Wiener filter solution, which becomes less accurate when the signal-to-noise ratio (SNR) decreases. This could be the case in noisy environments, where different types of perturbations are likely to emerge. Third, the conventional solution involves the covariance matrix inversion, which is a very challenging operation in terms of both computational complexity and numerical accuracy [5,6]. The difficulty could increase significantly when operating with long-length filters, which further entail large dimension matrices.
Most of the previously discussed limitations are connected to the length of the filter, which could be very large in many scenarios. For example, in applications like echo cancellation and noise reduction [7], the acoustic impulse responses to be identified have hundreds or thousands of coefficients when using the common sampling rates of 8 or 16 kHz. Therefore, dealing with such long-length filters could lead to significant limitations in terms of both the accuracy and the complexity of the solution. In order to reformulate such high-dimension system identification problems (with a large parameter space) more efficiently, a recently developed decomposition-based technique has been employed [8]. The main idea behind this technique is to exploit the low-rank feature of the system impulse response, in conjunction with its nearest Kronecker product (NKP) decomposition. As a result, a system identification problem featuring a large parameter space is reformulated as a combination of two shorter filters, with a significantly reduced number of coefficients. This further implies operating with smaller matrices/vectors and, consequently, leads to improved robustness in terms of the accuracy of the final solution, even in the challenging cases mentioned above (e.g., a limited amount of data and/or low SNRs). Due to these important gains, the NKP-based approach has been used in a wide range of applications, including echo cancellation, adaptive beamforming, linear prediction, speech dereverberation, and microphone arrays, e.g., see [9,10,11,12,13,14,15,16,17] and the references therein.
Recently, the NKP technique has been applied in conjunction with a third-order tensor (TOT) decomposition of the impulse response [18], leading to a higher efficiency in terms of reducing the dimensionality of the system identification problem. This was not a straightforward extension of the low-rank approach presented in [8], since handling the rank of a tensor is a sensitive issue that usually involves different approximation techniques [19,20,21,22,23,24,25,26,27,28]. On the other hand, the solution proposed in [18] avoids such an approximation, by controlling and limiting the tensor rank to very small values. However, the resulting Wiener filter based on TOT decomposition solves the involved sets of Wiener–Hopf equations using the conventional approach, which relies on matrix inversion. Alternatively, different iterative techniques could be used to avoid such an operation [29,30,31], like the conjugate gradient (CG) method [32]. In [33], the CG algorithm has been applied in conjunction with the NKP-based technique from [8], showing improved performance. However, applying the CG method together with the TOT decomposition is a more challenging task, due to the particular connection between the three (shorter) component filters and the need for auxiliary variables within the algorithm.
Motivated by these aspects, in the current paper, we design an improved iterative version of the Wiener filter. The proposed algorithm involves the TOT-based decomposition, together with the CG method to solve three sets of Wiener–Hopf equations. As a result, it outperforms the counterpart version from [18], which uses the direct matrix inversion for solving the Wiener–Hopf equations, and also the CG-based solution from [33], which exploits the second-order NKP decomposition. Following this introduction, in Section 2 we provide some background on the conventional Wiener filter, the CG method (to avoid matrix inversion), and the TOT-based decomposition. Next, in Section 3, the proposed algorithm is developed. Simulation results provided in Section 4 support its performance and advantages compared to the existing solutions. The paper is summarized in Section 5, outlining the main conclusions and several perspectives for future works.

2. Conventional Wiener Filter, Conjugate Gradient Method, and Impulse Response Decomposition Based on a Third-Order Tensor

In this section, the background required for the upcoming developments is provided. First, we present the conventional Wiener filter for solving linear system identification problems. Next, the CG method is introduced as an efficient (iterative) alternative that avoids the matrix inversion required by the direct solution of the Wiener–Hopf equations. Finally, the TOT-based decomposition of the impulse response is presented, outlining the main idea recently introduced in [18].
The main framework considered in this paper is a single-input single-output (SISO) linear system identification scenario, where all the involved signals are zero-mean and real-valued. In this context, the available signals are the input $x(t)$ and the reference $d(t)$, where $t$ represents the discrete-time index. The two sequences are correlated, since the reference signal is obtained at the output of an unknown system driven by the input signal, while the output is corrupted by an additive noise, as shown in Figure 1. Thus,
$$d(t) = \mathbf{h}^T \mathbf{x}(t) + v(t) = y(t) + v(t), \quad (1)$$
where the vector $\mathbf{h}$ contains the $L$ coefficients of the unknown impulse response (with superscript $T$ denoting transposition), $\mathbf{x}(t) = \begin{bmatrix} x(t) & x(t-1) & \cdots & x(t-L+1) \end{bmatrix}^T$ is a vector that contains the $L$ most recent time samples of the input signal $x(t)$, and $v(t)$ is an additive noise, which is uncorrelated with $x(t)$. In (1), $y(t) = \mathbf{h}^T \mathbf{x}(t)$ represents the output signal.
Based on the correlation between the reference sequence and the input signal, and following the MSE optimization criterion, an estimate of $\mathbf{h}$ can be obtained by solving the Wiener–Hopf equations [2], i.e.,
$$\mathbf{R}_x \mathbf{h}_W = \mathbf{r}_{xd}, \quad (2)$$
where
$$\mathbf{R}_x = E\left[\mathbf{x}(t)\mathbf{x}^T(t)\right], \quad (3)$$
$$\mathbf{r}_{xd} = E\left[\mathbf{x}(t)d(t)\right] \quad (4)$$
represent the covariance matrix of the input signal and the cross-correlation vector between the input and reference sequences, respectively, $\mathbf{h}_W$ contains the coefficients of the Wiener filter (i.e., $L$ parameters), while $E[\cdot]$ denotes mathematical expectation. Thus, the conventional Wiener filter results by using the matrix inversion operation, so that
$$\mathbf{h}_W = \mathbf{R}_x^{-1}\mathbf{r}_{xd}. \quad (5)$$
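As a quick illustration of (2)–(5), the sketch below estimates the statistics by sample averaging and then solves the Wiener–Hopf equations. The function name, the regularization constant, and the averaging scheme are our own illustrative choices, not from the paper; note that solving the linear system numerically is preferable to forming the inverse explicitly.

```python
import numpy as np

def wiener_filter(x, d, L, delta=1e-8):
    """Conventional Wiener filter estimate of an L-tap impulse response,
    computed from N samples of the input x(t) and the reference d(t).
    R_x and r_xd are sample averages; a small diagonal loading (delta)
    is added for numerical safety, as suggested in the text."""
    N = len(x)
    R = np.zeros((L, L))
    r = np.zeros(L)
    for t in range(L - 1, N):
        xt = x[t::-1][:L]            # [x(t), x(t-1), ..., x(t-L+1)]
        R += np.outer(xt, xt)
        r += xt * d[t]
    n = N - L + 1
    R /= n
    r /= n
    R += delta * np.eye(L)
    # solve R_x h_W = r_xd instead of computing R_x^{-1} explicitly
    return np.linalg.solve(R, r)
```

With noiseless data, the sample-based Wiener solution recovers the true impulse response (up to the tiny regularization).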
In order to avoid matrix inversion, several alternative methods for solving (5) can be applied, the basic idea being to obtain the final solution in an iterative manner. In this context, the CG method [32] represents a popular choice, belonging to the category of exact line search methods [34,35]. Hence, considering the initialization $\mathbf{h}_W^{(0)}$, an initial residual $\mathbf{z}^{(0)} = \mathbf{r}_{xd} - \mathbf{R}_x \mathbf{h}_W^{(0)}$ can be computed. The initial step also requires a conjugate vector $\mathbf{c}^{(0)} = \mathbf{z}^{(0)}$ and an auxiliary scalar $\gamma^{(0)} = \left[\mathbf{z}^{(0)}\right]^T \mathbf{z}^{(0)}$. Using this initialization, the CG algorithm runs for $k$ steps, each one involving the relations:
$$\mathbf{q}^{(k)} = \mathbf{R}_x \mathbf{c}^{(k-1)}, \quad (6)$$
$$\alpha^{(k)} = \frac{\gamma^{(k-1)}}{\left[\mathbf{c}^{(k-1)}\right]^T \mathbf{q}^{(k)}}, \quad (7)$$
$$\mathbf{h}_W^{(k)} = \mathbf{h}_W^{(k-1)} + \alpha^{(k)}\mathbf{c}^{(k-1)}, \quad (8)$$
$$\mathbf{z}^{(k)} = \mathbf{z}^{(k-1)} - \alpha^{(k)}\mathbf{q}^{(k)}, \quad (9)$$
$$\gamma^{(k)} = \left[\mathbf{z}^{(k)}\right]^T \mathbf{z}^{(k)}, \quad (10)$$
$$\beta^{(k)} = \frac{\gamma^{(k)}}{\gamma^{(k-1)}}, \quad (11)$$
$$\mathbf{c}^{(k)} = \mathbf{z}^{(k)} + \beta^{(k)}\mathbf{c}^{(k-1)}. \quad (12)$$
The stopping criterion can be related to a maximum number of updates (i.e., $k = 1, 2, \ldots, K$) or a predefined threshold for the residual.
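The recursion in (6)–(12) maps directly to code. The following is a minimal sketch of such a CG solver (function name and the residual tolerance are illustrative assumptions); only matrix-vector products with $\mathbf{R}_x$ are needed, never its inverse.

```python
import numpy as np

def conjugate_gradient(R, r, h0=None, K=None, tol=1e-12):
    """Iteratively solve R h = r with the CG method, following (6)-(12).
    Stops after K updates or when the squared residual falls below tol."""
    L = len(r)
    K = K or L
    h = np.zeros(L) if h0 is None else h0.copy()
    z = r - R @ h                    # initial residual z(0)
    c = z.copy()                     # initial conjugate vector c(0)
    gamma = z @ z                    # auxiliary scalar gamma(0)
    for _ in range(K):
        if gamma < tol:              # residual-based stopping criterion
            break
        q = R @ c                    # (6)
        alpha = gamma / (c @ q)      # (7)
        h = h + alpha * c            # (8)
        z = z - alpha * q            # (9)
        gamma_new = z @ z            # (10)
        beta = gamma_new / gamma     # (11)
        c = z + beta * c             # (12)
        gamma = gamma_new
    return h
```

For a symmetric positive-definite matrix of size $L$, CG converges in at most $L$ steps in exact arithmetic, and typically much faster in practice.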
The convergence of the CG algorithm is influenced by the condition number of $\mathbf{R}_x$ [5]: the larger this number, the slower the convergence. In order to improve the convergence rate, a preconditioning procedure could be applied to this matrix. There are different methods for choosing the so-called preconditioner (i.e., a matrix that multiplies $\mathbf{R}_x$), like Jacobi, Gauss–Seidel, etc. Basically, the algorithm from (6)–(12) is reformulated to incorporate the preconditioning directly into the iteration. On the other hand, this procedure involves additional computations. Nevertheless, the purpose of this paper is not to analyze the influence of different preconditioners on the overall performance of the CG algorithm. Our primary goal is to develop the decomposition-based approach in conjunction with the CG method for solving the Wiener–Hopf equations. In this context, the main challenges are related to the connection between the component filters and the specific initialization (as will be shown in the next section), and not to the performance of the CG algorithm itself. Consequently, in the following, the basic CG algorithm from (6)–(12) is considered without preconditioning. However, in order to preserve the positive-definite character of the covariance matrix and to avoid any potential numerical/stability problems [6], it is recommended to add a very small positive constant to the elements of the main diagonal.
The maximum number of updates required by the Wiener filter using the CG method (namely WF-CG) to reach the solution of the conventional Wiener filter (WF) is generally much smaller than the filter length. This is supported in Figure 2 and Figure 3, where the performances of the conventional WF and WF-CG are analyzed in two different scenarios for the identification of a network echo path of length $L = 512$ (using a sampling rate of 8 kHz). This impulse response results from the first cluster of the ITU-T G.168 Recommendation [36] concerning digital network echo cancellers; it contains 64 coefficients padded with zeros up to the full length $L$. The required statistics ($\mathbf{R}_x$ and $\mathbf{r}_{xd}$) are estimated by averaging across $N = ML$ data samples of $x(t)$ and $d(t)$, with $M > 1$. The reference signal is obtained according to (1), using a first-order autoregressive [AR(1)] process as input, which results from filtering white Gaussian noise through an AR(1) model whose pole is set to 0.8. The additive noise is white and Gaussian, with $\mathrm{SNR} = \sigma_y^2/\sigma_v^2$, where $\sigma_y^2$ and $\sigma_v^2$ stand for the variances of $y(t)$ and $v(t)$, respectively. The results are shown using a common performance measure in system identification scenarios, namely the normalized misalignment (in dB), defined as $20\log_{10}\left(\left\|\mathbf{h} - \mathbf{h}_W\right\|_2 / \left\|\mathbf{h}\right\|_2\right)$ (where $\|\cdot\|_2$ denotes the Euclidean norm); it basically shows the "difference" between the true impulse response and the estimated one. The lower this quantity, the better the accuracy of the estimate.
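The normalized misalignment used throughout the experiments is a one-line computation; the helper below (an illustrative name, not from the paper) makes the definition explicit.

```python
import numpy as np

def misalignment_db(h, h_hat):
    """Normalized misalignment 20*log10(||h - h_hat||_2 / ||h||_2), in dB.
    Lower values indicate a more accurate estimate of h."""
    return 20.0 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h))
```

For instance, an estimate whose error norm is 10% of the true response's norm corresponds to a misalignment of -20 dB.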
In the first scenario, considered in Figure 2, we evaluate the impact of using different amounts of data ($N = ML$) to estimate the statistics by varying the value of $M$. It can be noticed that a low amount of data significantly influences the accuracy of the Wiener solution. Nevertheless, the WF-CG converges toward the conventional WF after a small number of CG iterations (as compared to the filter length). Second, in Figure 3, the SNR influence is outlined. As expected, a lower SNR reduces the accuracy of the Wiener estimate. Similarly to the previous experiment, the WF-CG reaches the conventional WF for $K \ll L$. Both analyzed scenarios confirm the influence of the main factors that affect the behavior of the Wiener filter, i.e., the amount of available data for estimating the statistics and the SNR level. Thus, it is of great importance to improve the overall performance and robustness related to these aspects.
In terms of computational complexity, the conventional Wiener solution based on matrix inversion requires $O(L^3)$ operations, while the iterative version that uses the CG method needs $O(KL^2)$ operations, with $K \ll L$. Nevertheless, when identifying a long-length impulse response, a large value of $K$ could be required for the CG iterations. This also motivates the dimensionality reduction of the problem, i.e., reformulating a system identification scenario with a large parameter space (a large number of coefficients) as a combination of the estimates provided by shorter filters.
In this regard, the recent solution from [18] is based on a third-order tensor decomposition of the impulse response, namely the TOT decomposition. As a result, the final estimate is obtained as a combination (via the Kronecker product) of the coefficients of three sets of filters, which are significantly shorter than the original impulse response. This idea is briefly explained in the following. First, let us consider that the length of the filter can be factorized as $L = L_1 L_2$, with $L_1 \geq L_2$, so that the impulse response of the system results in [8]
$$\mathbf{h} = \sum_{i=1}^{L_2} \mathbf{h}_{2,i} \otimes \mathbf{h}_{1,i}. \quad (13)$$
Here, the shorter impulse responses $\mathbf{h}_{1,i}$ and $\mathbf{h}_{2,i}$ have the lengths $L_1$ and $L_2$, respectively, while $\otimes$ denotes the Kronecker product [37]. At this point, let us assume that $\mathbf{h}_{1,i}$ has a low-rank structure [8]. Moreover, its length can be factorized as $L_1 = L_{11} L_{12}$ (with $L_{11} \geq L_{12}$). Consequently, the global impulse response results in
$$\mathbf{h} = \sum_{i=1}^{L_2} \sum_{j=1}^{P} \mathbf{h}_{2,i} \otimes \mathbf{h}_{12,ij} \otimes \mathbf{h}_{11,ij}, \quad (14)$$
where the two impulse responses $\mathbf{h}_{11,ij}$ and $\mathbf{h}_{12,ij}$ have the lengths $L_{11}$ and $L_{12}$, respectively, while $P \leq L_{12}$. It can be noticed that the coefficients of $\mathbf{h}$ can be "rearranged" in the form of a third-order tensor, i.e.,
$$\mathcal{H} = \sum_{j=1}^{P} \sum_{i=1}^{L_2} \mathbf{h}_{11,ij} \circ \mathbf{h}_{12,ij} \circ \mathbf{h}_{2,i}, \quad (15)$$
where $\circ$ stands for the outer product. Furthermore, $\mathcal{H}$ is in fact a sum of $P$ third-order tensors, each one of rank $L_2$ [19]. As indicated in [18], the recommended values for $L_2$ are small (e.g., 2 or 3).
Summarizing, the identification of the global impulse response $\mathbf{h}$ of length $L$ (i.e., with $L_{11} L_{12} L_2$ coefficients) is transformed into a combination of three (shorter) sets of impulse responses, i.e., $\mathbf{h}_{11,ij}$, $\mathbf{h}_{12,ij}$, and $\mathbf{h}_{2,i}$ (for $i = 1, 2, \ldots, L_2$ and $j = 1, 2, \ldots, P$). As a result, the new parameter space of the filter involves only $P L_{11} L_2$, $P L_{12} L_2$, and $L_2^2$ coefficients, respectively. Since usually $P \ll L_{12}$ [18], the TOT-based decomposition leads to a significant dimensionality reduction, which is especially valuable when dealing with long-length filters (i.e., large values of $L$). While the conventional Wiener filter using matrix inversion involves a computational complexity proportional to $O(L^3) = O(L_{11}^3 L_{12}^3 L_2^3)$, the decomposition-based solution using the CG method combines the estimates of three shorter filters, which results in a computational complexity proportional to $O\left[(P L_{11} L_2)^2 + (P L_{12} L_2)^2 + L_2^4\right]$, with $P \ll L_{12}$.
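The dimensionality reduction in (14) can be made concrete with a short numerical sketch. The component filters below are random placeholders (the dimensions mirror the paper's setup $L_{11} = L_{12} = 16$, $L_2 = 2$, here with $P = 2$); the point is only the Kronecker-product reconstruction and the parameter count.

```python
import numpy as np

# Illustrative TOT decomposition, cf. (14): a global response of length
# L = L11 * L12 * L2 is rebuilt from three sets of short component filters.
L11, L12, L2, P = 16, 16, 2, 2

rng = np.random.default_rng(2)
h11 = rng.standard_normal((L2, P, L11))   # h_{11,ij}, each of length L11
h12 = rng.standard_normal((L2, P, L12))   # h_{12,ij}, each of length L12
h2 = rng.standard_normal((L2, L2))        # h_{2,i},   each of length L2

# global impulse response, as the double sum of Kronecker products in (14)
h = sum(np.kron(h2[i], np.kron(h12[i, j], h11[i, j]))
        for i in range(L2) for j in range(P))

full = L11 * L12 * L2                            # 512 coefficients
reduced = P * L11 * L2 + P * L12 * L2 + L2 * L2  # 64 + 64 + 4 = 132
```

Even with $P = 2$, the parameter space shrinks from 512 to 132 coefficients; with $P = 1$ (used later in the experiments), it drops to 68.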

3. Iterative Wiener Filter Based on TOT and CG

The current section is dedicated to the development of the proposed solution, an iterative Wiener filter based on the TOT decomposition that uses the CG method for solving the associated sets of Wiener–Hopf equations. For this purpose, and for better readability of the upcoming developments, several preliminary elements from [18] and the specific notation are presented at the beginning of this section. These preliminaries are related to the TOT-based decomposition framework and the associated Wiener–Hopf equations. Then, the proposed solution is developed. The differences between the version from [18] and the current proposal based on the CG method are mainly related to (i) the specific initialization that involves auxiliary matrices and (ii) the connection between the component filters from one CG cycle to another within the main iterations of the proposed CG-based Wiener filter. Moreover, since the impulse responses from (14) have different lengths, the CG cycles corresponding to the component filters use different numbers of updates.
As shown in [18], the estimates of the component impulse responses from (14) can be obtained based on a multilinear optimization approach [38,39]. In other words, two of the component impulse responses are considered fixed, while optimizing the third (remaining) one. This approach leads to three sets of Wiener–Hopf equations, i.e.,
$$\left[\overline{\underline{\mathbf{G}}}_{12,11}\right]^T \mathbf{R}_x \overline{\underline{\mathbf{G}}}_{12,11} \, \underline{\mathbf{g}}_{2,W} = \left[\overline{\underline{\mathbf{G}}}_{12,11}\right]^T \mathbf{r}_{xd}, \quad (16)$$
$$\left[\overline{\underline{\mathbf{G}}}_{2,11}\right]^T \mathbf{R}_x \overline{\underline{\mathbf{G}}}_{2,11} \, \overline{\underline{\mathbf{g}}}_{12,W} = \left[\overline{\underline{\mathbf{G}}}_{2,11}\right]^T \mathbf{r}_{xd}, \quad (17)$$
$$\left[\overline{\underline{\mathbf{G}}}_{2,12}\right]^T \mathbf{R}_x \overline{\underline{\mathbf{G}}}_{2,12} \, \overline{\underline{\mathbf{g}}}_{11,W} = \left[\overline{\underline{\mathbf{G}}}_{2,12}\right]^T \mathbf{r}_{xd}. \quad (18)$$
The corresponding data structures and the associated notation are shown in Table 1, where the subscript $W$ generally denotes the Wiener estimate of the corresponding impulse response from (14), while $\mathbf{I}_L$ is the identity matrix of size $L \times L$.
At this point, (16)–(18) are going to be solved with the CG method. The resulting solutions will then be sequentially iterated and combined (via the Kronecker product). Finally, the Wiener filter $\mathbf{g}_W$, which represents an estimate of $\mathbf{h}$, will be obtained as
$$\mathbf{g}_W = \sum_{i=1}^{L_2} \sum_{j=1}^{P} \mathbf{g}_{2,W,i} \otimes \mathbf{g}_{12,W,ij} \otimes \mathbf{g}_{11,W,ij}, \quad (19)$$
where $\mathbf{g}_{2,W,i}$, $\mathbf{g}_{12,W,ij}$, and $\mathbf{g}_{11,W,ij}$ are obtained from $\underline{\mathbf{g}}_{2,W}$, $\overline{\underline{\mathbf{g}}}_{12,W}$, and $\overline{\underline{\mathbf{g}}}_{11,W}$, respectively. All these steps of the designed algorithm are detailed in the following.
As mentioned before, the developed iterative Wiener filter is based on the TOT decomposition of the global impulse response, while the CG updates are used to efficiently solve (16), (17), and (18), respectively. To this purpose, the main iterations of the Wiener filter are denoted by superscripts $(n)$, while the CG updates appear as subscripts $(k)$. The initialization of the algorithm concerns the three component filters, which are initially defined as
$$\underline{\mathbf{g}}_{2,W(K_2)}^{(0)} = \begin{bmatrix} \epsilon & \mathbf{0}_{L_2^2 - 1}^T \end{bmatrix}^T, \quad (20)$$
$$\overline{\underline{\mathbf{g}}}_{12,W(K_1)}^{(0)} = \begin{bmatrix} \epsilon & \mathbf{0}_{P L_{12} L_2 - 1}^T \end{bmatrix}^T, \quad (21)$$
$$\overline{\underline{\mathbf{g}}}_{11,W(K_1)}^{(0)} = \begin{bmatrix} \epsilon & \mathbf{0}_{P L_{11} L_2 - 1}^T \end{bmatrix}^T, \quad (22)$$
where $K_1$ and $K_2$ represent the maximum numbers of CG updates (for the component filters), $\epsilon$ is a very small positive number, and $\mathbf{0}$ denotes an all-zeros vector with the length indicated in the subscript. The reason for using $K_1 \neq K_2$ is that the component filters have different lengths. Among them, the length of $\underline{\mathbf{g}}_{2,W}$ (which has $L_2^2$ coefficients) could be significantly smaller, taking into account that $\overline{\underline{\mathbf{g}}}_{12,W}$ and $\overline{\underline{\mathbf{g}}}_{11,W}$ have $P L_{12} L_2$ and $P L_{11} L_2$ coefficients, respectively, while $L_2 \ll L_{12} \leq L_{11}$.
At this point, we need to introduce the auxiliary matrices (for $i = 1, 2, \ldots, L_2$ and $j = 1, 2, \ldots, P$):
$$\mathbf{M}_{12,11,ij}^{(0)} = \mathbf{I}_{L_2} \otimes \begin{bmatrix} \epsilon & \mathbf{0}_{L_{12} - 1}^T \end{bmatrix}^T \otimes \begin{bmatrix} \epsilon & \mathbf{0}_{L_{11} - 1}^T \end{bmatrix}^T, \quad (23)$$
$$\mathbf{M}_{11,ij}^{(0)} = \mathbf{I}_{L_{12}} \otimes \begin{bmatrix} \epsilon & \mathbf{0}_{L_{11} - 1}^T \end{bmatrix}^T, \quad (24)$$
which will further facilitate the definition of the matrices $\overline{\underline{\mathbf{G}}}_{12,11}$ and $\overline{\underline{\mathbf{G}}}_{2,11}$. Hence, in each main iteration $(n)$ of the algorithm, we first construct, using (23):
$$\mathbf{G}_{12,11,ij}^{(n)} = \mathbf{M}_{12,11,ij}^{(n-1)}, \quad i = 1, 2, \ldots, L_2, \; j = 1, 2, \ldots, P, \quad (25)$$
$$\overline{\mathbf{G}}_{12,11,i}^{(n)} = \sum_{j=1}^{P} \mathbf{G}_{12,11,ij}^{(n)}, \quad i = 1, 2, \ldots, L_2, \quad (26)$$
$$\overline{\underline{\mathbf{G}}}_{12,11}^{(n)} = \begin{bmatrix} \overline{\mathbf{G}}_{12,11,1}^{(n)} & \overline{\mathbf{G}}_{12,11,2}^{(n)} & \cdots & \overline{\mathbf{G}}_{12,11,L_2}^{(n)} \end{bmatrix}. \quad (27)$$
These allow us to compute
$$\mathbf{R}_2^{(n)} = \left[\overline{\underline{\mathbf{G}}}_{12,11}^{(n)}\right]^T \mathbf{R}_x \overline{\underline{\mathbf{G}}}_{12,11}^{(n)}, \quad (28)$$
$$\mathbf{r}_2^{(n)} = \left[\overline{\underline{\mathbf{G}}}_{12,11}^{(n)}\right]^T \mathbf{r}_{xd}. \quad (29)$$
The structures from (28) and (29) are used to process (16) with the CG method. Consequently, using (20), the initial settings are
$$\underline{\mathbf{g}}_{2,W(0)}^{(n)} = \underline{\mathbf{g}}_{2,W(K_2)}^{(n-1)}, \quad (30)$$
$$\mathbf{z}_{2(0)}^{(n)} = \mathbf{r}_2^{(n)} - \mathbf{R}_2^{(n)} \underline{\mathbf{g}}_{2,W(0)}^{(n)}, \quad (31)$$
$$\mathbf{c}_{2(0)}^{(n)} = \mathbf{z}_{2(0)}^{(n)}, \quad (32)$$
$$\gamma_{2(0)}^{(n)} = \left[\mathbf{z}_{2(0)}^{(n)}\right]^T \mathbf{z}_{2(0)}^{(n)}. \quad (33)$$
Next, for $k_2 = 1, 2, \ldots, K_2$, we perform, similar to (6)–(12):
$$\mathbf{q}_{2(k_2)}^{(n)} = \mathbf{R}_2^{(n)} \mathbf{c}_{2(k_2-1)}^{(n)}, \quad (34)$$
$$\alpha_{2(k_2)}^{(n)} = \frac{\gamma_{2(k_2-1)}^{(n)}}{\left[\mathbf{c}_{2(k_2-1)}^{(n)}\right]^T \mathbf{q}_{2(k_2)}^{(n)}}, \quad (35)$$
$$\underline{\mathbf{g}}_{2,W(k_2)}^{(n)} = \underline{\mathbf{g}}_{2,W(k_2-1)}^{(n)} + \alpha_{2(k_2)}^{(n)} \mathbf{c}_{2(k_2-1)}^{(n)}, \quad (36)$$
$$\mathbf{z}_{2(k_2)}^{(n)} = \mathbf{z}_{2(k_2-1)}^{(n)} - \alpha_{2(k_2)}^{(n)} \mathbf{q}_{2(k_2)}^{(n)}, \quad (37)$$
$$\gamma_{2(k_2)}^{(n)} = \left[\mathbf{z}_{2(k_2)}^{(n)}\right]^T \mathbf{z}_{2(k_2)}^{(n)}, \quad (38)$$
$$\beta_{2(k_2)}^{(n)} = \frac{\gamma_{2(k_2)}^{(n)}}{\gamma_{2(k_2-1)}^{(n)}}, \quad (39)$$
$$\mathbf{c}_{2(k_2)}^{(n)} = \mathbf{z}_{2(k_2)}^{(n)} + \beta_{2(k_2)}^{(n)} \mathbf{c}_{2(k_2-1)}^{(n)}. \quad (40)$$
The final solution $\underline{\mathbf{g}}_{2,W(K_2)}^{(n)}$ will represent the initialization for the CG cycle associated with this filter [similar to (30)] in the next main iteration of the algorithm. Also, it is decomposed as
$$\underline{\mathbf{g}}_{2,W(K_2)}^{(n)} = \begin{bmatrix} \left[\mathbf{g}_{2,W(K_2),1}^{(n)}\right]^T & \left[\mathbf{g}_{2,W(K_2),2}^{(n)}\right]^T & \cdots & \left[\mathbf{g}_{2,W(K_2),L_2}^{(n)}\right]^T \end{bmatrix}^T, \quad (41)$$
which further allows the evaluation of
$$\mathbf{G}_{2,11,ij}^{(n)} = \mathbf{g}_{2,W(K_2),i}^{(n)} \otimes \mathbf{M}_{11,ij}^{(n-1)}, \quad i = 1, 2, \ldots, L_2, \; j = 1, 2, \ldots, P, \quad (42)$$
$$\overline{\mathbf{G}}_{2,11,i}^{(n)} = \begin{bmatrix} \mathbf{G}_{2,11,i1}^{(n)} & \mathbf{G}_{2,11,i2}^{(n)} & \cdots & \mathbf{G}_{2,11,iP}^{(n)} \end{bmatrix}, \quad i = 1, 2, \ldots, L_2, \quad (43)$$
$$\overline{\underline{\mathbf{G}}}_{2,11}^{(n)} = \begin{bmatrix} \overline{\mathbf{G}}_{2,11,1}^{(n)} & \overline{\mathbf{G}}_{2,11,2}^{(n)} & \cdots & \overline{\mathbf{G}}_{2,11,L_2}^{(n)} \end{bmatrix}, \quad (44)$$
so that we can compute
$$\mathbf{R}_{12}^{(n)} = \left[\overline{\underline{\mathbf{G}}}_{2,11}^{(n)}\right]^T \mathbf{R}_x \overline{\underline{\mathbf{G}}}_{2,11}^{(n)}, \quad (45)$$
$$\mathbf{r}_{12}^{(n)} = \left[\overline{\underline{\mathbf{G}}}_{2,11}^{(n)}\right]^T \mathbf{r}_{xd}. \quad (46)$$
The notation from (45) and (46) is used to process (17) with the CG updates. Hence, in this step, we follow the initial settings from (21), so that
$$\overline{\underline{\mathbf{g}}}_{12,W(0)}^{(n)} = \overline{\underline{\mathbf{g}}}_{12,W(K_1)}^{(n-1)}, \quad (47)$$
$$\mathbf{z}_{12(0)}^{(n)} = \mathbf{r}_{12}^{(n)} - \mathbf{R}_{12}^{(n)} \overline{\underline{\mathbf{g}}}_{12,W(0)}^{(n)}, \quad (48)$$
$$\mathbf{c}_{12(0)}^{(n)} = \mathbf{z}_{12(0)}^{(n)}, \quad (49)$$
$$\gamma_{12(0)}^{(n)} = \left[\mathbf{z}_{12(0)}^{(n)}\right]^T \mathbf{z}_{12(0)}^{(n)}. \quad (50)$$
Consequently, the CG cycle for the second filter is defined by the relations:
$$\mathbf{q}_{12(k_1)}^{(n)} = \mathbf{R}_{12}^{(n)} \mathbf{c}_{12(k_1-1)}^{(n)}, \quad (51)$$
$$\alpha_{12(k_1)}^{(n)} = \frac{\gamma_{12(k_1-1)}^{(n)}}{\left[\mathbf{c}_{12(k_1-1)}^{(n)}\right]^T \mathbf{q}_{12(k_1)}^{(n)}}, \quad (52)$$
$$\overline{\underline{\mathbf{g}}}_{12,W(k_1)}^{(n)} = \overline{\underline{\mathbf{g}}}_{12,W(k_1-1)}^{(n)} + \alpha_{12(k_1)}^{(n)} \mathbf{c}_{12(k_1-1)}^{(n)}, \quad (53)$$
$$\mathbf{z}_{12(k_1)}^{(n)} = \mathbf{z}_{12(k_1-1)}^{(n)} - \alpha_{12(k_1)}^{(n)} \mathbf{q}_{12(k_1)}^{(n)}, \quad (54)$$
$$\gamma_{12(k_1)}^{(n)} = \left[\mathbf{z}_{12(k_1)}^{(n)}\right]^T \mathbf{z}_{12(k_1)}^{(n)}, \quad (55)$$
$$\beta_{12(k_1)}^{(n)} = \frac{\gamma_{12(k_1)}^{(n)}}{\gamma_{12(k_1-1)}^{(n)}}, \quad (56)$$
$$\mathbf{c}_{12(k_1)}^{(n)} = \mathbf{z}_{12(k_1)}^{(n)} + \beta_{12(k_1)}^{(n)} \mathbf{c}_{12(k_1-1)}^{(n)}, \quad (57)$$
for $k_1 = 1, 2, \ldots, K_1$. The final solution $\overline{\underline{\mathbf{g}}}_{12,W(K_1)}^{(n)}$ will represent the initial setting in the next main iteration of the algorithm [similar to (47)]. The decomposition of this impulse response is performed in two steps, i.e.,
$$\overline{\underline{\mathbf{g}}}_{12,W(K_1)}^{(n)} = \begin{bmatrix} \left[\overline{\mathbf{g}}_{12,W(K_1),1}^{(n)}\right]^T & \left[\overline{\mathbf{g}}_{12,W(K_1),2}^{(n)}\right]^T & \cdots & \left[\overline{\mathbf{g}}_{12,W(K_1),L_2}^{(n)}\right]^T \end{bmatrix}^T, \quad (58)$$
$$\overline{\mathbf{g}}_{12,W(K_1),i}^{(n)} = \begin{bmatrix} \left[\mathbf{g}_{12,W(K_1),i1}^{(n)}\right]^T & \left[\mathbf{g}_{12,W(K_1),i2}^{(n)}\right]^T & \cdots & \left[\mathbf{g}_{12,W(K_1),iP}^{(n)}\right]^T \end{bmatrix}^T, \quad i = 1, 2, \ldots, L_2. \quad (59)$$
At this point, having the components from (41) and (59), we continue with the development associated with the last component filter, starting with the evaluation of
$$\mathbf{G}_{2,12,ij}^{(n)} = \mathbf{g}_{2,W(K_2),i}^{(n)} \otimes \mathbf{g}_{12,W(K_1),ij}^{(n)} \otimes \mathbf{I}_{L_{11}}, \quad i = 1, 2, \ldots, L_2, \; j = 1, 2, \ldots, P, \quad (60)$$
$$\overline{\mathbf{G}}_{2,12,i}^{(n)} = \begin{bmatrix} \mathbf{G}_{2,12,i1}^{(n)} & \mathbf{G}_{2,12,i2}^{(n)} & \cdots & \mathbf{G}_{2,12,iP}^{(n)} \end{bmatrix}, \quad i = 1, 2, \ldots, L_2, \quad (61)$$
$$\overline{\underline{\mathbf{G}}}_{2,12}^{(n)} = \begin{bmatrix} \overline{\mathbf{G}}_{2,12,1}^{(n)} & \overline{\mathbf{G}}_{2,12,2}^{(n)} & \cdots & \overline{\mathbf{G}}_{2,12,L_2}^{(n)} \end{bmatrix}. \quad (62)$$
Therefore, introducing the notation:
$$\mathbf{R}_{11}^{(n)} = \left[\overline{\underline{\mathbf{G}}}_{2,12}^{(n)}\right]^T \mathbf{R}_x \overline{\underline{\mathbf{G}}}_{2,12}^{(n)}, \quad (63)$$
$$\mathbf{r}_{11}^{(n)} = \left[\overline{\underline{\mathbf{G}}}_{2,12}^{(n)}\right]^T \mathbf{r}_{xd}, \quad (64)$$
we can further process (18) with the CG method. To this purpose, the initialization relies on (22), so that the settings are
$$\overline{\underline{\mathbf{g}}}_{11,W(0)}^{(n)} = \overline{\underline{\mathbf{g}}}_{11,W(K_1)}^{(n-1)}, \quad (65)$$
$$\mathbf{z}_{11(0)}^{(n)} = \mathbf{r}_{11}^{(n)} - \mathbf{R}_{11}^{(n)} \overline{\underline{\mathbf{g}}}_{11,W(0)}^{(n)}, \quad (66)$$
$$\mathbf{c}_{11(0)}^{(n)} = \mathbf{z}_{11(0)}^{(n)}, \quad (67)$$
$$\gamma_{11(0)}^{(n)} = \left[\mathbf{z}_{11(0)}^{(n)}\right]^T \mathbf{z}_{11(0)}^{(n)}. \quad (68)$$
Thus, for $k_1 = 1, 2, \ldots, K_1$, the CG cycle for the third filter consists of the relations:
$$\mathbf{q}_{11(k_1)}^{(n)} = \mathbf{R}_{11}^{(n)} \mathbf{c}_{11(k_1-1)}^{(n)}, \quad (69)$$
$$\alpha_{11(k_1)}^{(n)} = \frac{\gamma_{11(k_1-1)}^{(n)}}{\left[\mathbf{c}_{11(k_1-1)}^{(n)}\right]^T \mathbf{q}_{11(k_1)}^{(n)}}, \quad (70)$$
$$\overline{\underline{\mathbf{g}}}_{11,W(k_1)}^{(n)} = \overline{\underline{\mathbf{g}}}_{11,W(k_1-1)}^{(n)} + \alpha_{11(k_1)}^{(n)} \mathbf{c}_{11(k_1-1)}^{(n)}, \quad (71)$$
$$\mathbf{z}_{11(k_1)}^{(n)} = \mathbf{z}_{11(k_1-1)}^{(n)} - \alpha_{11(k_1)}^{(n)} \mathbf{q}_{11(k_1)}^{(n)}, \quad (72)$$
$$\gamma_{11(k_1)}^{(n)} = \left[\mathbf{z}_{11(k_1)}^{(n)}\right]^T \mathbf{z}_{11(k_1)}^{(n)}, \quad (73)$$
$$\beta_{11(k_1)}^{(n)} = \frac{\gamma_{11(k_1)}^{(n)}}{\gamma_{11(k_1-1)}^{(n)}}, \quad (74)$$
$$\mathbf{c}_{11(k_1)}^{(n)} = \mathbf{z}_{11(k_1)}^{(n)} + \beta_{11(k_1)}^{(n)} \mathbf{c}_{11(k_1-1)}^{(n)}. \quad (75)$$
The decomposition of the final solution $\overline{\underline{\mathbf{g}}}_{11,W(K_1)}^{(n)}$ results in
$$\overline{\underline{\mathbf{g}}}_{11,W(K_1)}^{(n)} = \begin{bmatrix} \left[\overline{\mathbf{g}}_{11,W(K_1),1}^{(n)}\right]^T & \left[\overline{\mathbf{g}}_{11,W(K_1),2}^{(n)}\right]^T & \cdots & \left[\overline{\mathbf{g}}_{11,W(K_1),L_2}^{(n)}\right]^T \end{bmatrix}^T, \quad (76)$$
$$\overline{\mathbf{g}}_{11,W(K_1),i}^{(n)} = \begin{bmatrix} \left[\mathbf{g}_{11,W(K_1),i1}^{(n)}\right]^T & \left[\mathbf{g}_{11,W(K_1),i2}^{(n)}\right]^T & \cdots & \left[\mathbf{g}_{11,W(K_1),iP}^{(n)}\right]^T \end{bmatrix}^T, \quad i = 1, 2, \ldots, L_2, \quad (77)$$
and provides the final elements for evaluating the estimated impulse response based on (19). Also, $\overline{\underline{\mathbf{g}}}_{11,W(K_1)}^{(n)}$ represents the initialization for the next main iteration of the algorithm, according to (65).
Summarizing, using (41), (59), and (77), we obtain
$$\mathbf{g}_W^{(n)} = \sum_{i=1}^{L_2} \sum_{j=1}^{P} \mathbf{g}_{2,W(K_2),i}^{(n)} \otimes \mathbf{g}_{12,W(K_1),ij}^{(n)} \otimes \mathbf{g}_{11,W(K_1),ij}^{(n)}. \quad (78)$$
Finally, using the same components, we evaluate the auxiliary matrices (for $i = 1, 2, \ldots, L_2$ and $j = 1, 2, \ldots, P$):
$$\mathbf{M}_{12,11,ij}^{(n)} = \mathbf{I}_{L_2} \otimes \mathbf{g}_{12,W(K_1),ij}^{(n)} \otimes \mathbf{g}_{11,W(K_1),ij}^{(n)}, \quad (79)$$
$$\mathbf{M}_{11,ij}^{(n)} = \mathbf{I}_{L_{12}} \otimes \mathbf{g}_{11,W(K_1),ij}^{(n)}. \quad (80)$$
These will be used in the next main iteration of the algorithm, in order to compute the structures from (25) and (42), respectively.
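The alternating structure of the main iterations can be sketched numerically as follows. This is a toy illustration of the loop structure, not the authors' code: the helper names, the diagonal loading, and the toy dimensions in the usage note are our own assumptions; the block-matrix constructions follow the Kronecker-product definitions of the auxiliary matrices, and each set of Wiener–Hopf equations is handled by a short CG cycle.

```python
import numpy as np

def cg_cycle(R, r, h, K):
    """Basic CG cycle, cf. (6)-(12): K updates for the system R h = r."""
    z = r - R @ h
    c = z.copy()
    g = z @ z
    for _ in range(K):
        if g <= 1e-30:            # residual already negligible
            break
        q = R @ c
        a = g / (c @ q)
        h = h + a * c
        z = z - a * q
        g_new = z @ z
        c = z + (g_new / g) * c
        g = g_new
    return h

def normal_eq(G, Rx, rxd, delta=1e-8):
    """Reduced Wiener-Hopf system G^T Rx G, G^T rxd, with a small
    diagonal loading (as recommended in Section 2 for stability)."""
    R = G.T @ Rx @ G
    R = R + delta * np.trace(R) / len(R) * np.eye(len(R))
    return R, G.T @ rxd

def iwf_tot_cg(Rx, rxd, L11, L12, L2, P, n_iter=10, K1=8, K2=1, eps=1e-3):
    """Toy sketch of the IWF-TOT-CG main loop (cf. Tables 2 and 3)."""
    g11 = np.zeros((L2, P, L11)); g11[:, :, 0] = eps    # cf. (22)
    g12 = np.zeros((L2, P, L12)); g12[:, :, 0] = eps    # cf. (21)
    g2 = np.zeros((L2, L2)); g2[:, 0] = eps             # cf. (20)
    for _ in range(n_iter):
        # fix g12, g11; build the combination matrix and solve (16) for g2
        G = np.hstack([sum(np.kron(np.eye(L2),
                                   np.kron(g12[i, j], g11[i, j])[:, None])
                           for j in range(P)) for i in range(L2)])
        R, r = normal_eq(G, Rx, rxd)
        g2 = cg_cycle(R, r, g2.ravel(), K2).reshape(L2, L2)
        # fix g2, g11; solve (17) for g12
        G = np.hstack([np.kron(g2[i][:, None],
                               np.kron(np.eye(L12), g11[i, j][:, None]))
                       for i in range(L2) for j in range(P)])
        R, r = normal_eq(G, Rx, rxd)
        g12 = cg_cycle(R, r, g12.ravel(), K1).reshape(L2, P, L12)
        # fix g2, g12; solve (18) for g11
        G = np.hstack([np.kron(np.kron(g2[i], g12[i, j])[:, None],
                               np.eye(L11))
                       for i in range(L2) for j in range(P)])
        R, r = normal_eq(G, Rx, rxd)
        g11 = cg_cycle(R, r, g11.ravel(), K1).reshape(L2, P, L11)
    # combine the component filters into the global estimate, cf. (19)
    return sum(np.kron(g2[i], np.kron(g12[i, j], g11[i, j]))
               for i in range(L2) for j in range(P))
```

As a sanity check on small toy dimensions (e.g., $L_{11} = L_{12} = 4$, $L_2 = 2$, $P = 1$, exact statistics for a white input), the loop recovers an exactly decomposable impulse response.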
The resulting iterative Wiener filter (IWF), based on the TOT decomposition and using the CG method, will be referred to as IWF-TOT-CG. Its main steps are summarized in Table 2, while the CG cycles for solving (16)–(18) are detailed in Table 3.

4. Simulation Results

The experimental setup is based on a network echo cancellation scenario [36], as previously described in Section 2 and related to the results reported in Figure 2 and Figure 3. The experiments were performed using MATLAB R2018b (for programming and graphic representations), running on a GIGABYTE AORUS 15G XC device (Windows 10 OS) sourced by GIGABYTE, Taipei, Taiwan, which has an Intel Core i7-10870H CPU with 8 cores, 16 logical processors (@2.21 GHz base speed), and 32 GB of RAM. The normalized misalignment (in dB) is used as the performance measure in all the following experiments. Summarizing, the main goal is to identify an impulse response $\mathbf{h}$ of length $L = 512$ (corresponding to a network echo path), while the input signal $x(t)$ is an AR(1) process. In this context, the reference signal $d(t)$ is obtained based on (1), using white Gaussian additive noise $v(t)$ with different SNRs. Specifically, three SNR levels are used, i.e., 20 dB, 10 dB, and 0 dB. The first corresponds to good SNR conditions, where the noise level is mild, so a good accuracy of the Wiener filter is expected. Second, SNR = 10 dB corresponds to moderately noisy conditions, where the noise starts to reduce the accuracy of the solution (i.e., an increase in the misalignment). Finally, heavy noise conditions are considered when SNR = 0 dB, critically influencing the reliability of the Wiener filter.
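For reproducibility, the experimental signals described above can be generated along the following lines (the function and parameter names are illustrative assumptions; the AR(1) model, the pole value, and the SNR definition follow Section 2).

```python
import numpy as np

def make_signals(h, N, pole=0.8, snr_db=20, seed=0):
    """Generate an AR(1) input x(t) (white Gaussian noise filtered through
    a one-pole model), the system output y(t), and the reference
    d(t) = y(t) + v(t), where v(t) is white Gaussian noise scaled so that
    SNR = sigma_y^2 / sigma_v^2 matches snr_db."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(N)
    x = np.empty(N)
    x[0] = w[0]
    for t in range(1, N):               # x(t) = pole * x(t-1) + w(t)
        x[t] = pole * x[t - 1] + w[t]
    y = np.convolve(x, h)[:N]           # output of the unknown system, cf. (1)
    var_v = np.var(y) / 10 ** (snr_db / 10)
    v = np.sqrt(var_v) * rng.standard_normal(N)
    return x, y + v
```

The returned pair (x, d) is then all that an algorithm under test may observe; the true impulse response h is used only for evaluating the misalignment.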

4.1. IWF-TOT-CG versus WF-CG

In the first set of simulations, the performance of the proposed IWF-TOT-CG is analyzed according to its main parameters, i.e., $K_1$, $K_2$, and $P$. The benchmark algorithm involved in the comparisons is the WF-CG, using $K = 50$ CG updates. As shown in Figure 2 and Figure 3, this value of $K$ is sufficient for the WF-CG to reach the conventional WF solution. Since $L = 512$, the decomposition of the IWF-TOT-CG is performed using $L_{11} = L_{12} = 16$ and $L_2 = 2$.
In Figure 4, the influence of $K_1$ is assessed, i.e., the maximum number of CG updates used for the two longer filters within the IWF-TOT-CG. Since the third filter (of length $L_2^2 = 4$) is very short and the corresponding CG parameter ($K_2$) should be much smaller than its length, it is natural to use the smallest value, $K_2 = 1$. In this simulation, the decomposition parameter of the IWF-TOT-CG is set to $P = 2$; the influence of this parameter will be analyzed in an upcoming experiment. Also, the amount of data available for estimating the statistics is $N = 5L$, while SNR = 20 dB. These represent good conditions for the WF-CG (i.e., the competing algorithm) to obtain a reliable estimate in terms of accuracy. Nevertheless, as we can notice from Figure 4, the proposed IWF-TOT-CG reaches a significantly lower misalignment level (i.e., a better accuracy) for all the values of $K_1$. A larger value of this parameter leads to a faster convergence rate, but only up to a certain limit. On the other hand, a larger number of iterations also increases the computational complexity in terms of the number of operations. Consequently, a compromise should be made. As we can notice from Figure 4, increasing the value of $K_1$ beyond 12 (e.g., to 14) does not lead to performance improvements. Besides, the difference between the cases $K_1 = 10$ and $K_1 = 12$ is not so apparent (as compared to the difference between $K_1 = 8$ and $K_1 = 10$): the convergence rate improves slightly, while the same misalignment level (i.e., accuracy) is reached. In this context, using $K_1 = 10$ or 12 represents a reasonable choice.
A similar experiment is considered in Figure 5, but using a fixed value for the first CG parameter (i.e., K 1 = 10 ) and two different values for K 2 , i.e., the number of CG iterations for the shorter filter. These represent the minimum and maximum values for this parameter, i.e., K 2 = 1 and K 2 = 4 , respectively; the maximum value equals the length of the corresponding filter ( L 2 2 ). The other conditions remain the same as in the previous simulation. Under these circumstances, it can be noticed in Figure 5 that a higher value of K 2 does not significantly influence the overall performance of the IWF-TOT-CG, so it is natural to use K 2 = 1 in the following experiments. We should note that similar conclusions are obtained when using other values of K 1 (for the two longer filters).
Next, the influence of the decomposition parameter P is analyzed in Figure 6. Based on the previous experiments, the setup used for the IWF-TOT-CG is K 1 = 12 and K 2 = 1 , while N = 5 L and SNR = 20 dB. As shown in Section 2, the decomposition parameter is chosen such that P < L 12 , relying on the low-rank feature of the impulse response. For all the values of P considered in Figure 6, the IWF-TOT-CG outperforms the WF-CG. Even the minimum value, P = 1 , produces a reasonable attenuation of the normalized misalignment, showing improved accuracy as compared to the benchmark algorithm. Moreover, we can notice that increasing the value of P beyond a certain point does not improve the overall performance, which also supports the low-rank approach.
Finally, the last experiment of this first set concerns the influence of different conditions on the performance of the proposed IWF-TOT-CG, as compared to the WF-CG. To this end, in Figure 7, several scenarios are considered, using different amounts of data for estimating the statistics and lower SNRs. The IWF-TOT-CG uses the same CG parameters as in the previous simulation, while P = 1 . This represents a very advantageous setup as compared to the WF-CG counterpart. While the benchmark algorithm involves a single filter of length L = 512 , the proposed version combines the estimates provided by three filters of lengths P L 11 L 2 , P L 12 L 2 , and L 2 2 , which have 32, 32, and 4 coefficients, respectively. Consequently, there is an important reduction in the parameter space, using only 68 coefficients instead of 512. As a result, due to the significantly smaller data structures used within the IWF-TOT-CG, the proposed algorithm is much more robust in harsh conditions than the WF-CG. This is supported in Figure 7, where the IWF-TOT-CG outperforms the WF-CG, especially when using a low amount of data for estimating the statistics (e.g., N = L in Figure 7b) and/or in noisy environments (e.g., SNR = 10 or 0 dB in Figure 7c,d, respectively).
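The parameter-space reduction discussed above can be verified with a short NumPy sketch (an illustrative sketch, not the authors' code; all variable names are ours). It counts the coefficients of the three component filters for P = 1 and rebuilds a full length-512 response from random components via Kronecker products, following the TOT structure.

```python
import numpy as np

# Dimensions from this experiment: L = L11 * L12 * L2 = 512, with P = 1.
L11, L12, L2, P = 16, 16, 2, 1
L = L11 * L12 * L2

# Total parameter space of the three component filters:
# L2^2 + P * L12 * L2 + P * L11 * L2 coefficients.
n_params = L2 * L2 + P * L12 * L2 + P * L11 * L2
print(n_params)  # 68, versus L = 512 for the conventional Wiener filter

# Random stand-ins for the component filters g2^(i), g12^(ij), g11^(ij).
rng = np.random.default_rng(0)
g2 = rng.standard_normal((L2, L2))
g12 = rng.standard_normal((L2, P, L12))
g11 = rng.standard_normal((L2, P, L11))

# Full-length impulse response rebuilt from the components:
# g_W = sum_i sum_j g2^(i) (x) g12^(ij) (x) g11^(ij), with (x) the Kronecker product.
g_W = sum(np.kron(np.kron(g2[i], g12[i, j]), g11[i, j])
          for i in range(L2) for j in range(P))
print(g_W.shape)  # (512,)
```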

4.2. IWF-TOT-CG versus IWF-TOT

The second set of experiments focuses on the comparison between the proposed IWF-TOT-CG and its counterpart recently developed in [18], namely IWF-TOT. This iterative Wiener filter is also based on TOT decomposition but uses direct matrix inversion to solve the associated sets of Wiener–Hopf equations. Therefore, in terms of their decomposition, both algorithms will use the same setup, i.e., L = L 11 L 12 L 2 , with L 11 = L 12 = 16 and L 2 = 2 . Besides, the IWF-TOT-CG involves its specific CG cycles, using the settings K 1 = 12 and K 2 = 1 (as in the previous set of simulations).
In Figure 8, the two TOT-based algorithms are compared for different values of the decomposition parameter P under favorable conditions, i.e., N = 5 L data samples (to estimate the statistics) and SNR = 20 dB. While for P = 1 the performances are very similar, the CG-based version achieves a better accuracy for larger values of P. This supports the advantage of line search methods (like CG) over the traditional matrix inversion approach, as also indicated in other previous works [33,34,35]. As shown in the experiment related to Figure 6, increasing the value of P beyond a certain point does not lead to a performance improvement, while it does increase the computational complexity. In fact, this trade-off relies on the low-rank approach. As indicated in [18], for network impulse responses, the rank of the corresponding matrices is much lower than L 12 , e.g., usually less than L 12 / 5 . In our scenario, this leads to a range for P between 1 and 3. This was also previously supported in Figure 6, where we can notice that the misalignment curves for P = 2 and P = 3 are very similar. Consequently, there is no reason to choose P beyond these values, since there is no performance gain, while paying in terms of computational complexity.
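The difference between the two solution strategies can be sketched on a generic symmetric positive-definite system of the same form as the Wiener–Hopf equations (a minimal illustration with synthetic data; the CG recursion follows the standard Hestenes–Stiefel updates [32], and all names are ours):

```python
import numpy as np

def cg_solve(R, r, g0, K):
    """Run K conjugate-gradient updates for R g = r (R symmetric positive definite)."""
    g = g0.astype(float).copy()
    z = r - R @ g                 # residual
    c = z.copy()                  # search direction
    gamma = z @ z
    for _ in range(K):
        if gamma < 1e-30:         # residual already negligible
            break
        q = R @ c
        alpha = gamma / (c @ q)
        g += alpha * c
        z -= alpha * q
        gamma_new = z @ z
        c = z + (gamma_new / gamma) * c
        gamma = gamma_new
    return g

rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, 5 * n))
R = A @ A.T / (5 * n)              # well-conditioned SPD "correlation" matrix
r = rng.standard_normal(n)

g_inv = np.linalg.solve(R, r)              # direct solution, as in the IWF-TOT [18]
g_cg = cg_solve(R, r, np.zeros(n), 2 * n)  # CG updates, as in the IWF-TOT-CG
print(np.allclose(g_inv, g_cg))            # True
```

In exact arithmetic, CG reaches the exact solution of an n-dimensional SPD system in at most n updates; within the IWF-TOT-CG, only a few updates ( K 1 or K 2 ) per outer iteration are needed, since each solve is warm-started from the previous estimate.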
The robustness of the line search methods in noisy environments is also an important feature to be considered. The experiment provided in Figure 9 illustrates this gain. As we can notice, even for P = 1 (which led to similar results in good SNR conditions), the IWF-TOT-CG performs better than the previous IWF-TOT [18] in low SNR environments.

4.3. IWF-TOT-CG versus IWF-NKP-CG

The last set of experiments aims to evaluate the Wiener filters based on the CG method in conjunction with the decomposition-based approaches. The proposed IWF-TOT-CG is compared with a recently designed version of the iterative Wiener filter [33]. This algorithm (namely, IWF-NKP-CG) also uses the CG method to solve the associated Wiener–Hopf equations, but it relies on the second-order NKP decomposition, thus following the initial approach from [8]. In this case, the length of the filter is factorized as L = L 1 * L 2 * , while the low-rank approach relies on the decomposition parameter P * < L 2 * . In our scenario, which considers an impulse response of length L = 512 , this decomposition is performed using L 1 * = 32 and L 2 * = 16 . The IWF-NKP-CG involves two sets of Wiener–Hopf equations, which correspond to two component Wiener filters of lengths P * L 1 * and P * L 2 * . Their solutions are obtained with the CG method, using a maximum number of updates (denoted by K * ). Since, in this setup, L 2 * = L 12 , it is reasonable to use K * = K 1 .
In Figure 10, the required statistics are obtained by averaging across N = 5 L available data samples, while SNR = 20 dB. These represent reasonably good conditions, which favor reliable estimates. Nevertheless, the IWF-NKP-CG [33] is outperformed by the proposed IWF-TOT-CG for different values of their decomposition parameters. This performance gain results from using the TOT decomposition instead of the second-order NKP-based approach, which further supports the initial findings from [18]. For this experiment, the values of P and P * were selected according to the low-rank approach. The considerations behind the values of P (for the IWF-TOT-CG) were previously discussed in relation to Figure 6 and Figure 8; similar aspects apply when choosing the values of P * (for the IWF-NKP-CG). As recently indicated in [33] and previously supported in [8], for network impulse responses, the rank of the L 1 * × L 2 * matrix (where L 1 * L 2 * = L ) associated with the reshaped impulse response is much lower than L 2 * , e.g., usually less than L 2 * / 6 . In this context, it is reasonable to consider P * < 3 in Figure 10, in order to properly address the trade-off between performance and complexity. Otherwise, larger values of P * would only increase the computational cost, without a corresponding performance gain.
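The low-rank NKP reasoning can be illustrated with a small sketch (synthetic data standing in for a measured network echo path; the construction follows the nearest Kronecker product result [8,37], and the names are ours): reshaping the length-L impulse response into an L 1 * × L 2 * matrix and truncating its SVD at rank P * yields the rank- P * Kronecker approximation.

```python
import numpy as np

L1, L2s, P = 32, 16, 2             # L = L1 * L2s = 512; decomposition parameter P*
rng = np.random.default_rng(2)

# Synthetic impulse response that is exactly rank 2 after reshaping
# (a hypothetical stand-in for a low-rank network echo path).
u = rng.standard_normal((2, L1))
v = rng.standard_normal((2, L2s))
h = sum(np.kron(v[k], u[k]) for k in range(2))     # length 512

# Reshape so that column i holds samples i*L1 .. (i+1)*L1 - 1,
# then take the rank-P truncated SVD.
H = h.reshape(L2s, L1).T                           # L1 x L2s matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)
h_hat = sum(s[k] * np.kron(Vt[k], U[:, k]) for k in range(P))

print(np.allclose(h, h_hat))  # True: rank 2 is fully captured with P* = 2
```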
Finally, the IWF-NKP-CG [33] and the proposed IWF-TOT-CG (using P * = P = 1 ) are compared in more challenging conditions, when using smaller amounts of data for estimating the required statistics, i.e., N = 2 L and N = L . As we can notice in Figure 11, the performance gain of the IWF-TOT-CG is more apparent in these cases, which shows the robustness of the TOT-based decomposition in conjunction with the CG method.

5. Conclusions and Future Works

In this paper, we have developed an iterative version of the Wiener filter using the CG method and exploiting a tensorial decomposition of the impulse response. The resulting IWF-TOT-CG combines the solutions of three sets of Wiener–Hopf equations, each obtained with CG updates, thus avoiding matrix inversion operations. An important gain is related to the dimensionality reduction in a long-length system identification problem, which can be reformulated using a reduced set of coefficients corresponding to three (much) shorter filters. This approach fits very well with the identification of low-rank impulse responses, as in echo cancellation.
In terms of its performance, the proposed IWF-TOT-CG outperforms the conventional Wiener filter, especially in challenging scenarios, such as when only a small amount of data is available (to estimate the statistics) or when working in low SNR environments. While the accuracy of the conventional Wiener filter is highly affected in such conditions, the proposed version (which operates with smaller data structures) remains robust and provides reliable solutions. Moreover, the IWF-TOT-CG performs better than the previously developed IWF-TOT [18], which involves matrix inversion operations to solve the Wiener–Hopf equations. Also, the proposed algorithm provides improved performance as compared to its counterpart based on the second-order NKP decomposition, i.e., the IWF-NKP-CG [33].
Future works will focus on three main directions. First, we can exploit other line search methods to solve the Wiener–Hopf equations, like those based on the coordinate descent technique [34,35]. A comparison between these methods is beyond the scope of this paper; however, this could open the path toward using inexact line search methods in conjunction with decomposition-based algorithms. Among them, we can mention the dichotomous coordinate descent technique [40,41,42,43,44], which is very appealing in terms of computational efficiency. Second, another direction for future works targets the extension to higher-order tensors, which could lead to improved decomposition-based solutions and higher dimensionality reduction. Third, it would be highly useful to extend the decomposition-based approach and the tensorial framework to other potential solutions used for system identification problems, like the Kalman filter and different adaptive filtering algorithms. These developments could be further used in real-world applications, like echo cancellation, active noise control, and interference reduction.

Author Contributions

Conceptualization, J.B.; methodology, C.P.; software, C.-L.S.; validation, R.-L.C.; formal analysis, L.-M.D.; investigation, S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS–UEFISCDI, project number PN-III-P4-PCE-2021-0438, within PNCDI III, and by a grant from the National Program for Research of the National Association of Technical Universities—GNAC ARUT 2023 (no. 36/09.10.2023, code 133).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
  2. Haykin, S. Adaptive Filter Theory, 4th ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  3. Sayed, A.H. Adaptive Filters; Wiley: New York, NY, USA, 2008. [Google Scholar]
  4. Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementation, 4th ed.; Springer: New York, NY, USA, 2013. [Google Scholar]
  5. Golub, G.H.; Loan, C.F.V. Matrix Computations, 3rd ed.; The Johns Hopkins University Press: Baltimore, MD, USA, 1996. [Google Scholar]
  6. Hansen, P.C. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion; SIAM: Philadelphia, PA, USA, 1998. [Google Scholar]
  7. Hänsler, E.; Schmidt, G. Acoustic Echo and Noise Control–A Practical Approach; Wiley: Hoboken, NJ, USA, 2004. [Google Scholar]
  8. Paleologu, C.; Benesty, J.; Ciochină, S. Linear system identification based on a Kronecker product decomposition. IEEE/ACM Trans. Audio Speech Lang. Process. 2018, 26, 1793–1808. [Google Scholar] [CrossRef]
  9. Bhattacharjee, S.S.; George, N.V. Nearest Kronecker product decomposition based normalized least mean square algorithm. In Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 476–480. [Google Scholar]
  10. Bhattacharjee, S.S.; Kumar, K.; George, N.V. Nearest Kronecker product decomposition based generalized maximum correntropy and generalized hyperbolic secant robust adaptive filters. IEEE Signal Process. Lett. 2020, 27, 1525–1529. [Google Scholar] [CrossRef]
  11. Yang, W.; Huang, G.; Chen, J.; Benesty, J.; Cohen, I.; Kellermann, W. Robust dereverberation with Kronecker product based multichannel linear prediction. IEEE Signal Process. Lett. 2021, 28, 101–105. [Google Scholar] [CrossRef]
  12. Bhattacharjee, S.S.; George, N.V. Fast and efficient acoustic feedback cancellation based on low rank approximation. Signal Process. 2021, 182, 107984. [Google Scholar] [CrossRef]
  13. Bhattacharjee, S.S.; George, N.V. Nearest Kronecker product decomposition based linear-in-the-parameters nonlinear filters. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 2111–2122. [Google Scholar] [CrossRef]
  14. Wang, X.; Benesty, J.; Chen, J.; Huang, G.; Cohen, I. Beamforming with cube microphone arrays via Kronecker product decompositions. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 1774–1784. [Google Scholar] [CrossRef]
  15. Huang, G.; Benesty, J.; Cohen, I.; Chen, J. Kronecker product multichannel linear filtering for adaptive weighted prediction error-based speech dereverberation. IEEE/ACM Trans. Audio Speech Lang. Process. 2022, 30, 1277–1289. [Google Scholar] [CrossRef]
  16. Vadhvana, S.; Yadav, S.K.; Bhattacharjee, S.S.; George, N.V. An improved constrained LMS algorithm for fast adaptive beamforming based on a low rank approximation. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 3605–3609. [Google Scholar] [CrossRef]
  17. Bhattacharjee, S.S.; Patel, V.; George, N.V. Nonlinear spline adaptive filters based on a low rank approximation. Signal Process. 2022, 201, 108726. [Google Scholar] [CrossRef]
  18. Benesty, J.; Paleologu, C.; Ciochină, S. Linear system identification based on a third-order tensor decomposition. IEEE Signal Process. Lett. 2023, 30, 503–507. [Google Scholar] [CrossRef]
  19. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  20. Comon, P. Tensors: A brief introduction. IEEE Signal Process. Mag. 2014, 31, 44–53. [Google Scholar] [CrossRef]
  21. Vervliet, N.; Debals, O.; Sorber, L.; Lathauwer, L.D. Breaking the curse of dimensionality using decompositions of incomplete tensors: Tensor-based scientific computing in big data analysis. IEEE Signal Process. Mag. 2014, 31, 71–79. [Google Scholar] [CrossRef]
  22. Friedland, S.; Tammali, V. Low-rank approximation of tensors. In Numerical Algebra, Matrix Theory, Differential-Algebraic Equations and Control Theory; Benner, P., Bollhöfer, M., Kressner, D., Mehl, C., Stykel, T., Eds.; Springer: Cham, Switzerland, 2015; pp. 377–411. [Google Scholar]
  23. Cichocki, A.; Mandic, D.P.; Phan, A.; Caiafa, C.F.; Zhou, G.; Zhao, Q.; Lathauwer, L.D. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Process. Mag. 2015, 32, 145–163. [Google Scholar] [CrossRef]
  24. Becker, H.; Albera, L.; Comon, P.; Gribonval, R.; Wendling, F.; Merlet, I. Brain-source imaging: From sparse to tensor models. IEEE Signal Process. Mag. 2015, 32, 100–112. [Google Scholar] [CrossRef]
  25. Bousse, M.; Debals, O.; Lathauwer, L.D. A tensor-based method for large-scale blind source separation using segmentation. IEEE Trans. Signal Process. 2017, 65, 346–358. [Google Scholar] [CrossRef]
  26. Sidiropoulos, N.; Lathauwer, L.D.; Fu, X.; Huang, K.; Papalexakis, E.; Faloutsos, C. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582. [Google Scholar] [CrossRef]
  27. Chang, S.Y.; Wei, Y. Generalized T-product tensor Bernstein bounds. Ann. Appl. Math. 2022, 38, 25–61. [Google Scholar]
  28. Bozorgmanesh, H.; Hajarian, M.; Chronopoulos, A.T. The relation between a tensor and its associated semi-symmetric form. Numer. Math. Theory Methods Appl. 2022, 15, 530–564. [Google Scholar] [CrossRef]
  29. Zheng, L.; Yang, L.; Liang, Y. A conjugate gradient projection method for solving equations with convex constraints. J. Comput. Appl. Math. 2020, 375, 112781. [Google Scholar] [CrossRef]
  30. Damale, P.U.; Chong, E.K.P.; Scharf, L.L. Wiener filtering without covariance matrix inversion. In Proceedings of the ICASSP 2023–2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023. 5p. [Google Scholar]
  31. Damale, P.U.; Chong, E.K.P.; Scharf, L.L. Wiener filter approximations without covariance matrix inversion. IEEE Open J. Signal Process. 2023, 4, 366–374. [Google Scholar] [CrossRef]
  32. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 1952, 49, 409–436. [Google Scholar] [CrossRef]
  33. Stanciu, C.L.; Benesty, J.; Paleologu, C.; Costea, R.L.; Dogariu, L.M.; Ciochină, S. Decomposition-based Wiener filter using the Kronecker product and conjugate gradient method. IEEE/ACM Trans. Audio Speech Lang. Process. 2024, 32, 124–138. [Google Scholar] [CrossRef]
  34. Zakharov, Y.V.; Albu, F. Coordinate descent iterations in fast affine projection algorithm. IEEE Signal Process. Lett. 2005, 12, 353–356. [Google Scholar] [CrossRef]
  35. Zakharov, Y.V.; White, G.P.; Liu, J. Low-complexity RLS algorithms using dichotomous coordinate descent iterations. IEEE Trans. Signal Process. 2008, 56, 3150–3161. [Google Scholar] [CrossRef]
  36. Digital Network Echo Cancellers. ITU-T Recommendation G.168. 2012. Available online: www.itu.int/rec/T-REC-G.168 (accessed on 11 March 2024).
  37. Loan, C.F.V. The ubiquitous Kronecker product. J. Comput. Appl. Math. 2000, 123, 85–100. [Google Scholar] [CrossRef]
  38. Bertsekas, D.P. Nonlinear Programming, 2nd ed.; Athena Scientific: Belmont, MA, USA, 1999. [Google Scholar]
  39. Rupp, M.; Schwarz, S. A tensor LMS algorithm. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, Australia, 19–24 April 2015; pp. 3347–3351. [Google Scholar]
  40. Zakharov, Y.V.; Nascimento, V.H. DCD-RLS adaptive filters with penalties for sparse identification. IEEE Trans. Signal Process. 2013, 61, 3198–3213. [Google Scholar] [CrossRef]
  41. Kim, G.; Lee, H.; Chung, J.; Lee, J. A delay relaxed RLS-DCD algorithm for real-time implementation. IEEE Trans. Circuits Syst. II Express Briefs 2018, 65, 61–65. [Google Scholar] [CrossRef]
  42. Zhang, Y.; Wu, T.; Zakharov, Y.V.; Li, J. MMP-DCD-CV based sparse channel estimation algorithm for underwater acoustic transform domain communication system. Appl. Acoust. 2019, 154, 43–52. [Google Scholar] [CrossRef]
  43. Yu, Y.; Lu, L.; Zheng, Z.; Wang, W.; Zakharov, Y.V.; de Lamare, R.C. DCD-based recursive adaptive algorithms robust against impulsive noise. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 1359–1363. [Google Scholar] [CrossRef]
  44. Liao, M.; Zakharov, Y.V. DCD-based joint sparse channel estimation for OFDM in virtual angular domain. IEEE Access 2021, 9, 102081–102090. [Google Scholar] [CrossRef]
Figure 1. The reference signal obtained in a SISO scenario.
Figure 2. Normalized misalignment of the conventional WF and WF-CG using different amounts of data to estimate R x and r x d . These statistics are obtained by averaging across N = M L data samples (with L = 512 and different values of M), while SNR = 20 dB.
Figure 3. Normalized misalignment of the conventional WF and WF-CG for different SNRs. The estimates of the statistics R x and r x d are obtained by averaging across N = 5 L data samples, where L = 512 .
Figure 4. Normalized misalignment of the WF-CG (after K = 50 CG updates) and IWF-TOT-CG with P = 2 , K 2 = 1 , and using different values of K 1 . The required statistics are obtained by averaging across N = 5 L data samples, L = 512 , and SNR = 20 dB.
Figure 5. Normalized misalignment of the WF-CG (after K = 50 CG updates) and IWF-TOT-CG with P = 2 , K 1 = 10 , and using different values of K 2 . The required statistics are obtained by averaging across N = 5 L data samples, L = 512 , and SNR = 20 dB.
Figure 6. Normalized misalignment of the WF-CG (after K = 50 CG updates) and IWF-TOT-CG with K 1 = 12 , K 2 = 1 , and using different values of P. The required statistics are obtained by averaging across N = 5 L data samples, L = 512 , and SNR = 20 dB.
Figure 7. Normalized misalignment of the WF-CG (after K = 50 CG updates) and IWF-TOT-CG with K 1 = 12 , K 2 = 1 , P = 1 , and using different amounts of data ( N = M L , with L = 512 ) to estimate the required statistics, under different SNR conditions. (a) M = 2 and SNR = 20 dB; (b) M = 1 and SNR = 20 dB; (c) M = 2 and SNR = 10 dB and (d) M = 5 and SNR = 0 dB.
Figure 8. Normalized misalignment of the IWF-TOT [18] and IWF-TOT-CG (with K 1 = 12 and K 2 = 1 ) using different values of P. The required statistics are obtained by averaging across N = 5 L data samples, L = 512 , and SNR = 20 dB.
Figure 9. Normalized misalignment of the IWF-TOT [18] and IWF-TOT-CG (with K 1 = 12 and K 2 = 1 ) using P = 1 . The required statistics are obtained by averaging across N = 5 L data samples (with L = 512 ), while (a) SNR = 10 dB and (b) SNR = 0 dB.
Figure 10. Normalized misalignment of the IWF-NKP-CG [33] (with K * = 12 ) and IWF-TOT-CG (with K 1 = 12 and K 2 = 1 ) using different values of P * and P, respectively. The required statistics are obtained by averaging across N = 5 L data samples, L = 512 , and SNR = 20 dB.
Figure 11. Normalized misalignment of the IWF-NKP-CG [33] (with K * = 12 ) and IWF-TOT-CG (with K 1 = 12 and K 2 = 1 ) using P * = P = 1 , while SNR = 20 dB. The required statistics are obtained by averaging across N = M L data samples (with L = 512 ), where (a) M = 2 and (b) M = 1 .
Table 1. Specific Data Structures and Notation Related to Wiener–Hopf Equations with TOT Decomposition.

Data structures and notation from (16):
$\underline{\bar{\mathbf{G}}}_{12,11} = \big[\, \bar{\mathbf{G}}_{12,11}^{(1)} \;\; \bar{\mathbf{G}}_{12,11}^{(2)} \;\; \cdots \;\; \bar{\mathbf{G}}_{12,11}^{(L_2)} \,\big]$, where $\bar{\mathbf{G}}_{12,11}^{(i)} = \sum_{j=1}^{P} \mathbf{G}_{12,11}^{(ij)}$, $i = 1, 2, \ldots, L_2$,
with $\mathbf{G}_{12,11}^{(ij)} = \mathbf{I}_{L_2} \otimes \mathbf{g}_{12,\mathrm{W}}^{(ij)} \otimes \mathbf{g}_{11,\mathrm{W}}^{(ij)}$, $i = 1, 2, \ldots, L_2$, $j = 1, 2, \ldots, P$;
$\underline{\mathbf{g}}_{2,\mathrm{W}} = \big[\, \mathbf{g}_{2,\mathrm{W}}^{(1)T} \;\; \mathbf{g}_{2,\mathrm{W}}^{(2)T} \;\; \cdots \;\; \mathbf{g}_{2,\mathrm{W}}^{(L_2)T} \,\big]^{T}$.

Data structures and notation from (17):
$\underline{\bar{\mathbf{G}}}_{2,11} = \big[\, \bar{\mathbf{G}}_{2,11}^{(1)} \;\; \bar{\mathbf{G}}_{2,11}^{(2)} \;\; \cdots \;\; \bar{\mathbf{G}}_{2,11}^{(L_2)} \,\big]$, where $\bar{\mathbf{G}}_{2,11}^{(i)} = \big[\, \mathbf{G}_{2,11}^{(i1)} \;\; \mathbf{G}_{2,11}^{(i2)} \;\; \cdots \;\; \mathbf{G}_{2,11}^{(iP)} \,\big]$, $i = 1, 2, \ldots, L_2$,
with $\mathbf{G}_{2,11}^{(ij)} = \mathbf{g}_{2,\mathrm{W}}^{(i)} \otimes \mathbf{I}_{L_{12}} \otimes \mathbf{g}_{11,\mathrm{W}}^{(ij)}$, $i = 1, 2, \ldots, L_2$, $j = 1, 2, \ldots, P$;
$\underline{\bar{\mathbf{g}}}_{12,\mathrm{W}} = \big[\, \bar{\mathbf{g}}_{12,\mathrm{W}}^{(1)T} \;\; \bar{\mathbf{g}}_{12,\mathrm{W}}^{(2)T} \;\; \cdots \;\; \bar{\mathbf{g}}_{12,\mathrm{W}}^{(L_2)T} \,\big]^{T}$, where $\bar{\mathbf{g}}_{12,\mathrm{W}}^{(i)} = \big[\, \mathbf{g}_{12,\mathrm{W}}^{(i1)T} \;\; \mathbf{g}_{12,\mathrm{W}}^{(i2)T} \;\; \cdots \;\; \mathbf{g}_{12,\mathrm{W}}^{(iP)T} \,\big]^{T}$, $i = 1, 2, \ldots, L_2$.

Data structures and notation from (18):
$\underline{\bar{\mathbf{G}}}_{2,12} = \big[\, \bar{\mathbf{G}}_{2,12}^{(1)} \;\; \bar{\mathbf{G}}_{2,12}^{(2)} \;\; \cdots \;\; \bar{\mathbf{G}}_{2,12}^{(L_2)} \,\big]$, where $\bar{\mathbf{G}}_{2,12}^{(i)} = \big[\, \mathbf{G}_{2,12}^{(i1)} \;\; \mathbf{G}_{2,12}^{(i2)} \;\; \cdots \;\; \mathbf{G}_{2,12}^{(iP)} \,\big]$, $i = 1, 2, \ldots, L_2$,
with $\mathbf{G}_{2,12}^{(ij)} = \mathbf{g}_{2,\mathrm{W}}^{(i)} \otimes \mathbf{g}_{12,\mathrm{W}}^{(ij)} \otimes \mathbf{I}_{L_{11}}$, $i = 1, 2, \ldots, L_2$, $j = 1, 2, \ldots, P$;
$\underline{\bar{\mathbf{g}}}_{11,\mathrm{W}} = \big[\, \bar{\mathbf{g}}_{11,\mathrm{W}}^{(1)T} \;\; \bar{\mathbf{g}}_{11,\mathrm{W}}^{(2)T} \;\; \cdots \;\; \bar{\mathbf{g}}_{11,\mathrm{W}}^{(L_2)T} \,\big]^{T}$, where $\bar{\mathbf{g}}_{11,\mathrm{W}}^{(i)} = \big[\, \mathbf{g}_{11,\mathrm{W}}^{(i1)T} \;\; \mathbf{g}_{11,\mathrm{W}}^{(i2)T} \;\; \cdots \;\; \mathbf{g}_{11,\mathrm{W}}^{(iP)T} \,\big]^{T}$, $i = 1, 2, \ldots, L_2$.

Here, $\otimes$ denotes the Kronecker product.
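As a numerical sanity check of the combination-matrix structure in Table 1 (an illustrative sketch with small sizes, not the paper's setup; the Kronecker products are implemented with np.kron), multiplying the combination matrix from (16) by the stacked g 2 coefficients must reproduce the full TOT filter:

```python
import numpy as np

L11, L12, L2, P = 4, 4, 2, 2       # small illustrative sizes (not the paper's 16/16/2)
L = L11 * L12 * L2
rng = np.random.default_rng(3)

g2 = rng.standard_normal((L2, L2))       # g2^(i), i = 1..L2
g12 = rng.standard_normal((L2, P, L12))  # g12^(ij)
g11 = rng.standard_normal((L2, P, L11))  # g11^(ij)

# G12,11^(ij) = I_{L2} (x) g12^(ij) (x) g11^(ij); sum over j, then stack over i.
blocks = [sum(np.kron(np.eye(L2), np.kron(g12[i, j], g11[i, j]).reshape(-1, 1))
              for j in range(P))
          for i in range(L2)]
G_bar = np.hstack(blocks)                # L x L2^2 combination matrix
g2_vec = g2.reshape(-1)                  # stacked g2^(i) coefficients

# The product must equal the full filter sum_i sum_j g2^(i) (x) g12^(ij) (x) g11^(ij).
h_direct = sum(np.kron(np.kron(g2[i], g12[i, j]), g11[i, j])
               for i in range(L2) for j in range(P))
print(np.allclose(G_bar @ g2_vec, h_direct))  # True
```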
Table 2. Iterative Wiener Filter Based on a Third-Order Tensor Decomposition and Using the CG Method (IWF-TOT-CG).

Data: $\mathbf{R}_x$, $\mathbf{r}_{xd}$ (estimated statistics based on $N$ data samples); $L = L_{11} L_{12} L_2$, with $L_{12} \le L_{11}$, $L_2 \ll L_{11} L_{12}$, and $P < L_{12}$.

Initialization (with $0 < \epsilon \ll 1$):
$\underline{\mathbf{g}}_{2,\mathrm{W}}^{(K_2)}(0) = \big[\, \epsilon \;\; \mathbf{0}_{L_2^2 - 1}^T \,\big]^T$, $\underline{\bar{\mathbf{g}}}_{12,\mathrm{W}}^{(K_1)}(0) = \big[\, \epsilon \;\; \mathbf{0}_{P L_{12} L_2 - 1}^T \,\big]^T$, $\underline{\bar{\mathbf{g}}}_{11,\mathrm{W}}^{(K_1)}(0) = \big[\, \epsilon \;\; \mathbf{0}_{P L_{11} L_2 - 1}^T \,\big]^T$;
for $i = 1, 2, \ldots, L_2$ and $j = 1, 2, \ldots, P$:
$\mathbf{M}_{12,11}^{(ij)}(0) = \mathbf{I}_{L_2} \otimes \big[\, \epsilon \;\; \mathbf{0}_{L_{12}-1}^T \,\big]^T \otimes \big[\, \epsilon \;\; \mathbf{0}_{L_{11}-1}^T \,\big]^T$, $\mathbf{M}_{11}^{(ij)}(0) = \mathbf{I}_{L_{12}} \otimes \big[\, \epsilon \;\; \mathbf{0}_{L_{11}-1}^T \,\big]^T$.

For $n = 1, 2, \ldots$:
1. $\mathbf{G}_{12,11}^{(ij)}(n) = \mathbf{M}_{12,11}^{(ij)}(n-1)$; $\bar{\mathbf{G}}_{12,11}^{(i)}(n) = \sum_{j=1}^{P} \mathbf{G}_{12,11}^{(ij)}(n)$; $\underline{\bar{\mathbf{G}}}_{12,11}(n) = \big[\, \bar{\mathbf{G}}_{12,11}^{(1)}(n) \;\; \cdots \;\; \bar{\mathbf{G}}_{12,11}^{(L_2)}(n) \,\big]$.
2. Set $\underline{\mathbf{g}}_{2,\mathrm{W}}^{(0)}(n) = \underline{\mathbf{g}}_{2,\mathrm{W}}^{(K_2)}(n-1)$ and solve (16) with $K_2$ CG updates (see Table 3) to obtain $\underline{\mathbf{g}}_{2,\mathrm{W}}^{(K_2)}(n)$; extract $\mathbf{g}_{2,\mathrm{W}}^{(K_2)(i)}(n)$, $i = 1, 2, \ldots, L_2$, via (41).
3. $\mathbf{G}_{2,11}^{(ij)}(n) = \mathbf{g}_{2,\mathrm{W}}^{(K_2)(i)}(n) \otimes \mathbf{M}_{11}^{(ij)}(n-1)$; $\bar{\mathbf{G}}_{2,11}^{(i)}(n) = \big[\, \mathbf{G}_{2,11}^{(i1)}(n) \;\; \cdots \;\; \mathbf{G}_{2,11}^{(iP)}(n) \,\big]$; $\underline{\bar{\mathbf{G}}}_{2,11}(n) = \big[\, \bar{\mathbf{G}}_{2,11}^{(1)}(n) \;\; \cdots \;\; \bar{\mathbf{G}}_{2,11}^{(L_2)}(n) \,\big]$.
4. Set $\underline{\bar{\mathbf{g}}}_{12,\mathrm{W}}^{(0)}(n) = \underline{\bar{\mathbf{g}}}_{12,\mathrm{W}}^{(K_1)}(n-1)$ and solve (17) with $K_1$ CG updates (see Table 3); extract $\mathbf{g}_{12,\mathrm{W}}^{(K_1)(ij)}(n)$ via (58) and (59).
5. $\mathbf{G}_{2,12}^{(ij)}(n) = \mathbf{g}_{2,\mathrm{W}}^{(K_2)(i)}(n) \otimes \mathbf{g}_{12,\mathrm{W}}^{(K_1)(ij)}(n) \otimes \mathbf{I}_{L_{11}}$; $\bar{\mathbf{G}}_{2,12}^{(i)}(n) = \big[\, \mathbf{G}_{2,12}^{(i1)}(n) \;\; \cdots \;\; \mathbf{G}_{2,12}^{(iP)}(n) \,\big]$; $\underline{\bar{\mathbf{G}}}_{2,12}(n) = \big[\, \bar{\mathbf{G}}_{2,12}^{(1)}(n) \;\; \cdots \;\; \bar{\mathbf{G}}_{2,12}^{(L_2)}(n) \,\big]$.
6. Set $\underline{\bar{\mathbf{g}}}_{11,\mathrm{W}}^{(0)}(n) = \underline{\bar{\mathbf{g}}}_{11,\mathrm{W}}^{(K_1)}(n-1)$ and solve (18) with $K_1$ CG updates (see Table 3); extract $\mathbf{g}_{11,\mathrm{W}}^{(K_1)(ij)}(n)$ via (76) and (77).
7. $\mathbf{M}_{12,11}^{(ij)}(n) = \mathbf{I}_{L_2} \otimes \mathbf{g}_{12,\mathrm{W}}^{(K_1)(ij)}(n) \otimes \mathbf{g}_{11,\mathrm{W}}^{(K_1)(ij)}(n)$; $\mathbf{M}_{11}^{(ij)}(n) = \mathbf{I}_{L_{12}} \otimes \mathbf{g}_{11,\mathrm{W}}^{(K_1)(ij)}(n)$.
8. $\mathbf{g}_{\mathrm{W}}(n) = \sum_{i=1}^{L_2} \sum_{j=1}^{P} \mathbf{g}_{2,\mathrm{W}}^{(K_2)(i)}(n) \otimes \mathbf{g}_{12,\mathrm{W}}^{(K_1)(ij)}(n) \otimes \mathbf{g}_{11,\mathrm{W}}^{(K_1)(ij)}(n)$.

(In steps 1, 3, 5, and 7, the indices run over $i = 1, 2, \ldots, L_2$ and $j = 1, 2, \ldots, P$.)
Table 3. CG Solutions of the Wiener–Hopf Equations within IWF-TOT-CG.

Solution of (16):
$\mathbf{R}_2(n) = \underline{\bar{\mathbf{G}}}_{12,11}^{T}(n)\, \mathbf{R}_x\, \underline{\bar{\mathbf{G}}}_{12,11}(n)$, $\mathbf{r}_2(n) = \underline{\bar{\mathbf{G}}}_{12,11}^{T}(n)\, \mathbf{r}_{xd}$;
$\underline{\mathbf{g}}_{2,\mathrm{W}}^{(0)}(n) = \underline{\mathbf{g}}_{2,\mathrm{W}}^{(K_2)}(n-1)$, $\mathbf{z}_2^{(0)}(n) = \mathbf{r}_2(n) - \mathbf{R}_2(n)\, \underline{\mathbf{g}}_{2,\mathrm{W}}^{(0)}(n)$, $\mathbf{c}_2^{(0)}(n) = \mathbf{z}_2^{(0)}(n)$, $\gamma_2^{(0)}(n) = \mathbf{z}_2^{(0)T}(n)\, \mathbf{z}_2^{(0)}(n)$.
For $k_2 = 1, 2, \ldots, K_2$:
$\mathbf{q}_2^{(k_2)}(n) = \mathbf{R}_2(n)\, \mathbf{c}_2^{(k_2-1)}(n)$, $\alpha_2^{(k_2)}(n) = \dfrac{\gamma_2^{(k_2-1)}(n)}{\mathbf{c}_2^{(k_2-1)T}(n)\, \mathbf{q}_2^{(k_2)}(n)}$,
$\underline{\mathbf{g}}_{2,\mathrm{W}}^{(k_2)}(n) = \underline{\mathbf{g}}_{2,\mathrm{W}}^{(k_2-1)}(n) + \alpha_2^{(k_2)}(n)\, \mathbf{c}_2^{(k_2-1)}(n)$,
$\mathbf{z}_2^{(k_2)}(n) = \mathbf{z}_2^{(k_2-1)}(n) - \alpha_2^{(k_2)}(n)\, \mathbf{q}_2^{(k_2)}(n)$,
$\gamma_2^{(k_2)}(n) = \mathbf{z}_2^{(k_2)T}(n)\, \mathbf{z}_2^{(k_2)}(n)$, $\beta_2^{(k_2)}(n) = \dfrac{\gamma_2^{(k_2)}(n)}{\gamma_2^{(k_2-1)}(n)}$,
$\mathbf{c}_2^{(k_2)}(n) = \mathbf{z}_2^{(k_2)}(n) + \beta_2^{(k_2)}(n)\, \mathbf{c}_2^{(k_2-1)}(n)$.

Solution of (17): the same recursion (with subscript 12 replacing subscript 2 and $K_1$ iterations over $k_1$), using $\mathbf{R}_{12}(n) = \underline{\bar{\mathbf{G}}}_{2,11}^{T}(n)\, \mathbf{R}_x\, \underline{\bar{\mathbf{G}}}_{2,11}(n)$, $\mathbf{r}_{12}(n) = \underline{\bar{\mathbf{G}}}_{2,11}^{T}(n)\, \mathbf{r}_{xd}$, and the initial value $\underline{\bar{\mathbf{g}}}_{12,\mathrm{W}}^{(0)}(n) = \underline{\bar{\mathbf{g}}}_{12,\mathrm{W}}^{(K_1)}(n-1)$.

Solution of (18): the same recursion (with subscript 11 replacing subscript 2 and $K_1$ iterations over $k_1$), using $\mathbf{R}_{11}(n) = \underline{\bar{\mathbf{G}}}_{2,12}^{T}(n)\, \mathbf{R}_x\, \underline{\bar{\mathbf{G}}}_{2,12}(n)$, $\mathbf{r}_{11}(n) = \underline{\bar{\mathbf{G}}}_{2,12}^{T}(n)\, \mathbf{r}_{xd}$, and the initial value $\underline{\bar{\mathbf{g}}}_{11,\mathrm{W}}^{(0)}(n) = \underline{\bar{\mathbf{g}}}_{11,\mathrm{W}}^{(K_1)}(n-1)$.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
