2.1.3. IFFTc

The corrected interpolated FFT algorithm, presented in [16–18], is based on an IFFT-2p, but includes further processing to correct the effects of the harmonic interference between spectral components. Concerning the multi-tone signal of (1), it has been shown that the DFT value closest to the peak of the *i*-th spectral component can be written as $X(k_i) = V_i\, S\, W(-\delta_i) + F_i$, where $V_i = \frac{A_i}{2j} e^{j\phi_i}$, $S = \sum_{n=0}^{N-1} w(n)$, and the contribution of the harmonic interference of the other components on the *i*-th one is taken into account by the term $F_i$. Similar considerations can be made for the second strongest bin: $X(k_i + \varepsilon_i) = V_i\, S\, W(\varepsilon_i - \delta_i) + B_i$, where $\varepsilon_i = \pm 1$ selects the stronger of the two bins adjacent to $k_i$.

The ratio $\alpha_i$ then becomes $\alpha_i = \frac{|W(\varepsilon_i - \delta_i)|}{|W(-\delta_i)|} = \frac{|X(k_i + \varepsilon_i) - B_i|}{|X(k_i) - F_i|}$. The correction factors $F_i$ and $B_i$ depend on the frequency, amplitude, and phase of the signal tones. The proposed solution uses the frequency, amplitude, and phase values measured with a preliminary two-point IFFT to evaluate the correction factors (IFFTc). In the presence of a low-frequency tone, the contribution of the frequency image can be corrected with the same relationships [20]. This step could be iterated further, but without any significant improvement in terms of estimation error reduction.
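The idea behind the IFFTc procedure can be illustrated with a simplified NumPy sketch. It assumes a rectangular window, for which the two-point interpolation reduces to $\delta = \varepsilon\,\alpha/(1+\alpha)$, and it replaces the exact $F_i$/$B_i$ correction relations of [16–18] with a cruder interference correction: the tone estimated in the preliminary pass is fitted and subtracted in the time domain before re-estimating the second tone. All numeric values (sampling rate, record length, tone frequencies, amplitudes) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ipfft_2p(X, k):
    """Two-point interpolated FFT, rectangular window.
    Returns the fractional bin offset delta, signed toward the
    stronger of the two bins adjacent to k."""
    eps = 1 if abs(X[k + 1]) >= abs(X[k - 1]) else -1   # eps_i = +/-1
    alpha = abs(X[k + eps]) / abs(X[k])
    return eps * alpha / (1.0 + alpha)   # valid for the Dirichlet kernel

# --- demo: two interfering tones (hypothetical parameters) ---
N, fs = 256, 1000.0
n = np.arange(N)
f1, f2 = 50.3 * fs / N, 60.7 * fs / N     # non-coherently sampled tones
x = np.sin(2 * np.pi * f1 * n / fs) + 0.5 * np.sin(2 * np.pi * f2 * n / fs)
X = np.fft.fft(x)

# preliminary two-point estimate of the strongest tone
k1 = 50                                    # strongest bin of tone 1
f1_hat = (k1 + ipfft_2p(X, k1)) * fs / N

# interference correction (simplified): fit amplitude/phase of the
# estimated tone by least squares, subtract it, re-estimate tone 2
B = np.column_stack([np.sin(2 * np.pi * f1_hat * n / fs),
                     np.cos(2 * np.pi * f1_hat * n / fs)])
coef, *_ = np.linalg.lstsq(B, x, rcond=None)
Xr = np.fft.fft(x - B @ coef)              # spectrum with tone 1 removed
k2 = int(np.argmax(np.abs(Xr[:N // 2])))
f2_hat = (k2 + ipfft_2p(Xr, k2)) * fs / N
```

With the tone frequencies chosen off the bin grid, both estimates land within a small fraction of a bin, whereas a plain two-point interpolation on the raw spectrum is biased by the mutual leakage of the two tones.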

#### *2.2. Parametric Methods*

Numerous parametric methods exist in the literature; however, in this article, only the three algorithms presenting the best compromise in terms of computational requirements and estimation performance have been considered—MUSIC, ESPRIT, and IWPA.

#### 2.2.1. MUSIC

This parametric algorithm (multiple signal classification) [19–21] determines the frequencies of the tones in a signal by performing a decomposition of the covariance matrix of the sequence of signal samples, $x(n)$. We modelled the input data as an $N_s$-tone signal with superimposed noise, as follows:

$$x(n) = \sum_{i=1}^{N_s} A_i \sin(2\pi f_i n T_s + \phi_i) + z(n),\tag{7}$$

where $z(n)$ is the noise signal. The covariance matrix of the signal is $\mathbf{R}_x = E\{\mathbf{x}\mathbf{x}^H\}$, and it can be estimated numerically from the signal samples $x(n)$. If the noise is assumed to be white and Gaussian, the covariance matrix can be decomposed into orthogonal signal and noise subspaces, and the frequencies of the signal tones can be estimated from this decomposition [19]. To apply the MUSIC algorithm, the number of signal tones, $N_s$, must be known in advance, since it determines the number of signal eigenvectors to be retained in the decomposition.
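The subspace decomposition described above can be sketched compactly in NumPy. The covariance order `m`, the search grid, and the tone parameters are illustrative assumptions; each real tone contributes two complex exponentials, so the signal-subspace dimension is $2N_s$.

```python
import numpy as np

def music_spectrum(x, p, freqs, fs, m=32):
    """MUSIC pseudospectrum over the candidate frequencies `freqs`.
    p = signal-subspace dimension (2 per real tone), known in advance;
    m = covariance order (an assumed design parameter)."""
    N = len(x)
    # sample covariance from overlapping length-m snapshots
    snaps = np.array([x[i:i + m] for i in range(N - m + 1)])
    R = snaps.T @ snaps.conj() / snaps.shape[0]
    w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, :m - p]                        # noise-subspace eigenvectors
    # steering vectors a(f) = [1, e^{j2*pi*f/fs}, ..., e^{j2*pi*f(m-1)/fs}]
    a = np.exp(2j * np.pi * np.outer(np.arange(m), freqs) / fs)
    # pseudospectrum peaks where a(f) is orthogonal to the noise subspace
    return 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

# demo: two real tones in white noise (hypothetical values)
fs, N = 1000.0, 512
rng = np.random.default_rng(0)
n = np.arange(N)
x = (np.sin(2 * np.pi * 123.4 * n / fs + 0.3)
     + 0.5 * np.sin(2 * np.pi * 201.7 * n / fs - 1.1)
     + 0.05 * rng.standard_normal(N))
freqs = np.linspace(1, fs / 2 - 1, 4000)
P = music_spectrum(x, p=4, freqs=freqs, fs=fs)   # Ns = 2 tones -> p = 4

# the Ns tallest local maxima of P give the frequency estimates
peaks = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
f_hat = freqs[top2]
```

Note that a wrong choice of `p` (i.e. a wrong assumed $N_s$) either splits genuine peaks or merges noise into the signal subspace, which is why the tone count must be known or estimated beforehand.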
