**1. Introduction**

The Prony method is a popular tool for recovering functions represented by sparse expansions with one generating function. For example, a function of the form

$$f(x) = \sum\_{j=1}^{M} c\_j e^{ix\phi\_j} \tag{1}$$

can be recovered from 2*M* equispaced sampling values *f*(*lh*), *l* = 0, ..., 2*M* − 1 for an appropriate positive constant *h*; however, in many real-world applications, we need to deal with functions represented by more than one generating function. For example, the *harmonic signals* of the form

$$f(\mathbf{x}) = \sum\_{j=1}^{M} \left( c\_j \cos(\phi\_j \mathbf{x}) + d\_j \sin(\beta\_j \mathbf{x}) \right), \tag{2}$$

are generated by two generating functions (or simply generators): cos(*φx*) and sin(*βx*), where *φ* and *β* are generic parameters used as placeholders for the real parameters {*φ<sub>j</sub>*}<sup>*M*</sup><sub>*j*=1</sub> and {*β<sub>j</sub>*}<sup>*M*</sup><sub>*j*=1</sub> that generate the specific terms in the expansion. In this system, we have two sets of coefficients {*c<sub>j</sub>*}<sup>*M*</sup><sub>*j*=1</sub> and {*d<sub>j</sub>*}<sup>*M*</sup><sub>*j*=1</sub> and two sets of frequencies {*φ<sub>j</sub>*}<sup>*M*</sup><sub>*j*=1</sub> and {*β<sub>j</sub>*}<sup>*M*</sup><sub>*j*=1</sub>. Analogous to the original Prony method, we expect to use 4*M* equispaced sampling values *f*(*lh*), *l* = 0, ..., 4*M* − 1 to recover these four sets of parameters.

There are some existing methods to solve this problem. The first one is to convert it to a single-generator problem by the following formulas

$$\cos x = \frac{1}{2} (e^{ix} + e^{-ix}) \quad \text{and} \quad \sin x = \frac{1}{2i} (e^{ix} - e^{-ix}),$$


**Citation:** Hussen, A.; He, W. Prony Method for Two-Generator Sparse Expansion Problem. *Math. Comput. Appl.* **2022**, *27*, 60. https://doi.org/ 10.3390/mca27040060

Academic Editor: Gianluigi Rozza

Received: 23 March 2022 Accepted: 11 July 2022 Published: 15 July 2022



which results in problem (1) (see [1]). Another way using the same idea is based on the *even/odd* properties of cos *x* and sin *x* (see [2]) as follows

$$f(\mathbf{x}) + f(-\mathbf{x}) = 2\sum\_{j=1}^{M} c\_j \cos(\phi\_j \mathbf{x}).\tag{3}$$

However, this approach is very restrictive, because such a conversion is rarely possible. In this paper, we are interested in solving the *general* two-generator sparse expansion problem via a new generalization of the Prony method. More specifically, we study functions with the following two-generator sparse expansion

$$f(\mathbf{x}) = \sum\_{j=1}^{M\_1} c\_j u(\boldsymbol{\phi}\_j \mathbf{x}) + \sum\_{l=1}^{M\_2} d\_l \boldsymbol{\nu}(\boldsymbol{\beta}\_l \mathbf{x}),\tag{4}$$

where *u*(*φx*) and *v*(*βx*) are two different functions used as the generators. In order to make the Prony method work, we need a critical condition for our special technique: *There exists a linear operator, such that u*(*φx*) *and v*(*βx*) *are both eigenfunctions of this operator.*

Another situation that could lead to the two-generator expansion problem is when we apply some special transforms on a sparse expansion. For example, when we apply the *short time Fourier transform* (STFT), i.e.,

$$\text{STFT}\{f(\mathbf{x})\}(\omega,\mathbf{r}) = \int\_{-\infty}^{\infty} f(\mathbf{x}) w(\mathbf{x} - \mathbf{r}) e^{-i\omega \mathbf{x}} d\mathbf{x} \tag{5}$$

using the Gaussian window function $w(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2\sigma^2}}$ on the sparse cosine expansion

$$f(\mathbf{x}) = \sum\_{j=1}^{M} c\_j \cos(\phi\_j \mathbf{x}),\tag{6}$$

we would obtain a two-generator sparse expansion as follows,

$$f(\mathbf{x}) = \sum\_{j=1}^{M} c\_j e^{-\beta(\phi\_j - \mathbf{x})^2} + \sum\_{j=1}^{M} c\_j e^{-\beta(\phi\_j + \mathbf{x})^2}. \tag{7}$$

In this example, the two generators are *e*<sup>−*β*(*φ*−*x*)²</sup> and *e*<sup>−*β*(*φ*+*x*)²</sup> with *β* ≠ 0. Actually, the original single-generator problem (6) can be solved directly. For example, one can convert cos(*φx*) to ½(*e*<sup>*iφx*</sup> + *e*<sup>−*iφx*</sup>) (see [1]), or use a method based on the Chebyshev polynomials (see [3]). When we solve problem (6) directly, we use sampling values in the time domain; when we solve the problem in the form of (7), we use sampling values in the frequency domain. (See [4] for a discussion on sampling values in the frequency domain.) In this paper, we use this example to study the special properties of the two-generator sparse expansion problem.

Since the signals could take various forms, not necessarily in the exponential form studied in the classical Prony method, many researchers generalized the Prony method to handle different types of signals. For example, many results in [1,3,5–12] have been developed over the last few years. In particular, Peter and Plonka in [1,8] generalized the Prony method to reconstruct M-sparse expansions in terms of eigenfunctions of some special linear operators. In [3], Plonka and others reconstructed different signals by exploiting the generalized shift operator. These results provide us the building blocks for our method in this paper.

We organize our presentation in the remaining sections as follows. In Section 2, we quickly review the classical Prony method and one of its generalizations for the Gaussian generating function to establish the foundation of our method. In Section 3, we describe the details of our method using the example with two generators: cosine and sine functions. In Section 4, we apply our method to two different types of Gaussian generating functions, so that we can study an interesting property: *When the Hankel matrix for finding the coefficients of the Prony polynomial is singular, what does it really mean*? In Section 5, we show two examples that correspond to the two problems solved in Sections 3 and 4, respectively. Finally, we draw conclusions in Section 6 and describe two related research problems to be solved in the future.

#### **2. Review of the Prony Method and One of Its Generalizations**

Our method is built on top of the Prony method and one of its generalizations. Before we present our technique, we review these basic methods.

#### *2.1. Classical Prony Method*

Let *f*(*x*) be a function in the form of

$$f(\mathbf{x}) = \sum\_{j=1}^{M} c\_j e^{-i\mathbf{x}\boldsymbol{\phi}\_j} \tag{8}$$

with *M* ≥ 1. Then the coefficients {*c<sub>j</sub>*}<sup>*M*</sup><sub>*j*=1</sub> and the frequencies {*φ<sub>j</sub>*}<sup>*M*</sup><sub>*j*=1</sub> can be recovered from the sampling values *f*(*lh*), *l* = 0, ..., 2*M* − 1, where *h* is some positive constant. To solve this problem, a special polynomial called the Prony polynomial can help us convert the relatively hard *non-linear* problem (8) into two *linear* problems and a *simple non-linear* problem (finding the zeros of a polynomial). The Prony polynomial for (8) is defined as

$$\Lambda(z) = \prod\_{j=1}^{M} (z - e^{-ih\phi\_j}) = \sum\_{l=0}^{M} \lambda\_l z^l,\tag{9}$$

where *λl*, *l* = 0, ..., *M* are the coefficients of the monomial terms in (9) with the leading coefficient *λ<sup>M</sup>* = 1. The technique is based on the following critical property:

$$\sum\_{l=0}^{M} \lambda\_l f(h(l+m)) = \sum\_{l=0}^{M} \lambda\_l \sum\_{j=1}^{M} c\_j e^{-ih(l+m)\phi\_j} = \sum\_{j=1}^{M} c\_j e^{-ihm\phi\_j} \underbrace{\sum\_{l=0}^{M} \lambda\_l e^{-ihl\phi\_j}}\_{=0} = 0 \tag{10}$$

for any *m* = 0, 1, . . . , *M* − 1, which can be written as the following linear system

$$\left[f(h(l+m))\right]\_{l,m=0}^{M-1} \begin{bmatrix} \lambda\_0\\ \vdots\\ \lambda\_{M-1} \end{bmatrix} = -\begin{bmatrix} f(hM)\\ \vdots\\ f(h(2M-1)) \end{bmatrix}.\tag{11}$$

The coefficient vector *λ* = [*λ*<sub>0</sub>, *λ*<sub>1</sub>, ..., *λ*<sub>*M*−1</sub>]<sup>*T*</sup> can be calculated from the 2*M* sampling values *f*(*lh*), *l* = 0, ..., 2*M* − 1. The linear system (11) is guaranteed to have a unique solution under the condition that all *φ<sub>j</sub>*'s are distinct in (−*K*, *K*) ⊂ ℝ for some *K* > 0 (with *h* in the range 0 < *h* < *π*/*K*), and *c*<sub>1</sub>, ..., *c<sub>M</sub>* are nonzero in ℂ, which is a natural requirement for problem (8). This property is a direct result of the following matrix factorization

$$\left[f(h(l+m))\right]\_{l,m=0}^{M-1} = V^T \text{diag}(c\_1, \dots, c\_M) V, \tag{12}$$

where *V* := [*e*<sup>−*ilhφ<sub>j</sub>*</sup>]<sup>*M*−1, *M*</sup><sub>*l*=0, *j*=1</sub> is a Vandermonde matrix, which is non-singular for distinct *φ<sub>j</sub>*'s and *hφ<sub>j</sub>* ∈ (−*π*, *π*] for *j* = 1, ..., *M*. The frequencies can be extracted from the zeros of Λ(*z*) (of the form *z<sub>j</sub>* = *e*<sup>−*ihφ<sub>j</sub>*</sup>) using the formula

$$\phi\_j = \frac{-\operatorname{Im}(\ln(z\_j))}{h}, \quad j = 1, \dots, M. \tag{13}$$

Finally, the coefficients *cj*, *j* = 1, ..., *M* can be determined by solving the following *overdetermined* linear system (with *M* unknowns and 2*M* equations)

$$f(lh) = \sum\_{j=1}^{M} c\_j e^{-ilh\phi\_j}, \quad l = 0, \dots, 2M - 1. \tag{14}$$

The redundant equations in this overdetermined linear system will play a critical role in our two-generator method to help us separate the frequencies associated with the two generators (see Section 3).
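The three steps above (Hankel system, polynomial roots, least squares) can be sketched numerically as follows; this is a minimal illustration, and the parameter values (`M`, `h`, `c_true`, `phi_true`) are arbitrary choices, not taken from the paper:

```python
import numpy as np

# illustrative ground truth for (8): f(x) = sum_j c_j e^{-i x phi_j}
M, h = 2, 1.0                      # h chosen with 0 < h < pi/K
c_true = np.array([1.0, 2.0])
phi_true = np.array([0.5, 1.3])

def f(x):
    return (c_true * np.exp(-1j * np.outer(x, phi_true))).sum(axis=1)

samples = f(h * np.arange(2 * M))

# step 1 (linear): solve the Hankel system (11) for lambda_0..lambda_{M-1}
H = np.array([[samples[l + m] for m in range(M)] for l in range(M)])
lam = np.linalg.solve(H, -samples[M:2 * M])

# step 2 (simple non-linear): zeros of the Prony polynomial, z_j = e^{-i h phi_j}
z = np.roots(np.concatenate([lam, [1.0]])[::-1])   # np.roots wants highest degree first
phi_rec = np.sort(-np.log(z).imag / h)             # formula (13)

# step 3 (linear): least squares on the overdetermined system (14)
V = np.exp(-1j * np.outer(h * np.arange(2 * M), phi_rec))
c_rec, *_ = np.linalg.lstsq(V, samples, rcond=None)
```

With exact samples, the recovered frequencies and coefficients match the ground truth up to rounding errors.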

#### *2.2. Sparse Expansions on Shifted Gaussian*

In order to solve the two-generator sparse expansion problem (7), we need to apply the technique presented in [3], which solves a single-generator sparse expansion problem with the following form

$$f(\mathbf{x}) = \sum\_{j=1}^{M} c\_j e^{-\beta(\mathbf{x} - \phi\_j)^2}, \tag{15}$$

where *β* ∈ ℂ\{0}. The technique relies on the following generalized shift operator

$$\mathcal{S}\_{K,h}f(\mathbf{x}) = K(\mathbf{x},h)f(\mathbf{x}+h),\tag{16}$$

where *h* ≠ 0, and *K*(·, ·) has the property

$$K(\mathbf{x}, h\_1 + h\_2) = K(\mathbf{x}, h\_1)K(\mathbf{x} + h\_1, h\_2) = K(\mathbf{x}, h\_2)K(\mathbf{x} + h\_2, h\_1).$$

The *K*(*x*, *h*) function in (16) is chosen to be *e*<sup>*βh*(2*x*+*h*)</sup>, so that we have the following critical property

$$(\mathcal{S}\_{K,h}e^{-\beta(\phi-\cdot)^2})(x) = e^{2\beta\phi h}e^{-\beta(\phi-x)^2},\tag{17}$$

which means that the *e*<sup>−*β*(*φ<sub>j</sub>*−*x*)²</sup>'s are eigenfunctions of S<sub>*K*,*h*</sub> for all *φ<sub>j</sub>* ∈ ℝ.

The sparse expansion *f*(*x*) in (15) can be reconstructed using 2*M* sampling values *f*(*x*<sub>0</sub> + *hk*), *k* = 0, ..., 2*M* − 1, where *x*<sub>0</sub> is an arbitrary real number. If Re *β* ≠ 0, then *h* ∈ ℝ\{0}; while if Re *β* = 0, then 0 < *h* ≤ *π*/(2|Im *β*|*L*) with *φ<sub>j</sub>* ∈ (−*L*, *L*) for *j* = 1, ..., *M* for some given *L*. (See [3].) The Prony polynomial for the problem in (15) can be defined as:

$$\Lambda(z) := \prod\_{j=1}^{M} (z - e^{2h\beta\phi\_j}) = \sum\_{l=0}^{M} \lambda\_l z^l \tag{18}$$

with *λ<sup>M</sup>* = 1. Then, we have the following linear system

$$\sum\_{l=0}^{M-1} \lambda\_l e^{\beta h(l+m)(2x\_0 + h(l+m))} f(x\_0 + h(l+m)) = -e^{\beta h(m+M)(2x\_0 + h(m+M))} f(x\_0 + h(m+M)) \tag{19}$$

for *m* = 0, 1, ..., *M* − 1, which can be represented as an inhomogeneous system

$$H\lambda = -G, \tag{20}$$

where *G* := [(S<sub>*K*,(*M*+*m*)*h*</sub> *f*)(*x*<sub>0</sub>)]<sup>*M*−1</sup><sub>*m*=0</sub>, and *H* := [(S<sub>*K*,(*l*+*m*)*h*</sub> *f*)(*x*<sub>0</sub>)]<sup>*M*−1</sup><sub>*l*,*m*=0</sub>. This *H* matrix is a Hankel matrix, and it has the following structure

$$\begin{split} H := \left[ (\mathcal{S}\_{K,(l+m)h} f)(x\_0) \right]\_{l,m=0}^{M-1} &= \left[ K(x\_0,(l+m)h) f(x\_0 + (l+m)h) \right]\_{l,m=0}^{M-1} \\ &= V \text{diag}\left(c\_j e^{-\beta(\phi\_j - x\_0)^2}\right) V^T, \end{split} \tag{21}$$

with the Vandermonde matrix

$$V := \begin{bmatrix} 1 & 1 & \dots & 1 \\ e^{2\beta h\phi\_1} & e^{2\beta h\phi\_2} & \dots & e^{2\beta h\phi\_M} \\ \vdots & \vdots & & \vdots \\ e^{2(M-1)\beta h\phi\_1} & e^{2(M-1)\beta h\phi\_2} & \dots & e^{2(M-1)\beta h\phi\_M} \end{bmatrix}.$$

Thus, *H* is invertible for distinct *φ<sub>j</sub>*'s in (−*L*, *L*) ⊂ ℝ for *L* > 0, and the vector of coefficients *λ* := [*λ*<sub>0</sub>, ..., *λ*<sub>*M*−1</sub>]<sup>*T*</sup> is obtained by solving the system (20), which can be used to calculate the parameters {*φ<sub>j</sub>*}'s.

Finally, the coefficients *cj*'s in the expansion (15) can be computed by solving the following overdetermined linear system:

$$f(x\_0 + lh) = \sum\_{j=1}^{M} c\_j e^{-\beta(x\_0 - \phi\_j + lh)^2}, \quad l = 0, \dots, 2M - 1. \tag{22}$$
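The shifted-Gaussian recovery above can also be sketched numerically; as before, this is an illustrative instance with placeholder values for `beta`, `h`, `x0`, and the parameters to recover:

```python
import numpy as np

beta, h, x0, M = 0.3, 0.5, 0.0, 2
c_true = np.array([1.5, -0.7])      # hypothetical coefficients
phi_true = np.array([-1.0, 0.8])    # hypothetical shifts

def f(x):
    x = np.atleast_1d(x)[:, None]
    return (c_true * np.exp(-beta * (x - phi_true) ** 2)).sum(axis=1)

k = np.arange(2 * M)
# g_k = (S_{K,kh} f)(x0) with K(x, s) = e^{beta*s*(2x+s)}
g = np.exp(beta * (k * h) * (2 * x0 + k * h)) * f(x0 + k * h)

# Hankel system (20), then roots z_j = e^{2*beta*h*phi_j} of the Prony polynomial (18)
H = np.array([[g[l + m] for m in range(M)] for l in range(M)])
lam = np.linalg.solve(H, -g[M:2 * M])
z = np.roots(np.concatenate([lam, [1.0]])[::-1])
phi_rec = np.sort(np.log(z.real) / (2 * beta * h))

# coefficients from the overdetermined system (22)
t = x0 + h * k
A = np.exp(-beta * (t[:, None] - phi_rec[None, :]) ** 2)
c_rec, *_ = np.linalg.lstsq(A, f(t), rcond=None)
```

The only difference from the classical method is that the samples are first modulated by the kernel *K* before the Hankel system is formed.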

#### **3. The Sparse Expansion Problem with Two Generators: Cosine and Sine**

In this section, we present our method for solving the two-generator sparse expansion problem in the following form

$$f(\mathbf{x}) = \sum\_{j=1}^{M\_1} c\_j \cos(\phi\_j \mathbf{x}) + \sum\_{l=1}^{M\_2} d\_l \sin(\beta\_l \mathbf{x}) \tag{23}$$

through a modified Prony method. We present our method in the following theorem.

**Theorem 1.** *Assume that a function f*(*x*) *has the two-generator sparse expansion form of* (23)*, where the numbers of terms M*<sub>1</sub> *and M*<sub>2</sub> *for the two generators are known, but the two sets of coefficients* {*c*<sub>1</sub>, ..., *c*<sub>*M*<sub>1</sub></sub>} *and* {*d*<sub>1</sub>, ..., *d*<sub>*M*<sub>2</sub></sub>} *and the two sets of frequencies* {*φ*<sub>1</sub>, ..., *φ*<sub>*M*<sub>1</sub></sub>} *and* {*β*<sub>1</sub>, ..., *β*<sub>*M*<sub>2</sub></sub>} *are unknown. If* 4(*M*<sub>1</sub> + *M*<sub>2</sub>) − 1 *equispaced sampling values of the form f*(*x*<sub>0</sub> + *kh*) *for k* = −2(*M*<sub>1</sub> + *M*<sub>2</sub>) + 1, ..., −1, 0, 1, ..., 2(*M*<sub>1</sub> + *M*<sub>2</sub>) − 1 *are provided, then the original function f*(*x*) *can be uniquely reconstructed under the following conditions:*

- 2° *All the frequencies* {*φ*<sub>1</sub>, ..., *φ*<sub>*M*<sub>1</sub></sub>, *β*<sub>1</sub>, ..., *β*<sub>*M*<sub>2</sub></sub>} *are distinct in a range* [0, *K*) ⊂ ℝ *for some K* > 0*. Furthermore, h is selected from the range* 0 < *h* < *π*/*K*.

**Proof.** First, we choose an appropriate linear operator, such that our two generating functions cos(*φx*) and sin(*βx*) in (23) are both the eigenfunctions of this operator. We consider the *symmetric shift operator* (see [3])

$$\mathcal{S}\_{h,-h}f(\mathbf{x}) := \left(\frac{\mathcal{S}\_{-h}+\mathcal{S}\_{h}}{2}\right)f(\mathbf{x}) = \frac{f(\mathbf{x}-h) + f(\mathbf{x}+h)}{2}.\tag{24}$$

When we apply this operator on cos(*φx*) and sin(*βx*), we obtain

$$\begin{aligned} (\mathcal{S}\_{h,-h})\cos(\phi \mathbf{x}) &= \cos(\phi h)\cos(\phi \mathbf{x}), \\ (\mathcal{S}\_{h,-h})\sin(\beta \mathbf{x}) &= \cos(\beta h)\sin(\beta \mathbf{x}), \end{aligned} \tag{25}$$

where cos(*φh*) and cos(*βh*) are the eigenvalues. Now we define the Prony polynomial for problem (23) using all the eigenvalues {cos(*φ<sub>j</sub>h*)}<sup>*M*<sub>1</sub></sup><sub>*j*=1</sub> and {cos(*β<sub>l</sub>h*)}<sup>*M*<sub>2</sub></sup><sub>*l*=1</sub> as follows:

$$\Lambda(z) = \prod\_{j=1}^{M\_1} (z - \cos(h\phi\_j)) \prod\_{l=1}^{M\_2} (z - \cos(h\beta\_l)),\tag{26}$$

which can be written in terms of the Chebyshev polynomials as

$$\Lambda(z) = \sum\_{k=0}^{M\_1 + M\_2} \lambda\_k T\_k(z),\tag{27}$$

where *T<sub>k</sub>*(*z*) := cos(*k* cos<sup>−1</sup>(*z*)). (See [3] for more information on this technique.) Since the leading coefficient of the Chebyshev polynomial *T<sub>k</sub>*(*z*) is 2<sup>*k*−1</sup>, we choose *λ*<sub>*M*<sub>1</sub>+*M*<sub>2</sub></sub> = 2<sup>1−(*M*<sub>1</sub>+*M*<sub>2</sub>)</sup>, so that Λ(*z*) in (27) has leading coefficient 1. This Prony polynomial has the following critical property:

$$\sum\_{k=0}^{M\_1+M\_2} \lambda\_k T\_k(\cos(\phi\_j h)) = 0 \quad \text{and} \quad \sum\_{k=0}^{M\_1+M\_2} \lambda\_k T\_k(\cos(\beta\_l h)) = 0$$

for *j* = 1, 2, ... , *M*<sup>1</sup> and *l* = 1, 2, ... , *M*2, respectively, which is essential to help us derive the following linear system.
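Before turning to that system, the basis change from the product form (26) to the Chebyshev coefficients *λ<sub>k</sub>* in (27), together with the root property above, can be sketched numerically; the two frequencies below are placeholders:

```python
import numpy as np
from numpy.polynomial import chebyshev, polynomial

# hypothetical frequencies phi_1, beta_1 and step h (M1 = M2 = 1)
h = 0.5
eigs = np.cos(h * np.array([0.9, 1.7]))   # eigenvalues cos(phi_j h), cos(beta_l h)

mono = polynomial.polyfromroots(eigs)     # monic monomial coefficients of Lambda(z)
lam = chebyshev.poly2cheb(mono)           # Chebyshev coefficients lambda_k in (27)

# lam[-1] equals 2^{1-(M1+M2)}, and Lambda vanishes at every eigenvalue:
residuals = chebyshev.chebval(eigs, lam)
```

Here `numpy.polynomial` performs the monomial-to-Chebyshev conversion that the paper carries out symbolically.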

To derive a linear system for {*λ<sub>k</sub>*}<sup>*M*<sub>1</sub>+*M*<sub>2</sub>−1</sup><sub>*k*=0</sub>, we need to calculate the following expression

$$\sum\_{k=0}^{M\_1+M\_2} \lambda\_k \left( \mathcal{S}\_{kh,-kh} \mathcal{S}\_{mh,-mh} f(x\_0) \right),$$

which can be shown to be zero. That is,

$$\frac{1}{4} \sum\_{k=0}^{M\_1+M\_2} \lambda\_k \left( f(\mathbf{x}\_0 + (m+k)h) + f(\mathbf{x}\_0 - (m+k)h) + f(\mathbf{x}\_0 + (m-k)h) + f(\mathbf{x}\_0 - (m-k)h) \right) = 0 \tag{28}$$

for *m* = 0, 1, ... , *M*<sub>1</sub> + *M*<sub>2</sub> − 1. Indeed, using the right-hand-side expression in (23) for *f*(*x*) in (28) and for a fixed *m* ∈ {0, 1, ..., *M*<sub>1</sub> + *M*<sub>2</sub> − 1}, we obtain

$$\begin{split} & \frac{1}{4} \sum\_{k=0}^{M\_1+M\_2} \lambda\_k \left[ \sum\_{j=1}^{M\_1} 2c\_j \left( \cos(\phi\_j(x\_0 + mh)) + \cos(\phi\_j(x\_0 - mh)) \right) \cos(\phi\_j kh) \right] \\ & + \frac{1}{4} \sum\_{k=0}^{M\_1+M\_2} \lambda\_k \left[ \sum\_{l=1}^{M\_2} 2d\_l \left( \sin(\beta\_l(x\_0 + mh)) + \sin(\beta\_l(x\_0 - mh)) \right) \cos(\beta\_l kh) \right] \\ & = \sum\_{j=1}^{M\_1} c\_j \cos(\phi\_j x\_0) \cos(\phi\_j mh) \left( \sum\_{k=0}^{M\_1+M\_2} \lambda\_k \cos(\phi\_j kh) \right) + \sum\_{l=1}^{M\_2} d\_l \sin(\beta\_l x\_0) \cos(\beta\_l mh) \left( \sum\_{k=0}^{M\_1+M\_2} \lambda\_k \cos(\beta\_l kh) \right) \\ & = \sum\_{j=1}^{M\_1} c\_j \cos(\phi\_j x\_0) \cos(\phi\_j mh) \underbrace{\left( \sum\_{k=0}^{M\_1+M\_2} \lambda\_k T\_k (\cos(\phi\_j h)) \right)}\_{=0} + \sum\_{l=1}^{M\_2} d\_l \sin(\beta\_l x\_0) \cos(\beta\_l mh) \underbrace{\left( \sum\_{k=0}^{M\_1+M\_2} \lambda\_k T\_k (\cos(\beta\_l h)) \right)}\_{=0} \\ & = 0. \end{split}$$

We can reformulate the system (28) as

$$\begin{split} & \sum\_{k=0}^{(M\_1+M\_2)-1} \lambda\_k \left( f(\mathbf{x}\_0 + (m+k)h) + f(\mathbf{x}\_0 - (m+k)h) + f(\mathbf{x}\_0 + (m-k)h) + f(\mathbf{x}\_0 - (m-k)h) \right) \\ & = -2^{1-(M\_1+M\_2)} \left( f(\mathbf{x}\_0 + ((M\_1+M\_2)+m)h) + f(\mathbf{x}\_0 - ((M\_1+M\_2)+m)h) \right. \\ & \quad \left. + f(\mathbf{x}\_0 + ((M\_1+M\_2)-m)h) + f(\mathbf{x}\_0 - ((M\_1+M\_2)-m)h) \right) \end{split} \tag{29}$$

for *m* = 0, 1, ... , *M*<sup>1</sup> + *M*<sup>2</sup> − 1. To solve this system, we need 4(*M*<sup>1</sup> + *M*2) − 1 sampling values in the form of *f*(*x*<sup>0</sup> + *kh*) for *k* = −2(*M*<sup>1</sup> + *M*2) +1, ... , −1, 0, 1, ... , 2(*M*<sup>1</sup> + *M*2) −1.

In order to see that the linear system in (29) has a unique solution, we study the (*M*<sub>1</sub> + *M*<sub>2</sub>) × (*M*<sub>1</sub> + *M*<sub>2</sub>) coefficient matrix in (29), which we denote by *H*. As in the classical Prony method, we can factorize *H* in the following structure

$$\begin{split} H &:= \left[ f(x\_0 + (m+k)h) + f(x\_0 - (m+k)h) + f(x\_0 + (m-k)h) + f(x\_0 - (m-k)h) \right]\_{m,k=0}^{(M\_1+M\_2)-1} \\ &= 4 \left[ \sum\_{j=1}^{M\_1} c\_j \cos(\phi\_j x\_0) \cos(\phi\_j mh) \cos(\phi\_j kh) + \sum\_{l=1}^{M\_2} d\_l \sin(\beta\_l x\_0) \cos(\beta\_l mh) \cos(\beta\_l kh) \right]\_{m,k=0}^{(M\_1+M\_2)-1} \\ &= 4V\_h D V\_h^T, \end{split}$$

where the Vandermonde Block matrix *V<sup>h</sup>* can be written as

$$V\_h := \begin{bmatrix} A \mid B \end{bmatrix}, \tag{30}$$

with

$$A := \begin{bmatrix} 1 & \dots & 1 \\ T\_1(\cos \phi\_1 h) & \dots & T\_1(\cos \phi\_{M\_1} h) \\ \vdots & & \vdots \\ T\_{(M\_1+M\_2)-1}(\cos \phi\_1 h) & \dots & T\_{(M\_1+M\_2)-1}(\cos \phi\_{M\_1} h) \end{bmatrix}\_{(M\_1+M\_2) \times M\_1} \tag{31}$$

and

$$\mathcal{B} := \begin{bmatrix} 1 & \dots & 1 \\ & T\_1(\cos \beta\_1 h) & \dots & T\_1(\cos \beta\_{M\_2} h) \\ & \vdots & & \vdots \\ T\_{(M\_1 + M\_2) - 1}(\cos \beta\_1 h) & \dots & T\_{(M\_1 + M\_2) - 1}(\cos \beta\_{M\_2} h) \end{bmatrix}\_{(M\_1 + M\_2) \times M\_2},\tag{32}$$

and the diagonal block matrix *D* can be written as

$$D := \begin{bmatrix} D1 & \mathbf{0} \\ \mathbf{0} & D2 \end{bmatrix}, \tag{33}$$

where

$$D\mathbf{1} := \begin{bmatrix} c\_1 \cos(\phi\_1 \mathbf{x}\_0) \\ & \ddots \\ & & c\_{M\_1} \cos(\phi\_{M\_1} \mathbf{x}\_0) \end{bmatrix} \tag{34}$$

and

$$\mathbf{D2} := \begin{bmatrix} d\_1 \sin(\beta\_1 \mathbf{x}\_0) \\ & \ddots \\ & & d\_{M\_2} \sin(\beta\_{M\_2} \mathbf{x}\_0) \end{bmatrix} . \tag{35}$$

Thus, *H* is guaranteed to be invertible by conditions 2° and 3° of the theorem. Then, we can find the unique solution for {*λ<sub>k</sub>*}<sup>*M*<sub>1</sub>+*M*<sub>2</sub>−1</sup><sub>*k*=0</sub> from the linear system (29).

With these *λ<sub>k</sub>* values for Λ(*z*) as in (26), we can determine the *φ<sub>j</sub>*'s and *β<sub>l</sub>*'s from the zeros of Λ(*z*); however, this step is non-trivial, because we do not know which zeros correspond to *φ<sub>j</sub>*'s and which zeros correspond to *β<sub>l</sub>*'s. In order to resolve this ambiguity, we consider all the possible cases: among the *M*<sub>1</sub> + *M*<sub>2</sub> zeros of Λ(*z*), *M*<sub>1</sub> of them correspond to *φ<sub>j</sub>*'s. Thus, there are a total of $\binom{M\_1+M\_2}{M\_1}$ possible choices for the *φ<sub>j</sub>*'s, among which there is exactly one choice for the solution; however, how do we select the right one? We need to go to the next *overdetermined* linear system for the answer.

When we determine the coefficients *cj*'s and *dl*'s in (23), we have the following linear system

$$f(\mathbf{x}\_0 + hn) = \sum\_{j=1}^{M\_1} c\_j \cos(\phi\_j(\mathbf{x}\_0 + hn)) + \sum\_{l=1}^{M\_2} d\_l \sin(\beta\_l(\mathbf{x}\_0 + hn))\tag{36}$$

for *n* = −2(*M*<sup>1</sup> + *M*2) + 1, ... , −1, 0, ..., 2(*M*<sup>1</sup> + *M*2) − 1 corresponding to all the sampling values, which has 4(*M*<sup>1</sup> + *M*2) − 1 equations and *M*<sup>1</sup> + *M*<sup>2</sup> unknowns. This *overdetermined* linear system gives us the extra information we need to select the true-solution case from the remaining non-solution cases.

Our method is based on an observation: the sampling values {*f*(*x*<sub>0</sub> + *nh*)}<sup>2(*M*<sub>1</sub>+*M*<sub>2</sub>)−1</sup><sub>*n*=−2(*M*<sub>1</sub>+*M*<sub>2</sub>)+1</sub> are calculated using the original *φ<sub>j</sub>*'s and *β<sub>l</sub>*'s (corresponding to the true-solution case), which means that all the 4(*M*<sub>1</sub> + *M*<sub>2</sub>) − 1 equations in (36) are completely satisfied for the true-solution case. In other words, the least-squares solution of (36) for the true-solution case has this property: its error term is zero *theoretically* (or very close to zero due to rounding errors in computation), while the least-squares solution for any non-solution case has a *significant* (with respect to the rounding errors) nonzero error term, which makes the true solution stand out clearly.

Our experiments have verified this phenomenon. Based on this observation, we develop a *two-stage least-square detection* method to minimize the computing cost, and in Section 5, we demonstrate the effectiveness of this method using a simple example.
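The selection stage described above can be sketched as follows; this is only an illustration of the residual comparison, where the frequencies are assumed to have already been recovered from the zeros of Λ(*z*) and all numerical values are placeholders:

```python
import numpy as np
from itertools import combinations

# illustrative setup (M1 = M2 = 1); x0, h, and all parameters are placeholders
M1, M2, h, x0 = 1, 1, 0.5, 0.1
n = np.arange(-2 * (M1 + M2) + 1, 2 * (M1 + M2))
t = x0 + h * n
samples = 1.0 * np.cos(0.9 * t) + 0.5 * np.sin(1.7 * t)

# frequencies assumed recovered from Lambda(z), but unlabeled:
# which belong to the cosine terms and which to the sine terms?
freqs = np.array([0.9, 1.7])

best = None
for cos_idx in combinations(range(len(freqs)), M1):
    sin_idx = [i for i in range(len(freqs)) if i not in cos_idx]
    # overdetermined system (36) for this candidate split
    A = np.hstack([np.cos(t[:, None] * freqs[list(cos_idx)]),
                   np.sin(t[:, None] * freqs[sin_idx])])
    coef, *_ = np.linalg.lstsq(A, samples, rcond=None)
    resid = np.linalg.norm(A @ coef - samples)
    if best is None or resid < best[0]:
        best = (resid, cos_idx, coef)
# the true split has (numerically) zero residual; every other split does not
```

Only the residual comparison is shown here; in the full method the candidate splits come from the $\binom{M_1+M_2}{M_1}$ choices discussed above.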

**Remark 1.** *The overdetermined linear system (36) plays an important role in determining the true solution from a certain number of possible cases. Typically, this situation happens in the multi-generator sparse expansion problem. For the single-generator case, we can select the same number of linearly independent equations from the overdetermined system as the number of unknowns to find the solution; however, for the multi-generator case, the redundant equations are very useful in the least-squares method.*

#### **4. The Sparse Expansion Problem with Two Gaussian Generators**

In this section, we solve another two-generator sparse expansion problem as in (7) that uses the two Gaussian generating functions, *e*<sup>−*β*(*φ*−*x*)²</sup> and *e*<sup>−*β*(*φ*+*x*)²</sup>, in the form of

$$f(\mathbf{x}) = \sum\_{j=1}^{M} c\_j e^{-\beta(\phi\_j - \mathbf{x})^2} + \sum\_{j=1}^{M} c\_j e^{-\beta(\phi\_j + \mathbf{x})^2} \tag{37}$$

for some constant *β* ∈ ℂ\{0}. In order to recover the coefficients *c<sub>j</sub>* ∈ ℂ\{0} and the parameters *φ<sub>j</sub>*'s, we need 4*M* sampling values *f*(*x*<sub>0</sub> + *kh*), *k* = 0, ..., 4*M* − 1, where *x*<sub>0</sub> ∈ ℝ, and *h* satisfies the same condition as in Section 2.2.

This two-generator sparse expansion problem has a special property: when *φ*<sub>*j*<sub>0</sub></sub> = 0 for some *j*<sub>0</sub> ∈ {1, ..., *M*}, the two functions *e*<sup>−*β*(*φ*<sub>*j*<sub>0</sub></sub>−*x*)²</sup> and *e*<sup>−*β*(*φ*<sub>*j*<sub>0</sub></sub>+*x*)²</sup> are the same. This property would cause some problems for our method presented in the previous section. In order to make the discussion easier, we separate these two cases, and first consider the case that *φ<sub>j</sub>* ∈ ℝ\{0} for all *j* = 1, ..., *M*.

**Theorem 2.** *Assume that a function f*(*x*) *has the two-generator sparse expansion form of (37), where the number of terms M and the constant β* ∈ ℂ\{0} *are known, but the coefficients* {*c*<sub>1</sub>, ..., *c<sub>M</sub>*} *and the parameters* {*φ*<sub>1</sub>, ..., *φ<sub>M</sub>*} *are unknown. If* 4*M equispaced sampling values of the form f*(*x*<sub>0</sub> + *kh*) *for k* = 0, 1, ..., 4*M* − 1 *are provided, then the original function f*(*x*) *can be uniquely reconstructed under the following conditions:*


**Proof.** Our method relies on the existence of a *critical linear operator*, such that both generating functions are its eigenfunctions. Here we use the operator S<sub>*K*,*h*</sub> as defined in (16) with *K*(*x*, *h*) := *e*<sup>*βh*(2*x*+*h*)</sup>, which has the following properties:

$$\begin{aligned} (\mathcal{S}\_{K,h} e^{-\beta(\phi-\cdot)^2})(x) &= e^{2\beta\phi h} e^{-\beta(\phi-x)^2}, \\ (\mathcal{S}\_{K,h} e^{-\beta(\phi+\cdot)^2})(x) &= e^{-2\beta\phi h} e^{-\beta(\phi+x)^2}. \end{aligned} \tag{38}$$

Clearly *e*<sup>−*β*(*φ<sub>j</sub>*−·)²</sup> and *e*<sup>−*β*(*φ<sub>j</sub>*+·)²</sup> are eigenfunctions of S<sub>*K*,*h*</sub> for all *φ<sub>j</sub>* ∈ ℝ\{0}, with corresponding eigenvalues *e*<sup>2*βφ<sub>j</sub>h*</sup> and *e*<sup>−2*βφ<sub>j</sub>h*</sup>, respectively, for *j* = 1, ..., *M*. Hence we can define the Prony polynomial using all these eigenvalues:

$$\Lambda(z) = \prod\_{j=1}^{M} (z - e^{2h\beta\phi\_j}) \prod\_{j=1}^{M} (z - e^{-2h\beta\phi\_j}) = \sum\_{l=0}^{2M} \lambda\_l z^l \tag{39}$$

with *λ*<sub>2*M*</sub> = 1. Since the real numbers *φ<sub>j</sub>* ≠ 0, we can assume that *φ<sub>j</sub>* > 0 for all *j* = 1, ..., *M*, based on the structure in (37), without loss of generality.

Then for *m* = 0, 1, ..., 2*M* − 1, we calculate

$$\begin{split} &\sum\_{l=0}^{2M} \lambda\_l (\mathcal{S}\_{K,(l+m)h} f)(x\_0) = \sum\_{l=0}^{2M} \lambda\_l e^{\beta h(l+m)(2x\_0 + h(l+m))} f(x\_0 + h(l+m)) \\ &= \sum\_{l=0}^{2M} \lambda\_l e^{\beta h(l+m)(2x\_0 + h(l+m))} \sum\_{j=1}^{M} c\_j e^{-\beta(\phi\_j - (x\_0 + h(l+m)))^2} + \sum\_{l=0}^{2M} \lambda\_l e^{\beta h(l+m)(2x\_0 + h(l+m))} \sum\_{j=1}^{M} c\_j e^{-\beta(\phi\_j + x\_0 + h(l+m))^2} \\ &= \left( \sum\_{j=1}^{M} c\_j e^{-\beta(x\_0 + hm - \phi\_j)^2} e^{\beta hm(2x\_0 + hm)} \underbrace{\left( \sum\_{l=0}^{2M} \lambda\_l e^{2\beta h l \phi\_j} \right)}\_{=0} \right) + \left( \sum\_{j=1}^{M} c\_j e^{-\beta(x\_0 + hm + \phi\_j)^2} e^{\beta hm(2x\_0 + hm)} \underbrace{\left( \sum\_{l=0}^{2M} \lambda\_l e^{-2\beta h l \phi\_j} \right)}\_{=0} \right) = 0, \end{split}$$

which can be written as the following linear system

$$\sum\_{l=0}^{2M-1} \lambda\_l e^{\beta h(l+m)(2x\_0 + h(l+m))} f(x\_0 + h(l+m)) = -e^{\beta h(m+2M)(2x\_0 + h(m+2M))} f(x\_0 + h(m+2M)) \tag{40}$$

for *m* = 0, 1, ..., 2*M* − 1. To solve this system, we need 4*M* sampling values: *f*(*x*<sub>0</sub> + *kh*) for *k* = 0, 1, ..., 4*M* − 1. To study the existence of a solution for this linear system, we simplify it with respect to the unknown vector *λ* := [*λ*<sub>0</sub>, ..., *λ*<sub>2*M*−1</sub>]<sup>*T*</sup> as follows,

$$H\lambda = -G, \tag{41}$$

with *G* := [(S<sub>*K*,(2*M*+*m*)*h*</sub> *f*)(*x*<sub>0</sub>)]<sup>2*M*−1</sup><sub>*m*=0</sub> and

$$H := \left[ \left( \mathcal{S}\_{K, (l+m)h} f \right) (x\_0) \right]\_{l, m=0}^{2M-1}. \tag{42}$$

The invertibility of *H* can be seen from the following matrix factorization:

$$\begin{split} H &= \left[K(x\_0, h(l+m))f(x\_0 + h(l+m))\right]\_{l,m=0}^{2M-1} \\ &= \left[\sum\_{j=1}^M c\_j e^{\beta h(l+m)(2x\_0 + h(l+m))} e^{-\beta(\phi\_j - (x\_0 + h(l+m)))^2} + \sum\_{j=1}^M c\_j e^{\beta h(l+m)(2x\_0 + h(l+m))} e^{-\beta(\phi\_j + x\_0 + h(l+m))^2}\right]\_{l,m=0}^{2M-1} \\ &= \left[\sum\_{j=1}^M c\_j e^{-\beta(\phi\_j - x\_0)^2} e^{2\beta h(l+m)\phi\_j} + \sum\_{j=1}^M c\_j e^{-\beta(\phi\_j + x\_0)^2} e^{-2\beta h(l+m)\phi\_j}\right]\_{l,m=0}^{2M-1} \\ &= V\_h D V\_h^T, \end{split} \tag{43}$$

where the Vandermonde block matrix *V<sup>h</sup>* has the following form

$$V\_h := \begin{bmatrix} A \mid B \end{bmatrix} \tag{44}$$

with

$$A := \begin{bmatrix} 1 & \dots & 1 \\ e^{2\beta h\phi\_1} & \dots & e^{2\beta h\phi\_M} \\ \vdots & & \vdots \\ e^{2(2M-1)\beta h\phi\_1} & \dots & e^{2(2M-1)\beta h\phi\_M} \end{bmatrix}\_{(2M)\times M} \tag{45}$$

and

$$B := \begin{bmatrix} 1 & \dots & 1 \\ e^{-2\beta h\phi\_1} & \dots & e^{-2\beta h\phi\_M} \\ \vdots & & \vdots \\ e^{-2(2M-1)\beta h\phi\_1} & \dots & e^{-2(2M-1)\beta h\phi\_M} \end{bmatrix}\_{(2M)\times M} \tag{46}$$

and the diagonal block matrix *D* is given by

$$D := \begin{bmatrix} D1 & \mathbf{0} \\ \mathbf{0} & D2 \end{bmatrix} \tag{47}$$

with

$$D1 := \begin{bmatrix} c\_1 e^{-\beta(\phi\_1 - x\_0)^2} \\ & \ddots \\ & & c\_M e^{-\beta(\phi\_M - x\_0)^2} \end{bmatrix} \tag{48}$$

and

$$D2 := \begin{bmatrix} c\_1 e^{-\beta(\phi\_1 + x\_0)^2} \\ & \ddots \\ & & c\_M e^{-\beta(\phi\_M + x\_0)^2} \end{bmatrix}. \tag{49}$$

From this structure, we can see that the Vandermonde matrix *V<sub>h</sub>* in (44) is invertible by conditions 2° and 3° of the theorem, and hence *H* in (42) is also invertible by condition 1°, which results in the unique solution for *λ*.

With all the *λ<sup>l</sup>* values found from the above linear system, we can find all the *φ<sup>j</sup>* values by calculating the zeros of the Prony polynomial of (39). In this case, we do not need to deal with the ambiguity that we encountered in the previous section due to the special structure of the pairs (*φj*, −*φj*)'s. Finally, the coefficients *cj*'s of the sparse expansion (37) can be computed by solving the following *overdetermined* linear system:

$$f(x\_0 + lh) = \sum\_{j=1}^{M} c\_j \left( e^{-\beta(\phi\_j - x\_0 - lh)^2} + e^{-\beta(\phi\_j + x\_0 + lh)^2} \right) \tag{50}$$

for *l* = 0, ..., 4*M* − 1.
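The recovery pipeline above — modulate the samples by the kernel, solve the Hankel system for *λ*, take the zeros of the Prony polynomial, and fit the coefficients through the overdetermined system — can be sketched in a few lines. This is a minimal illustration, not the authors' code; the values β = 0.5, h = 0.5, x<sub>0</sub> = 0, M = 2 and the test parameters are made up.

```python
import numpy as np

# Sketch of the recovery procedure for model (37): modulating the samples
# turns f into a classical 2M-term exponential sum, to which standard
# Prony steps apply. All parameter values below are illustrative.

def recover(samples, x0, h, beta, M):
    k = np.arange(4 * M)
    # (S_{K,kh} f)(x0) = exp(beta*k*h*(2*x0 + k*h)) * f(x0 + k*h)
    s = np.exp(beta * k * h * (2 * x0 + k * h)) * samples
    H = np.array([[s[l + m] for m in range(2 * M)] for l in range(2 * M)])
    G = s[2 * M:4 * M]
    lam = np.linalg.solve(H, -G)                          # lambda_0..lambda_{2M-1}
    roots = np.roots(np.concatenate(([1.0], lam[::-1])))  # zeros of Prony polynomial
    # zeros come in pairs (z_j, 1/z_j) with z_j = exp(2*beta*h*phi_j); keep |z| > 1
    phis = np.sort(np.log(roots[np.abs(roots) > 1].real) / (2 * beta * h))
    # least-squares fit of the coefficients c_j from the raw samples, as in (50)
    x = x0 + k * h
    A = (np.exp(-beta * (phis[None, :] - x[:, None]) ** 2)
         + np.exp(-beta * (phis[None, :] + x[:, None]) ** 2))
    c = np.linalg.lstsq(A, samples, rcond=None)[0]
    return phis, c

beta, h, x0, M = 0.5, 0.5, 0.0, 2
phi_true, c_true = np.array([1.0, 2.0]), np.array([2.0, -1.5])
x = x0 + h * np.arange(4 * M)
f = (np.exp(-beta * (phi_true[None, :] - x[:, None]) ** 2)
     + np.exp(-beta * (phi_true[None, :] + x[:, None]) ** 2)) @ c_true
phis, c = recover(f, x0, h, beta, M)
```

With exact data the recovered frequencies and coefficients agree with the true values; for noisy samples one would typically replace the square solve with a regularized least-squares step.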

**Remark 2.** *The method above only works when φj* ≠ 0 *for all j in* {1, 2, ... , *M*}*; however, in a real-world situation, when we solve a problem of the form* (37) *using* 4*M sampling values, how do we know whether some φj* = 0 *occurs? We need a detection method that tells us whether all the φj's are nonzero before we apply the above method.*

Let us investigate the existence of a solution for the linear system (41), which is determined by the invertibility of *H* in (42). Notice that when *φ*1 = 0, the first column of (45) and the first column of (46) coincide, which makes the matrix *V<sub>h</sub>* in (44) singular. The same argument shows that *H* in (43) is singular if any *φj* = 0. In other words, by checking the invertibility of *H*, we can tell whether any *φj* = 0 occurs in problem (37). If *H* in (42) is singular, our current method does not work. Fortunately, we can modify our method to handle this special situation.
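This detection idea can be tried numerically: build the modulated Hankel matrix from samples of (37) and inspect its smallest singular value. A sketch under made-up parameters (β = 0.5, h = 0.5, x<sub>0</sub> = 0):

```python
import numpy as np

# Sketch of the singularity check: the matrix H in (42), built from the
# modulated samples, becomes (numerically) singular when some phi_j = 0,
# so its smallest singular value flags that case. Values are illustrative.
def smallest_singular_value(phi, c, beta=0.5, h=0.5, x0=0.0):
    M = len(phi)
    k = np.arange(4 * M)
    x = x0 + k * h
    f = (np.exp(-beta * (phi[None, :] - x[:, None]) ** 2)
         + np.exp(-beta * (phi[None, :] + x[:, None]) ** 2)) @ c
    s = np.exp(beta * k * h * (2 * x0 + k * h)) * f      # (S_{K,kh} f)(x0)
    H = np.array([[s[l + m] for m in range(2 * M)] for l in range(2 * M)])
    return np.linalg.svd(H, compute_uv=False)[-1]

sv_regular = smallest_singular_value(np.array([1.0, 2.0]), np.array([2.0, -1.5]))
sv_singular = smallest_singular_value(np.array([0.0, 2.0]), np.array([2.0, -1.5]))
```

The first case (all frequencies nonzero) yields a clearly nonzero smallest singular value, while the second (a zero frequency) collapses to machine-precision zero.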

Let us assume that *φ*<sup>0</sup> = 0, and the remaining *φj*'s are positive numbers. In this case, we modify (37) to

$$f(\mathbf{x}) = c\_0 \mathbf{e}^{-\beta \mathbf{x}^2} + \sum\_{j=1}^{M} c\_j \mathbf{e}^{-\beta(\phi\_j - \mathbf{x})^2} + \sum\_{j=1}^{M} c\_j \mathbf{e}^{-\beta(\phi\_j + \mathbf{x})^2},\tag{51}$$

and its corresponding Prony polynomial is defined as

$$\Lambda(z) = (z - 1)\prod\_{j=1}^{M} (z - e^{2h\beta\phi\_j}) \prod\_{j=1}^{M} (z - e^{-2h\beta\phi\_j}) = \sum\_{l=0}^{2M+1} \lambda\_l z^l \tag{52}$$

with *λ*2*M*+<sup>1</sup> = 1. Since Λ(1) = 0, it leads to

$$\sum\_{l=0}^{2M+1} \lambda\_l = 0.\tag{53}$$
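Equation (53) is easy to confirm numerically: expanding the product (52) for any choice of parameters gives coefficients that sum to zero, since *z* = 1 is a root. A small check with illustrative values h = β = 0.5 and φ = (1, 2, 3):

```python
import numpy as np

# Numerical check of (53): z = 1 is a zero of the Prony polynomial (52),
# so its coefficients must sum to zero. Parameter values are illustrative.
h, beta = 0.5, 0.5
phi = np.array([1.0, 2.0, 3.0])
roots = np.concatenate(([1.0],
                        np.exp(2 * h * beta * phi),
                        np.exp(-2 * h * beta * phi)))
lam = np.poly(roots)   # coefficients of prod_j (z - root_j), leading coefficient 1
coeff_sum = lam.sum()  # equals Lambda(1), hence (numerically) zero
```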

Then we can show that

$$\sum\_{l=0}^{2M+1} \lambda\_l (\mathcal{S}\_{\mathcal{K}, (l+m)h} f)(\mathbf{x}\_0) = 0, \quad \text{for } m = 0, 1, \dots, 2M,\tag{54}$$

because we can split the above left-hand-side summation into the following three summations with zero value each:

$$\sum\_{l=0}^{2M+1} \lambda\_l e^{\beta h(l+m)(2x\_0 + h(l+m))} c\_0 e^{-\beta(x\_0 + h(l+m))^2} = c\_0 e^{-\beta x\_0^2} \underbrace{\sum\_{l=0}^{2M+1} \lambda\_l}\_{=0} = 0,$$

$$\sum\_{l=0}^{2M+1} \lambda\_l e^{\beta h(l+m)(2x\_0 + h(l+m))} \sum\_{j=1}^M c\_j e^{-\beta(\phi\_j - (x\_0 + h(l+m)))^2} = \sum\_{j=1}^M c\_j e^{-\beta(x\_0 + hm - \phi\_j)^2} e^{\beta hm(2x\_0 + hm)} \underbrace{\left(\sum\_{l=0}^{2M+1} \lambda\_l e^{2\beta h l \phi\_j}\right)}\_{=0} = 0,$$

and

$$\sum\_{l=0}^{2M+1} \lambda\_l e^{\beta h(l+m)(2x\_0 + h(l+m))} \sum\_{j=1}^M c\_j e^{-\beta(\phi\_j + (x\_0 + h(l+m)))^2} = \sum\_{j=1}^M c\_j e^{-\beta(x\_0 + hm + \phi\_j)^2} e^{\beta hm(2x\_0 + hm)} \underbrace{\left(\sum\_{l=0}^{2M+1} \lambda\_l e^{-2\beta h l \phi\_j}\right)}\_{=0} = 0.$$

The linear system (54) for *λ* := [*λ*0, ..., *λ*2*M*] *<sup>T</sup>* can be written as

$$H\lambda = -G, \tag{55}$$

$$\text{with } G := \left[ (\mathcal{S}\_{K, (2M+m+1)h} f)(x\_0) \right]\_{m=0}^{2M} \text{ and}$$

$$H := \left[ \left( \mathcal{S}\_{K, (l+m)h} f \right)(x\_0) \right]\_{l, m=0}^{2M}. \tag{56}$$

We use (4*M* + 2) sampling values: *f*(*x*<sup>0</sup> + *kh*) for *k* = 0, 1, ... , 4*M* + 1 to solve the system. Similar to (43), we still have

$$H = V\_h D V\_h^T,$$

but we need to modify *V<sub>h</sub>* to

$$\mathbf{V}\_h := \begin{bmatrix} \mathbf{1} & \mathbf{A} & \mathbf{B} \end{bmatrix}\_{(2M+1)\times(2M+1)},$$

where **1** denotes the all-ones column (corresponding to the zero frequency *φ*0 = 0) and *A* and *B* keep the forms of (45) and (46), now with 2*M* + 1 rows (*l* = 0, ... , 2*M*). This matrix is invertible for positive distinct {*φ*1, ... , *φM*} ⊂ (0, *L*), and the diagonal block matrix *D* becomes

$$D = \begin{bmatrix} c\_0 e^{-\beta x\_0^2} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{D1} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{D2} \end{bmatrix}$$

with *D***1** and *D***2** maintaining the same forms of (48) and (49), respectively.

After we solve the linear system (55), we obtain the Prony polynomial, which has one zero at *z* = 1 while the remaining zeros appear in pairs (*zj*, *zj*<sup>−1</sup>), corresponding to the parameter value 0 and the pairs (*φj*, −*φj*). Finally, we solve the following overdetermined linear system for the values *c*0, *c*1, ... , *cM*

$$f(x\_0 + lh) = c\_0 e^{-\beta(x\_0 + lh)^2} + \sum\_{j=1}^{M} c\_j \left( e^{-\beta(\phi\_j - x\_0 - lh)^2} + e^{-\beta(\phi\_j + x\_0 + lh)^2} \right) \tag{57}$$

for *l* = 0, ..., 4*M* + 1. From this example, we can see that the value of det(*H*) provides important information, namely which of the two systems, (37) or (51), we should work on. This property could be useful when we consider a problem in which the *M* value in (51) is unknown but restricted to a certain range (see the discussion in Section 6).

#### **5. Numerical Experiments**

In this section, we use two simple examples to illustrate the implementation details of our method for the two-generator sparse expansion problem described in the previous sections. The first example is for version (23) in Section 3. The second example is for version (37) in Section 4.

**Example 1.** *We consider a function f*(*x*) *(see Figure 1) that is a two-generator expansion with each generator producing* 5 *terms in the following form*

$$f(\mathbf{x}) = \sum\_{j=1}^{5} c\_j \cos(\phi\_j \mathbf{x}) + \sum\_{j=1}^{5} d\_j \sin(\beta\_j \mathbf{x}),\tag{58}$$

*and the* 20 *parameters we used are listed in the table below to generate the sampling values.*

*How do we use the* 39 *equispaced sampling values (where* 39 *comes from* 4(5 + 5) − 1*) of the form f*(*x*<sup>0</sup> + *kh*), *k* = −19, . . . , 0, . . . , 19*, to recover the original parameters in Table 1?*

**Table 1.** Original parameters of the function *f*(*x*) in (58).


**Figure 1.** The signal *f*(*x*) in (58) with 39 equispaced sampling values.

There are 20 original parameters in two sets, {*c*1, ... , *c*5, *φ*1, ... , *φ*5} and {*d*1, ... , *d*5, *β*1, ... , *β*5}, corresponding to the two generators, respectively. To recover them, first we solve the following linear system for the coefficients of the Prony polynomial {*λ*0, ... , *λ*9}, based on Equation (29):

$$H\lambda = -G,$$

where *H* and *G* are constructed from the sampling values as in (29). Solving this system gives

$$\lambda = \begin{bmatrix} -0.0088 & 0.0275 & -0.0639 & 0.1254 & -0.2113 & 0.3180 & -0.4300 & 0.5316 & -0.6010 & 0.3135 \end{bmatrix}^T,$$

which corresponds to the following Prony polynomial

$$\Lambda(z) = z^{10} + 0.3135z^9 - 0.6010z^8 + 0.5316z^7 - 0.4300z^6 + 0.3180z^5 - 0.2113z^4 + 0.1254z^3 - 0.0639z^2 + 0.0275z - 0.0088.$$
 
From the 10 zeros of this polynomial, we obtain 10 parameter values:

{11.0000, 2.0000, 3.0000, 10.0000, 4.0000, 5.0000, 9.0000, 6.0000, 7.0000, 8.0000}, (59)

which correspond to {*φ*1, ... , *φ*5, *β*1, ... , *β*5}, but in an unknown order. We must resolve the ambiguity: which five of these values belong to {*φ*1, ... , *φ*5} (with the remaining five belonging to {*β*1, ... , *β*5})?

To separate the *φj*'s from *βl*'s, we consider the following overdetermined linear system:

$$
\begin{bmatrix}
\cos(\phi\_1 \mathbf{x}\_0) & \cdots & \cos(\phi\_5 \mathbf{x}\_0) & \sin(\beta\_1 \mathbf{x}\_0) & \cdots & \sin(\beta\_5 \mathbf{x}\_0) \\
\cos(\phi\_1 \mathbf{x}\_1) & \cdots & \cos(\phi\_5 \mathbf{x}\_1) & \sin(\beta\_1 \mathbf{x}\_1) & \cdots & \sin(\beta\_5 \mathbf{x}\_1) \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\cos(\phi\_1 \mathbf{x}\_{19}) & \cdots & \cos(\phi\_5 \mathbf{x}\_{19}) & \sin(\beta\_1 \mathbf{x}\_{19}) & \cdots & \sin(\beta\_5 \mathbf{x}\_{19}) \\
\end{bmatrix}
\begin{bmatrix}
c\_1 \\ \vdots \\ c\_5 \\ d\_1 \\ \vdots \\ d\_5 \end{bmatrix} = \begin{bmatrix}
f\_0 \\ f\_1 \\ \vdots \\ f\_{19} \end{bmatrix},\tag{60}
$$

where we use the shorthand notations

$$\mathbf{x}\_n = \mathbf{x}\_0 + nh \quad \text{and} \quad f\_n = f(\mathbf{x}\_0 + nh)$$

for *n* = 0, 1, ... , 19. Note that in this linear system we only use 20 of the 39 original sampling values, which is adequate for this particular example. This is a trade-off between the accuracy of the computation and its cost in time: in general, the more redundant equations we use, the more accuracy we can achieve in searching for the true solution; conversely, once we can obtain adequate accuracy, we focus on cutting the computation cost to a minimum. We do not solve this overdetermined linear system by the *least-square* method directly. Instead, we split these 20 equations into two parts: in the first part, we approximate the coefficients {*c*1, ... , *c*5, *d*1, ... , *d*5} in (60) by the least-square method; then we apply these derived coefficients to the equations in the second part so as to filter out the true solution.

Among the 10 values in (59), every time we select 5 of them for {*φ*1, ... , *φ*5}, the remaining 5 numbers automatically serve as {*β*1, ... , *β*5}. This gives a total of 252 possible choices (the binomial coefficient (<sup>10</sup><sub>5</sub>)) as candidates for the solution. Since this combinatorial number is relatively large, we reduce redundant computation to a minimum in order to speed up the processing. Let us write {*φ*<sub>1</sub><sup>*i*</sup>, ... , *φ*<sub>5</sub><sup>*i*</sup>, *β*<sub>1</sub><sup>*i*</sup>, ... , *β*<sub>5</sub><sup>*i*</sup>} with *i* = 1, 2, ... , 252 for those 252 candidates. Our method is based on the property that the information given in the sampling values has a lot of redundancy for selecting the true solution, and we use only just enough of it so as to save computation time.
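The candidate enumeration itself is a one-liner: each 5-element subset of the recovered values in (59) is one *φ*-group, and its complement is the *β*-group.

```python
from itertools import combinations
from math import comb

# Enumerating the candidate splits of the 10 recovered values in (59):
# each 5-element subset is a phi-group, its complement the beta-group.
values = (11.0, 2.0, 3.0, 10.0, 4.0, 5.0, 9.0, 6.0, 7.0, 8.0)
candidates = [(phi_group, tuple(v for v in values if v not in phi_group))
              for phi_group in combinations(values, 5)]
n_candidates = len(candidates)   # the binomial coefficient C(10, 5)
```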

First, when we calculate the coefficients {*c*1, ... , *c*5, *d*1, ... , *d*5} by the least-square method, we use exactly 10 equations (the same number as the coefficients) out of the 20 equations in (60). Based on our experiments, we do not have to use an overdetermined system to obtain a good approximation by the least-square method: a determined system already gives an excellent approximation for the least-square problem, while an underdetermined system usually does not fit the data well through the least-square solution. For convenience, we select 10 consecutive equations in (60) somewhere in the middle, which we call the *least-square block* in our discussion, to approximate the coefficients {*c*1, ... , *c*5, *d*1, ... , *d*5}. Specifically, our least-square block takes the subscripts from 6 through 15, and the corresponding sampling values { *f*6, *f*7, ... , *f*15} are selected to form the reduced linear system below,

$$
\begin{bmatrix}
\cos(\phi\_1 \mathbf{x}\_6) & \cdots & \cos(\phi\_5 \mathbf{x}\_6) & \sin(\beta\_1 \mathbf{x}\_6) & \cdots & \sin(\beta\_5 \mathbf{x}\_6) \\
\cos(\phi\_1 \mathbf{x}\_7) & \cdots & \cos(\phi\_5 \mathbf{x}\_7) & \sin(\beta\_1 \mathbf{x}\_7) & \cdots & \sin(\beta\_5 \mathbf{x}\_7) \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\cos(\phi\_1 \mathbf{x}\_{15}) & \cdots & \cos(\phi\_5 \mathbf{x}\_{15}) & \sin(\beta\_1 \mathbf{x}\_{15}) & \cdots & \sin(\beta\_5 \mathbf{x}\_{15}) \\
\end{bmatrix}
\begin{bmatrix}
c\_1 \\ \vdots \\ c\_5 \\ d\_1 \\ \vdots \\ d\_5 \end{bmatrix} = \begin{bmatrix}
f\_6 \\ f\_7 \\ \vdots \\ f\_{15} \end{bmatrix} \tag{61}
$$

Even though our new linear system (61) is a determined system, we still solve it for a least-square solution, because the determinant of the square matrix in (61) could be very close to zero. Then the remaining equations in (60), together with the coefficients derived from (61), will be used to detect which candidate is the true solution based on the error information.

For each set of values {*φ*<sub>1</sub><sup>*i*</sup>, ... , *φ*<sub>5</sub><sup>*i*</sup>, *β*<sub>1</sub><sup>*i*</sup>, ... , *β*<sub>5</sub><sup>*i*</sup>} among the 252 candidates, the least-square solution of the linear system (61) produces the 10 coefficients [*c*<sub>1</sub><sup>*i*</sup>, ... , *c*<sub>5</sub><sup>*i*</sup>, *d*<sub>1</sub><sup>*i*</sup>, ... , *d*<sub>5</sub><sup>*i*</sup>]<sup>*T*</sup>, and we evaluate the following vector

$$
\begin{bmatrix} f\_0^i \\ f\_1^i \\ \vdots \\ f\_{19}^i \end{bmatrix} := \begin{bmatrix} \cos(\phi\_1^i \mathbf{x}\_0) & \cdots & \cos(\phi\_5^i \mathbf{x}\_0) & \sin(\beta\_1^i \mathbf{x}\_0) & \cdots & \sin(\beta\_5^i \mathbf{x}\_0) \\ \cos(\phi\_1^i \mathbf{x}\_1) & \cdots & \cos(\phi\_5^i \mathbf{x}\_1) & \sin(\beta\_1^i \mathbf{x}\_1) & \cdots & \sin(\beta\_5^i \mathbf{x}\_1) \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \cos(\phi\_1^i \mathbf{x}\_{19}) & \cdots & \cos(\phi\_5^i \mathbf{x}\_{19}) & \sin(\beta\_1^i \mathbf{x}\_{19}) & \cdots & \sin(\beta\_5^i \mathbf{x}\_{19}) \end{bmatrix} \begin{bmatrix} c\_1^i \\ \vdots \\ c\_5^i \\ d\_1^i \\ \vdots \\ d\_5^i \end{bmatrix},
$$

which is in general different from the original sampling vector [ *f*0, *f*1, ... , *f*19] *<sup>T</sup>*. Then we will calculate the difference of these two vectors, and see how close they are. We define the error vector as follows:

$$\begin{bmatrix} \mathfrak{e}\_0^i\\ \mathfrak{e}\_1^i\\ \vdots\\ \mathfrak{e}\_{19}^i \end{bmatrix} := \begin{bmatrix} |f\_0^i - f\_0|\\ |f\_1^i - f\_1|\\ \vdots\\ |f\_{19}^i - f\_{19}| \end{bmatrix}. \tag{62}$$

To search for the true solution among the 252 candidates, we discover an intrinsic property, shown in Figures 2 and 3, that can clearly separate the true solution from other candidates.

In Figure 2, we plot the error vector for one of the 252 candidates to view its typical behavior. The error values in the least-square block (with subscripts from 6 to 15) are very close to zero for a typical candidate; however, the error values that are out of the least-square block (with subscripts from 0 to 5 and from 16 to 19) are not close to zero in general for a candidate that is not the true solution.

This behavior can be explained in this way: The errors in the least-square block are usually very small due to the fact that the least-square solution of the determined system approximates the targeting sampling values { *f*6, *f*7, ... , *f*15} quite well; however, when we consider an error for a sampling value out of the least-square block, since the corresponding equation is not involved in the least-square approximation, there is no reason for this equation to generate a value that is very close to the targeting sampling value.

For the true solution, however, the behavior is different: the errors for all the equations in the linear system (60) are very close to zero (see Figure 3, ignoring the two reference points at the ends). Let us summarize the key property that helps us find the true solution among all the candidates: *for a candidate, if the coefficients generated from the determined linear system (61) by the least-square method fail to approximate even one sampling value outside the least-square block well, then it cannot be the true solution*.

However, if the coefficients for one candidate approximate one particular sampling value outside the least-square block well, we can only say that this candidate is *highly likely* to be the true solution, because the probability for a *non-solution* candidate to approximate some sampling value outside the least-square block well is very small. Based on this observation from our experiments, we design the following search strategy.

*Strategy*: Eliminate as many candidates as possible in the first round of filtering, in two steps. *Step 1*. Select a determined linear system from the overdetermined linear system (60) (the least-square block), and approximate the coefficients {*c*1, ... , *c*5, *d*1, ... , *d*5} by the least-square method for each of the 252 candidates. *Step 2*. Apply the coefficients derived in *Step 1* to one of the linear equations outside the least-square block to approximate the targeted sampling value, and compute the corresponding error. If the error is greater than a certain threshold (we use 0.1), we drop this candidate from consideration; otherwise, the candidate passes the first round of filtering. If only one candidate survives the first round, it must be the true solution. If more than one candidate passes, we need a second round of filtering: we apply the derived coefficients to another linear equation outside the least-square block and calculate the error for the targeted sampling value; if the error exceeds the threshold, we eliminate the candidate. We repeat these cycles until we identify the true solution. Since we have plenty of redundant equations outside the least-square block, we should be able to determine the true solution without going through too many cycles in general. Furthermore, the linear equations corresponding to the original sampling values not included in the linear system (60) can still be used in the above steps when necessary, although the probability of needing them is extremely small. This simple strategy allows us to detect the true solution without unnecessary computation, while preserving the option to use the redundant information when necessary.
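The strategy can be sketched end to end on a smaller, made-up instance (2 cosine and 2 sine terms rather than 5 and 5); the block indices, parameter values, and threshold below are illustrative, not those of Table 1.

```python
import numpy as np
from itertools import combinations

# Sketch of the two-stage filtering strategy. 'recovered' plays the role
# of the Prony-polynomial zeros (59), whose assignment to the phi- and
# beta-groups is unknown. All parameter values are made up.
phi_t, beta_t = np.array([2.0, 5.0]), np.array([3.0, 7.0])
c_t, d_t = np.array([1.0, -2.0]), np.array([0.5, 1.5])
x = 0.3 + 0.2 * np.arange(16)
f = np.cos(np.outer(x, phi_t)) @ c_t + np.sin(np.outer(x, beta_t)) @ d_t

recovered = np.array([2.0, 3.0, 5.0, 7.0])
block = slice(4, 8)                      # determined 4-equation least-square block
errs = {}
for idx in combinations(range(4), 2):    # choose which 2 recovered values are phi's
    phi = recovered[list(idx)]
    beta = np.delete(recovered, list(idx))
    A = np.hstack([np.cos(np.outer(x, phi)), np.sin(np.outer(x, beta))])
    coef = np.linalg.lstsq(A[block], f[block], rcond=None)[0]
    errs[idx] = np.abs(A @ coef - f)     # error vector, as in (62)
# round 1: keep candidates whose error at one out-of-block equation is < 0.1
survivors = [idx for idx, e in errs.items() if e[0] < 0.1]
```

The true split (indices (0, 2), i.e. *φ*-group {2, 5}) reproduces all 16 sampling values, while wrong splits generically fail already at the first out-of-block equation.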

**Figure 2.** Display the error vector for one of the 252 candidates.

**Figure 3.** Display the error vector for the true solution with two reference points at the ends.

Here we would like to point out that once we select the values in the *φ*-group or the *β*-group, the order of the values within each group is not important, because their corresponding coefficients (*cj*'s or *dl*'s) will be aligned with them accordingly when we solve the determined linear system (61) by the least-square method.

**Example 2.** *Our second function to be recovered has the following form*

$$\begin{split} f(\omega) &= \frac{c\_1}{2} \left( e^{-\frac{1}{2}(\phi\_1 - \omega)^2} + e^{-\frac{1}{2}(\phi\_1 + \omega)^2} \right) \\ &+ \frac{c\_2}{2} \left( e^{-\frac{1}{2}(\phi\_2 - \omega)^2} + e^{-\frac{1}{2}(\phi\_2 + \omega)^2} \right) \\ &+ \frac{c\_3}{2} \left( e^{-\frac{1}{2}(\phi\_3 - \omega)^2} + e^{-\frac{1}{2}(\phi\_3 + \omega)^2} \right) \end{split} \tag{63}$$

*which is derived by applying the STFT on the following function*

$$g(\mathbf{x}) = \sum\_{j=1}^{3} c\_j \cos(\phi\_j \mathbf{x}),\tag{64}$$

*with the parameters of (64) listed in the following Table 2.*

**Table 2.** Parameters of the function *g*(*x*) in (64).


To solve this problem, we need to use 12 (i.e., 4*M*) sampling values. After we applied the method described in Section 4, we solved a linear system with 6 unknowns, and derived the Prony polynomial of degree 6 as follows

$$
\Lambda(z) = 1.0000(z^6 + 1) - 14.4845(z^5 + z) + 65.9809(z^4 + z^2) - 108.8070z^3.
$$

The symmetric structure of this polynomial tells us that its zeros appear in (*zj*, *zj*<sup>−1</sup>) pairs for *j* = 1, 2, 3, which correspond to three pairs of parameters, (1.0000, −1.0000), (3.0000, −3.0000), and (4.0000, −4.0000), for (*φj*, −*φj*), *j* = 1, 2, 3. Finally, we can solve another linear system for the coefficients *cj*'s, with the errors listed in Table 3.
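This claim is easy to verify numerically (taking the *z*<sup>3</sup> coefficient as −108.8070, the sign consistent with real positive reciprocal root pairs): the zeros come in (*zj*, *zj*<sup>−1</sup>) pairs, and with *h* = 0.5 and *β* = 1/2 the map *φj* = ln(*zj*)/(2*hβ*) = 2 ln *zj* recovers the frequencies 1, 3, 4.

```python
import numpy as np

# Check of the Prony polynomial from Example 2: its zeros come in
# reciprocal pairs (z_j, 1/z_j), and phi_j = 2*ln(z_j) for the roots
# with z_j > 1 recovers the frequencies.
coeffs = [1.0, -14.4845, 65.9809, -108.8070, 65.9809, -14.4845, 1.0]
z = np.roots(coeffs)                       # six real positive roots
phis = np.sort(2 * np.log(z[z.real > 1.0].real))
```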


**Table 3.** Parameters of the function *g*(*x*) in (64) and approximation errors using 12 sampling values with *h* = 0.5.

#### **6. Conclusions**

In this paper, we introduce a method that extends the Prony method to solve the *two-generator sparse expansion problem*. This method relies on the existence of a special linear operator for which the two generators must be eigenfunctions. The two-generator problem has a special property: the zeros of its Prony polynomial correspond to two sets of parameters, and there is no straightforward way to separate them. We propose a *two-stage least-square detection method*, applied to an overdetermined linear system for each candidate, to extract the true solution. It relies on an intrinsic property of the true solution: *only the true solution can use the coefficients derived from the least-square block to approximate well the targeted sampling values outside the least-square block*. Our method is designed to minimize the computation cost while still maintaining the computation accuracy.

It seems that the idea can be extended to the *k*-generator sparse expansion problem for *k* > 2; however, for the general *k*-generator case, the requirement that there exist a linear operator having all the generators as eigenfunctions becomes *extremely* hard to satisfy. For example, in the following sparse expansion problem,

$$f(\mathbf{x}) = \sum\_{j=1}^{M\_1} c\_j \cos(\phi\_j \mathbf{x}) + \sum\_{l=1}^{M\_2} d\_l e^{\beta\_l \mathbf{x}}, \tag{65}$$

it is not easy to find a linear operator such that both cos(*φx*) and *e*<sup>*βx*</sup> are its eigenfunctions. One may argue that the problem could be solved by converting cos(*φx*) to ½(*e*<sup>*iφx*</sup> + *e*<sup>−*iφx*</sup>), so that it becomes a one-generator problem; however, converting a two-generator problem to a one-generator problem may not work most of the time. We are interested in developing a general method that can solve two-generator sparse expansion problems, including the one in (65). There are many difficult problems to be solved in this multi-generator sparse expansion setting, and we would like to see more researchers contribute in this direction.

Our method for the two-generator sparse expansion problem can handle a certain degree of uncertainty. For example, in problem (23), if we know the total number of terms (i.e., the value of *M*1 + *M*2) but not the number of terms in each summation (i.e., the individual values of *M*1 and *M*2), we can still solve the problem using the *two-stage least-square detection* method described in Sections 3 and 5. If we increase the uncertainty a little more, can we still solve the problem?

For example, in the problem considered in Section 4, suppose we do not know the exact number of terms (referred to as the *unknown order of sparsity M* in [1]) in the following expansion,

$$f(\mathbf{x}) = \sum\_{j=1}^{M} c\_j e^{-\beta(\phi\_j - \mathbf{x})^2} + \sum\_{j=1}^{M} c\_j e^{-\beta(\phi\_j + \mathbf{x})^2}$$

and we are given *K* equispaced sampling values for some positive integer *K*. If we are told that these sampling values are sufficient to recover the signal, how do we recover it? In other words, we know that the number of terms *M* lies in the range 1 ≤ *M* ≤ *K*/4, but we do not know its exact value; can we still solve the problem? The answer is *yes*, because we can try all the possible cases *M* = 1, 2, ... , *K*/4, and for each case apply our *two-stage least-square detection* method to tell us whether the true solution can be extracted.
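A rank-based shortcut, in the spirit of the first method cited from [1] below, can be sketched as follows: for exact data from model (37), the modulated Hankel matrix built from the samples has rank 2*M*, which reveals *M* without an exhaustive search. All values below are illustrative.

```python
import numpy as np

# Sketch of rank-based term-number detection for model (37): the modulated
# Hankel matrix has rank 2M for exact data. Parameter values are made up.
beta, h, x0 = 0.5, 0.5, 0.0
phi_true, c_true = np.array([1.0, 2.5]), np.array([1.0, 0.7])   # true M = 2
K = 12                                   # number of given equispaced samples
k = np.arange(K)
x = x0 + k * h
f = (np.exp(-beta * (phi_true[None, :] - x[:, None]) ** 2)
     + np.exp(-beta * (phi_true[None, :] + x[:, None]) ** 2)) @ c_true
s = np.exp(beta * k * h * (2 * x0 + k * h)) * f      # modulated samples
n = K // 2
Hbig = np.array([[s[l + m] for m in range(n)] for l in range(n)])
rank = np.linalg.matrix_rank(Hbig, tol=1e-7)         # equals 2*M for exact data
M_detected = rank // 2
```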

However, we are not satisfied with this kind of *exhaustive search* solution due to its high cost. We plan to develop an efficient *term number detection* method, so that when we make a term number prediction, the method can immediately tell us whether it is correct. In [1], two methods are proposed: one based on the rank of the *H* matrix and the other based on its singular values. The main issue is how to obtain a *reliable* method to determine the *M* value in the sparse expansion; only after we obtain the correct term number will we pay the computation cost to go through all the necessary details to find the solution.

**Author Contributions:** Conceptualization, A.H. and W.H.; methodology, A.H. and W.H; software, A.H. and W.H.; validation, A.H. and W.H.; formal analysis, A.H. and W.H.; investigation, A.H. and W.H.; resources, A.H. and W.H.; data curation, A.H. and W.H.; writing—original draft preparation, A.H. and W.H.; writing—review and editing, A.H. and W.H.; visualization, A.H. and W.H. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest.
