Article

From Time–Frequency to Vertex–Frequency and Back

1 Faculty of Electrical Engineering, University of Montenegro, 81000 Podgorica, Montenegro
2 Center for Artificial Intelligence and Cybersecurity, Faculty of Engineering, University of Rijeka, 51000 Rijeka, Croatia
3 Faculty of Electrical Engineering, Imperial College London, London SW7 2AZ, UK
4 Electrical Engineering, University Côte d'Azur, 06100 Nice, France
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(12), 1407; https://doi.org/10.3390/math9121407
Submission received: 17 May 2021 / Revised: 6 June 2021 / Accepted: 11 June 2021 / Published: 17 June 2021
(This article belongs to the Section Mathematics and Computer Science)

Abstract:
The paper presents an analysis and overview of vertex–frequency analysis, an emerging area in graph signal processing. A strong formal link between this area and classical time–frequency analysis is provided. Vertex–frequency localization-based approaches to analyzing signals on graphs emerged as a response to the challenges of analyzing big data on irregular domains. Graph signals are either localized in the vertex domain before the spectral analysis is performed, or localized in the spectral domain before the inverse graph Fourier transform is applied. The latter approach is the spectral form of vertex–frequency analysis, and it is considered in this paper, since the spectral domain used for signal localization is well ordered and thus simpler to apply to graph signals. The localized graph Fourier transform is defined based on its counterpart in classical signal analysis, the short-time Fourier transform. We consider various spectral window forms through which these transforms can capture localized signal behavior. Conditions for signal reconstruction, known as the overlap-and-add (OLA) and weighted overlap-and-add (WOLA) conditions, are also considered. Since graphs can be very large, realizations of vertex–frequency representations using polynomial localization forms are of particular significance. These forms use only very localized vertex domains, do not require either the graph Fourier transform or its inverse, and are computationally efficient. Such implementations are then applied to classical time–frequency analysis, since their simplicity can be very attractive for the analysis of large time-domain signals. Spectrally varying forms of the localization functions are presented as well; these forms are related to the wavelet transform. For completeness, the inversion and signal reconstruction are also discussed. The presented theory is illustrated and demonstrated through numerical examples.

1. Introduction

Processing of big data, whose domain is irregular and can be represented by a graph, has attracted significant research interest [1,2,3,4,5,6,7,8,9,10]. For big data, the possibility of using smaller and localized subsets of the available information is crucial for their efficient analysis and processing [11]. In addition, in practical applications where large graphs are used as the signal domain, we are commonly more interested in localized analysis than in global behavior. In order to characterize the vertex-localized behavior of signals and their narrow-band spectral properties, joint vertex–frequency domain analysis is introduced. This analysis represents a natural analogy to time–frequency analysis, a well-established area in classical signal processing [12,13,14].
In classical signal analysis, the basic short-time Fourier transform approach uses window functions to localize the signal in time, while the projection of such a windowed signal onto the Fourier transform basis functions provides its spectral localization. Time localization, combined with modulation by the basis functions, produces kernel functions for classical time–frequency analysis. The classical time–frequency analysis approach has been extended to vertex–frequency analysis for signals defined on graphs [15,16,17,18,19,20,21,22]. This generalization is not straightforward, since a graph is a complex and irregular signal domain. Namely, even the time-shift operation, which is trivial in classical time-domain analysis, cannot be straightforwardly generalized to the graph signal domain. This has resulted in several approaches to defining vertex–frequency kernels. One approach is based on vertex domain windows defined using the graph spectral domain [23]. The vertex domain windows can also be fully defined in the vertex domain, using the vertex neighborhood [19].
The vertex domain approaches are based on local analysis within a vertex neighborhood and can be very efficient in the analysis of large graphs. This paper will focus on the vertex–frequency kernels defined in the spectral domain, with spectral shifts performed as in classical signal analysis, while the vertex shifts are implemented in an indirect way, using the basis functions. This approach produces practically very efficient forms, especially when combined with polynomial approximations of the analysis kernels. This paper's primary goal is to provide a strong link between time–frequency analysis and vertex–frequency analysis, and to indicate some new possibilities for simple methods in the time–frequency analysis of large-duration signals based on the vertex–frequency forms. Conditions for signal reconstruction, known as the overlap-and-add (OLA) and weighted overlap-and-add (WOLA) conditions, are considered, and the window forms from classical signal analysis are adapted to satisfy these conditions, with appropriate comments related to their application to vertex–frequency analysis, where the eigenvalues are used instead of the frequency.
The paper is structured as follows. Basic definitions in graph theory and signals on graphs, including the graph Fourier transform, are reviewed in Section 2. A solid formal relation between the classical signal processing paradigm and graph signal processing is provided in Section 3, where benchmark graphs and signals are introduced. In Section 4, the spectral-domain localized graph Fourier transform is presented, along with a few simple basic implementation forms. The general OLA and WOLA conditions for analysis in the graph spectral domain are introduced, with illustration on several windows for each of these conditions, including the spectral domain wavelet-like transform. The polynomial approximations of the presented kernels are the topic of Section 5, where the Chebyshev polynomial series, least-squares approximation, and Legendre polynomial approximation are presented. Inversion of the local graph Fourier transform is elaborated on in Section 6, where both of the defined kernel forms are analyzed. The support uncertainty principle in the general form (such that it can be used for graph signals) is presented in Section 7, along with the discussion on the relation of the local graph Fourier transform support and the kernel function width in the spectral domain. The possibility of splitting large signals into smaller parts and simplifying the analysis of such signals is considered in Section 8. The presented theory is illustrated in numerous examples. The manuscript closes with summarized conclusions and the reference list.

2. Basic Graph Definitions

A graph consists of N vertices, n ∈ V = {1, 2, …, N}, which are connected by edges. The edge weights are W_mn [24,25,26]. For vertices m and n which are not connected, by definition W_mn = 0. The edge weights are the elements of an N × N matrix, W. Graphs can be directed or undirected. For undirected graphs it is assumed that the vertices m and n are connected by the same edge weight in both directions, resulting in a symmetric weight matrix, W = W^T. A graph is unweighted if all nonzero elements of its weight matrix, W, are equal to 1. In this case, the weight matrix assumes a specific form and the edges are represented by a connectivity or adjacency matrix, A. In addition to the adjacency and weight matrices, A and W, several other matrices are used in graph theory. All of them can be derived from the adjacency or weight matrix. The matrix that indicates the vertex degrees in a graph is called the degree matrix. It is diagonal and its common notation is D. The diagonal elements D_nn of the degree matrix are obtained as the sum of all weights of the edges connected to the considered vertex, n, that is, D_nn = Σ_m W_mn. A combination of the weight matrix, W, and the degree matrix, D, produces one of the most commonly used matrices in graph theory, the graph Laplacian. It is defined by
L = D − W.
In the case of an undirected graph, the symmetric form of the weight matrix results in a symmetric graph Laplacian, L = L T .
The eigendecomposition of the graph matrices (for example, of the graph Laplacian L or the adjacency matrix A ) is used for spectral analysis of graphs and graph signals. The eigendecomposition of a graph Laplacian (or any other matrix) relates its eigenvalues, λ k , and the corresponding eigenvectors, u k , by
L u k = λ k u k , for k = 1 , 2 , , N .
where λ_1, λ_2, …, λ_N are not necessarily distinct. Since the graph Laplacian is a real-valued symmetric matrix, it is always diagonalizable, that is, the geometric multiplicity equals the algebraic multiplicity for every eigenvalue. The previous N equations can then be written in a compact matrix form (the eigendecomposition relation for diagonalizable matrices) as
L U = U Λ .
The transformation matrix U consists of the eigenvectors, u_k, k = 1, 2, …, N, as its columns, while Λ is a diagonal matrix whose diagonal elements are λ_k, k = 1, 2, …, N.
The same eigendecomposition relation can be used for the adjacency matrix A u k = λ k u k , k = 1 , 2 , , N .
For a diagonalizable matrix there exists a complete set of eigenvectors, which for the symmetric graph Laplacian can be chosen to be orthonormal. They are used as the transformation basis functions for the definition of the graph Fourier transform (GFT),
X = [ X ( 1 ) , X ( 2 ) , , X ( N ) ] T ,
of a graph signal, x = [ x ( 1 ) , x ( 2 ) , , x ( N ) ] T . The graph signal value at a vertex n is denoted by x ( n ) , n = 1 , 2 , , N , while the notation x is used for the vector of signal values at all vertices. The vector of the GFT of a graph signal x will be denoted by X , and the elements (components) of the GFT vector by X ( k ) , k = 1 , 2 , , N . The elements of a graph signal at a vertex n, x ( n ) , can then be written as a linear combination of the eigenvectors
x(n) = IGFT{X(k)} = Σ_{k=1}^{N} X(k) u_k(n),
where the basis function values u_k(n) are the elements of the k-th eigenvector, u_k, at the vertex n, n = 1, 2, …, N. This is the definition of the inverse graph Fourier transform (IGFT).
The matrix form of the IGFT is x = U X. For real symmetric matrices (corresponding to undirected graphs), the transformation matrix U is orthogonal, U U^T = I, that is, U^{−1} = U^T. The graph Fourier transform (GFT) is then defined by X = U^{−1} x = U^T x, or in element-wise form
X(k) = GFT{x(n)} = Σ_{n=1}^{N} x(n) u_k(n).
For undirected graphs, both the Laplacian and the adjacency matrix are symmetric, resulting in real-valued eigenvectors and real-valued transformation matrices. However, for directed circular graphs, the eigenvalues (and eigenvectors) of the adjacency matrix are complex-valued. Then, the elements of the inverse transformation matrix, U^{−1}, should be used in the GFT definition. When U^{−1} = U^H holds (normal matrices), the complex-conjugate basis functions, u_k^*(n), are used in (2).
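As a minimal numerical sketch of these definitions (in Python/NumPy, with an illustrative 4-vertex undirected graph assumed only for demonstration):

```python
import numpy as np

# Illustrative 4-vertex undirected weight matrix (symmetric, zero diagonal)
W = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])

D = np.diag(W.sum(axis=1))            # degree matrix, D_nn = sum_m W_mn
L = D - W                             # graph Laplacian, L = D - W

# Eigendecomposition L U = U Lambda; eigh returns an orthonormal U for symmetric L
lam, U = np.linalg.eigh(L)

x = np.array([1.0, 0.5, -0.2, 0.7])   # a graph signal, one value per vertex

X = U.T @ x                           # GFT:  X = U^T x
x_rec = U @ X                         # IGFT: x = U X
print(np.allclose(x, x_rec))          # True
```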

3. Classical Signal Processing within the Graph Signal Processing Framework

The graph signal processing will be related to classical time–frequency analysis in two ways: (1) using the directed circular graph and its adjacency matrix, or (2) using the undirected circular graphs and the graph Laplacian. These two relations are discussed next.
Directed circular graph. The signal values, x ( n ) , in classical signal processing systems, are defined in a well-ordered time domain, defined by the time instants denoted by n = 1 , 2 , , N . In the DFT-based classical analysis it has also been assumed that the signal is periodic. The domain of such signals is illustrated in Figure 1 for N = 8 .
Consider next a classical form of a discrete-time finite impulse response (FIR) system. The input–output relation for this system is given by
y(n) = h_0 x(n) + h_1 x(n−1) + h_2 x(n−2) + … + h_M x(n−M).
In order to make a connection with graphs and graph notation of the signal domain, notice that this input–output relation of the FIR system can be written in the matrix form as
y = h_0 x + h_1 A x + h_2 A^2 x + … + h_M A^M x,
where
y = [ y(1), y(2), y(3), …, y(N) ]^T,  x = [ x(1), x(2), x(3), …, x(N) ]^T,  and

A = [ 0 0 ⋯ 0 1 ]
    [ 1 0 ⋯ 0 0 ]
    [ 0 1 ⋯ 0 0 ]
    [ ⋮ ⋮ ⋱ ⋮ ⋮ ]
    [ 0 0 ⋯ 1 0 ].
For this system, the time instants are well-ordered and their connectivity matrix is given by the adjacency matrix. The instants (in graph notation vertices) relation is defined by
  • A_mn = 1 if the vertex (instant) m is a predecessor of (connected to) the instant (vertex) n, and
  • A_mn = 0 otherwise,
as shown in Figure 1 (left), for N = 8. The rows of the adjacency matrix indicate the corresponding vertex connectivity. In the first row, there is a value 1 at position N, meaning that vertex 1 is related to vertex N by a directed edge between these two vertices. In the second row, there is a value 1 at the first position, meaning that vertex 2 is connected to vertex 1 by an edge, as shown in Figure 1 (left).
The elements of the shift matrix relation y = A x are y(n) = x(n−1), as expected for a simple delay operation. The delay by two instants, y(n) = x(n−2), is calculated as A(Ax) = A^2 x, and so on.
Now, we will perform the eigendecomposition of this adjacency matrix A . According to the eigendecomposition relation
A u k = λ k u k ,
or in the matrix form
A U = U Λ, or A = U Λ U^{−1}.
Recall that U is the matrix whose columns are the eigenvectors, u_k, and Λ is the diagonal matrix of eigenvalues, λ_k. The adjacency matrix of the directed circular graph is diagonalizable because all its eigenvalues are distinct. Moreover, the adjacency matrix of a directed circular graph is a circulant matrix, and it is well known that this kind of matrix is diagonalized by the discrete Fourier transform [27] (as will be shown next). In general, the adjacency matrix of a directed graph may not be diagonalizable, in which case the Jordan form should be used ([1] and Appendix A in [3]). Such graphs are not considered in this paper. The input–output relation of a classical FIR system (3) can now be written as
y = h_0 x + h_1 U Λ U^{−1} x + … + h_M U Λ^M U^{−1} x,
where the eigendecomposition property
A^M = U Λ^M U^{−1},
is used.
Now, by left-multiplication by U^{−1}, we can write
U^{−1} y = h_0 U^{−1} x + h_1 Λ U^{−1} x + … + h_M Λ^M U^{−1} x,
or
Y = ( h_0 + h_1 Λ + h_2 Λ^2 + … + h_M Λ^M ) X = H(Λ) X,
where
Y = U^H y  and  X = U^H x,
are the discrete Fourier transforms (DFT) of the output signal, y , and the input signal, x . The diagonal transfer function is denoted by H ( Λ ) and its elements are given by
H(λ_k) = Y(k) / X(k),
for X(k) ≠ 0. Indeed, the presented forms represent the well-known classical DFT-based relations. In order to confirm this conclusion, we will analyze the eigenvalue relation for the presented adjacency matrix, A,
det(A − λI) = 0.
The corresponding characteristic polynomial is given by
det(A − λI) = λ^N − 1 = 0.
Since 1 = e^{−j2π(k−1)}, the solutions for the eigenvalues and eigenvectors are
λ_k = e^{−j2π(k−1)/N},  with  u_k(n) = (1/√N) e^{j2π(k−1)(n−1)/N},
for k = 1 , 2 , , N . The eigenvectors are equal to the DFT basis functions, normalized in such a way that their energy is unity.
We can easily arrive to the element-wise form of the DFT using the GFT definition given in (2).
For implementation issues that will be addressed later, it is crucial to notice that the calculation of A^M x requires only the signal values within the M-neighborhood of each considered instant, n. In the time domain, this means the samples back to the instant (n − M). The fact that A^M x requires only the signal samples within the M-neighborhood of the considered vertex (instant) also holds for general graphs. Local-neighborhood-based calculation is of key importance when large graphs are analyzed or when signals representing big data on large graphs are processed.
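As an illustrative check of these relations (assuming N = 8, as in Figure 1), the following sketch verifies that the adjacency matrix of the directed circular graph acts as a unit delay and that its eigenvectors coincide with the DFT basis functions:

```python
import numpy as np

N = 8
# Adjacency matrix of the directed circular graph: A[n, n-1] = 1 (cyclic),
# so that y = A x performs the unit delay y(n) = x(n-1)
A = np.zeros((N, N))
for n in range(N):
    A[n, (n - 1) % N] = 1.0

x = np.random.randn(N)
print(np.allclose(A @ x, np.roll(x, 1)))          # unit delay

# Eigenvectors of A are the (unit-energy) DFT basis functions
n_idx = np.arange(N)
k_idx = np.arange(N)
U = np.exp(1j * 2 * np.pi * np.outer(n_idx, k_idx) / N) / np.sqrt(N)
lam = np.exp(-1j * 2 * np.pi * k_idx / N)
print(np.allclose(A @ U, U * lam))                # A U = U Lambda

# The GFT with U^(-1) = U^H coincides with the (unitary) DFT of the signal
X = U.conj().T @ x
print(np.allclose(X, np.fft.fft(x) / np.sqrt(N)))
```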
Undirected circular graph. When the circular graph is not directed, as shown in Figure 1 (right), then we should assume that every instant (vertex), n, is connected to both the predecessor vertex (instant), n − 1, and the succeeding vertex, n + 1. The adjacency or weight matrix for this kind of connection, A = W, and the corresponding graph Laplacian, defined by L = D − W, are given by
W = [ 0 1 0 ⋯ 0 0 1 ]
    [ 1 0 1 ⋯ 0 0 0 ]
    [ 0 1 0 ⋯ 0 0 0 ]
    [ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮ ⋮ ]
    [ 1 0 0 ⋯ 0 1 0 ],

D = diag(2, 2, …, 2) = 2I,

L = D − W = [  2 −1  0 ⋯  0  0 −1 ]
            [ −1  2 −1 ⋯  0  0  0 ]
            [  0 −1  2 ⋯  0  0  0 ]
            [  ⋮  ⋮  ⋮ ⋱  ⋮  ⋮  ⋮ ]
            [ −1  0  0 ⋯  0 −1  2 ],
where D is the diagonal (degree) matrix with elements D_nn = Σ_m W_mn. The eigendecomposition relation for the graph Laplacian, L u_k = λ_k u_k, is given in element-wise form by
L u_k(n) = −u_k(n−1) + 2 u_k(n) − u_k(n+1) = λ_k u_k(n),
where L u_k(n) denotes the n-th element of the vector L u_k.
The solution to the difference equation of the second order, (4), can be obtained in the form
u_k(n) = cos( 2π(k−1)(n−1)/N + ϕ_k ),
with the eigenvalue
λ_k = 2( 1 − cos(2π(k−1)/N) ) = 4 sin^2( (1/2) · 2π(k−1)/N ) = 4 sin^2( ω_k/2 ).
For each of the eigenvalues, we can define two distinct orthogonal eigenvectors in quadrature, for example, using ϕ k = 0 and ϕ k = π / 2 in (5). These two eigenvectors correspond to the classical sinusoidal basis functions, cos ( 2 π ( k 1 ) ( n 1 ) / N ) and sin ( 2 π ( k 1 ) ( n 1 ) / N ) , in the Fourier series analysis of real-valued signals. The exceptions are the eigenvalues λ 1 = 0 and the last eigenvalue for an even N, when there is only one basis function. The sine and cosine functions should be normalized to the unit energy, to represent eigenvectors. Therefore, a definition of the graph Laplacian eigenvalues and eigenvectors for an undirected circular graph (for an even N, for example, N = 8 ), taking into account all previous properties, is given by
λ_1 = 0,              u_1(n) = 1/√8,
λ_2 = 4 sin^2(π/8),   u_2(n) = (1/√4) cos( 2π(n−1)/8 ),
λ_3 = 4 sin^2(π/8),   u_3(n) = (1/√4) sin( 2π(n−1)/8 ),
λ_4 = 4 sin^2(2π/8),  u_4(n) = (1/√4) cos( 2π·2(n−1)/8 ),
λ_5 = 4 sin^2(2π/8),  u_5(n) = (1/√4) sin( 2π·2(n−1)/8 ),
λ_6 = 4 sin^2(3π/8),  u_6(n) = (1/√4) cos( 2π·3(n−1)/8 ),
λ_7 = 4 sin^2(3π/8),  u_7(n) = (1/√4) sin( 2π·3(n−1)/8 ),
λ_8 = 4 sin^2(4π/8),  u_8(n) = (1/√4) cos( 2π·4(n−1)/8 ).
The smallest eigenvalue, λ_1 = 0, corresponds to a constant eigenvector, u_1(n) = 1/√8, while the largest eigenvalue, λ_8 = 4, corresponds to the fastest-varying eigenvector, u_8(n) = (−1)^{n−1}/√4.
Smoothness and local smoothness. Notice that for an undirected circular graph and small frequencies, ω_k ≪ 2, the relation in (6) can be approximated by
λ_k = 4 sin^2( ω_k/2 ) ≈ ω_k^2.
This relation means that the graph Laplacian eigenvalue, λ k , corresponding to the eigenvector, u k , can be related to the classical frequency (squared), ω k 2 , of a sinusoidal basis function in classical Fourier series analysis.
In general, it is easy to show that the eigenvalue of the graph Laplacian can be used to indicate the speed of change (called the smoothness) of an eigenvector or a graph signal, in general. Namely, if we left-multiply by u k T both sides of the eigenvalue definition relation L u k = λ k u k we obtain u k T L u k = λ k , since u k T u k = 1 . Now the quadratic form
λ_k = u_k^T L u_k = (1/2) Σ_m Σ_n ( u_k(n) − u_k(m) )^2 W_mn,
measures the change of neighboring values, ( u_k(n) − u_k(m) )^2, weighted by W_mn. Fast changes of u_k(n) produce large values of λ_k, while a constant u_k(n) results in λ_k = 0.
The local smoothness can be defined for a vertex n. It will be denoted by λ(n). This parameter corresponds to the classical time-varying (instantaneous) frequency, ω(t), defined at a time instant t, in the form [28]
λ(n) = L x(n) / x(n).
In this relation, we used L x(n) to denote the n-th element of the vector Lx. It has been assumed that x(n) ≠ 0. If we use x(n) = cos(ω_k n) and the graph Laplacian of an undirected circular graph, as in (4), we obtain the value from (8). In general, if the signal x(n) is equal to an eigenvector u_k(n) at the vertex n and at its neighboring vertices, then λ(n) = λ_k.
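A short numerical sketch of the local smoothness, assuming an undirected circular graph and the signal x(n) = cos(ω_k n) with an illustrative spectral index:

```python
import numpy as np

N = 100
k = 16                                   # illustrative spectral index
omega = 2 * np.pi * k / N
x = np.cos(omega * np.arange(N))         # x(n) = cos(omega_k n)

# Graph Laplacian of the undirected circular graph (every vertex has degree 2)
W = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
L = np.diag(W.sum(axis=1)) - W

lam_n = (L @ x) / x                      # local smoothness L x(n) / x(n), x(n) != 0
print(np.allclose(lam_n, 4 * np.sin(omega / 2) ** 2))   # constant, equal to lambda_k
```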
System on a general graph. The relations presented in this section are the special cases of the general graph Fourier transform (Section 2) and systems for graph signals.
The most important difference between classical systems and systems for graph signals is that the standard shift operator just moves a signal sample from one instant, n, to another instant, n − 1, while the graph shift operator, y = A x or y = L x, moves the signal sample to all neighboring vertices (in the case of the graph Laplacian, in addition to the signal being moved to the neighboring vertices (with a change of sign), its sample is also kept at the original vertex). Notice that the graph shift operator does not satisfy the isometry property, since the energy of the shifted signal is not the same as the energy of the original signal. In analogy with the role of the time shift in standard system theory, a system for graph signals is implemented as a linear combination of a graph signal and its graph-shifted versions,
y = ( h_0 L^0 + h_1 L^1 + … + h_{M−1} L^{M−1} ) x = Σ_{m=0}^{M−1} h_m L^m x = H(L) x,
where, by definition, L^0 = I, while h_0, h_1, …, h_{M−1} are the system coefficients. The spectral form of this relation is given by
Y = ( h_0 + h_1 Λ + h_2 Λ^2 + … + h_M Λ^M ) X = H(Λ) X,
where H(Λ) is a diagonal matrix representing the transfer function of the system for a graph signal. Notice that if the transfer function can be written in the form of a polynomial, as in (11), then the system can be implemented using the graph-shifted forms of the signal, L x, L^2 x, …, up to L^{M−1} x, as in (10), which require only the (M−1)-neighborhood of each signal sample to obtain the system output, independently of the size of the considered graph.
Graph signal filtering—Graph convolution. Three approaches to filtering of a graph signal using a system whose transfer function is G ( Λ ) , with elements on the diagonal G ( λ k ) , k = 1 , 2 , , N , will be presented next.
(i)
The simplest approach is based on the direct employment of the GFT. It is performed by:
(a)
Calculating the GFT of the input signal, X = U^{−1} x,
(b)
Finding the output signal GFT by multiplying X by G ( Λ ) , Y = G ( Λ ) X ,
(c)
Calculating the output (filtered) signal as the inverse GFT of Y, y = U Y.
The result of this operation,
y(n) = x(n) ∗ g(n) = IGFT{ GFT{x(n)} GFT{g(n)} } = IGFT{ X(k) G(λ_k) },
is called a convolution of signals on a graph [25,29].
However, this procedure could be computationally unacceptable for very large graphs.
(ii)
A possibility to avoid the full-size transformation matrices for large graphs is to approximate the filter transfer function, G(λ), at the positions of the eigenvalues, λ = λ_k, k = 1, 2, …, N, by a polynomial, h_0 + h_1 λ + h_2 λ^2 + … + h_M λ^M, that is,
h_0 + h_1 λ_k + h_2 λ_k^2 + … + h_M λ_k^M = G(λ_k),  k = 1, 2, …, N.
Then the system of N equations
V h = diag { G } ,
is solved, in the least-squares sense, for the M + 1 < N unknown parameters of the system, h = [h_0, h_1, …, h_M]^T, with a given M, where
diag{G} = [ G(λ_1), G(λ_2), …, G(λ_N) ]^T
is the column vector of diagonal elements of G. The elements of the matrix V are V(k, m) = λ_k^m, m = 0, 1, …, M, k = 1, 2, …, N (a Vandermonde matrix).
This system can be solved efficiently for a relatively small M. The implementation of the graph filter is then performed in the vertex domain, using the obtained h_0, h_1, …, h_M in (10) and the M-neighborhood of every considered vertex (a minimal numerical sketch of this approach is given after this list). Notice that the relation between the IGFT of diag{G} and the system coefficients h_0, h_1, …, h_M is direct only in the classical DFT case, while it is more complex in the general graph case [25].
For large M, the solution to the system of equations in (12), for the unknown parameters h 0 , h 1 , , h M can be numerically unstable due to large values of the powers λ k M for large M.
(iii)
Another approach that allows us to avoid the direct GFT calculation in the implementation of graph filters is in approximating the given transfer function, G ( λ ) , by a polynomial H ( λ ) , using continuous variable λ .
This approximation does not guarantee that the transfer function G(λ) and its polynomial approximation H(λ) will be close at the discrete set of points λ = λ_p, p = 1, 2, …, N. The maximal absolute deviation of the polynomial approximation can be kept as small as possible using the so-called min–max polynomials. After the polynomial approximation is obtained, the output of the graph system is calculated using (10), that is,
y = Σ_{m=0}^{M−1} h_m L^m x = H(L) x.
This approach will be presented in Section 5.
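For approach (ii), a minimal sketch of the least-squares fit of the polynomial coefficients and of the subsequent vertex-domain filtering; the transfer function and the eigenvalue grid below are illustrative assumptions only:

```python
import numpy as np

# Least-squares fit of a polynomial to a desired transfer function G(lambda_k)
lam_max = 4.0
lam = np.sort(np.random.uniform(0, lam_max, 100))  # stand-in for Laplacian eigenvalues
G_target = np.exp(-2.0 * lam)                      # illustrative low-pass G(lambda_k)

M = 6                                              # highest polynomial power
V = np.vander(lam, M + 1, increasing=True)         # V[k, m] = lam_k ** m
h, *_ = np.linalg.lstsq(V, G_target, rcond=None)   # solve V h ~ diag{G} in the LS sense

def polynomial_filter(L, x, h):
    """Vertex-domain output y = sum_m h_m L^m x, using only local graph shifts."""
    y = np.zeros_like(x, dtype=float)
    z = x.astype(float)                            # z holds L^m x, built recursively
    for h_m in h:
        y += h_m * z
        z = L @ z
    return y
```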
Case study examples. In the next example we shall introduce two graphs and signals on these graphs, which will be used as benchmark models for the analysis that follows.
Example 1.
Two graphs are shown in Figure 2. A circular undirected unweighted graph represents the domain for classical signal analysis, with each of N = 100 vertices (instants) being connected to the predecessor and successor vertices (top panel). A general form of a graph, with the same number of N = 100 vertices, is shown in Figure 2 (bottom). These two graphs will be further used to demonstrate classical and graph signal processing principles and relations.
A signal on the circular graph is shown in Figure 3 (top). We have formed this synthetic signal using parts of three graph Laplacian eigenvectors (corresponding to three harmonics in classical analysis). For the vertices in the subset V_1 = {1, 2, 3, …, 40} ⊂ V, the eigenvector (harmonic) with the spectral index k = 16 was used.
For the subset V_2 = {41, 42, 43, …, 70}, the eigenvector u_k(n), with k = 84, was used to define the signal. The eigenvector with spectral index k = 29 was used to define the signal on the remaining set of vertices, V_3 ⊂ V.
A signal on the general graph is shown in Figure 3 (bottom). It is also composed of parts of three Laplacian eigenvectors. For the vertices in V 1 , the eigenvector with spectral index k = 12 has been used.
For the subset of vertices V_2, containing the vertex indices from n = 41 to n = 70, the eigenvector u_k(n) with k = 84 was used to define the signal. Within the subset V_3 = V \ (V_1 ∪ V_2), the spectral index was k = 29. The supports of these three components are indicated by different vertex colors.
The local smoothness index λ(n), which corresponds to the speed of change of the corresponding components, λ(n) = λ_k, is shown in Figure 4 for the presented graph signals. In classical signal analysis, the local smoothness is related to the instantaneous frequency of each signal component as λ(n) = 4 sin^2( ω(n)/2 ).
Other graph shift operators. Finally, notice that in relation (10), we used the graph Laplacian, L, as the shift operator. In addition to the adjacency matrix, A, which is another common choice for the shift operation, the normalized adjacency matrix, A/λ_max, the normalized graph Laplacian, D^{−1/2} L D^{−1/2}, or the random walk (also called diffusion) matrix, D^{−1} W, may be used as graph shift operators, producing corresponding spectral forms of the systems for graph signals [30].
Remark 1.
The normalized graph Laplacian,
L_N = D^{−1/2} L D^{−1/2} = D^{−1/2} (D − W) D^{−1/2} = I − D^{−1/2} W D^{−1/2}
is used as a shift operator in the first-order system, to define the convolution operation and the convolution layer in the graph convolutional neural networks (GCNN). Its form is
y = ( h_0 L_N^0 + h_1 L_N^1 ) x = (h_0 + h_1) x − h_1 D^{−1/2} W D^{−1/2} x.
Using this relation, the input, x_c^{(l−1)}, and the output, x_c^{(l)}, of the c-th channel of the l-th convolution layer in the GCNN are implemented as
x_c^{(l)} = w_{0c}^{(l−1)} x_c^{(l−1)} + w_{1c}^{(l−1)} D^{−1/2} W D^{−1/2} x_c^{(l−1)},
where the weight w_{0c}^{(l−1)}, in the c-th channel of the l-th convolution layer, corresponds to the weight (h_0 + h_1) in (14), and w_{1c}^{(l−1)} corresponds to (−h_1) in (14).
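A sketch of this first-order convolution step for a single channel; the small weight matrix and the weights w_0 and w_1 below are assumptions used only for illustration:

```python
import numpy as np

def gcn_layer(W, x, w0, w1):
    """One channel of the first-order graph convolution step above."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # assumes no isolated vertices
    W_norm = D_inv_sqrt @ W @ D_inv_sqrt       # D^{-1/2} W D^{-1/2}
    return w0 * x + w1 * (W_norm @ x)

# Illustrative example on a 4-vertex path graph
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
x = np.array([1.0, -0.5, 0.3, 0.8])
y = gcn_layer(W, x, w0=0.7, w1=0.2)
print(y)
```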

4. Spectral Domain Localized Graph Fourier Transform (LGFT)

The classical short-time Fourier transform (STFT) achieves time–frequency localization of the analyzed signal using the Fourier transform of windowed and shifted versions of the signal. The same principle is possible in graph signal processing [29,31]. However, since this approach requires sophisticated definitions of the vertex shift operation, spectral domain localization is more commonly used in vertex–frequency analysis. Although spectral domain localization is possible and well defined in classical analysis, it has rarely been used for time–frequency analysis of signals. The time–frequency localization of a signal in the spectral domain is obtained using a spectral domain localization window, which is shifted in frequency, while the time shift is achieved by the modulation of the windowed Fourier transform of the signal.
We shall use the spectral approach to perform vertex–frequency localization. The graph Fourier transform localized in the spectral domain (LGFT) is defined as an inverse graph Fourier transform of the graph Fourier transform, X(p), multiplied by a spectral domain window, H(k − p). The spectral domain window is nonzero at and around the spectral index k. Therefore, the element-wise LGFT is calculated using
S(m, k) = Σ_{p=1}^{N} X(p) H(k − p) u_p(m).
The shift is here performed in the well-ordered spectral domain, along the spectral index k, instead of the more complex signal shift in the vertex domain. As it will be shown, this form of the vertex–frequency analysis, will also allow vertex localized implementations of the vertex–frequency analysis, even without calculation of the graph Fourier transform of the signal, which is of crucial importance in the case of very large graphs.
Remark 2.
The counterpart of (16) in the classical time–frequency analysis is well-known short-time Fourier transform (STFT) [12]
S(m, k) = (1/√N) Σ_{p=1}^{N} X(p) H(k − p) e^{j 2π(m−1)(p−1)/N},
where H ( k ) is a frequency domain localization window.
The LGFT defined in the spectral domain by (16) can be realized using bandpass transfer functions, denoted by H_k(λ_p) = H(k − p). Then the LGFT definition is given by
S(m, k) = Σ_{p=1}^{N} X(p) H_k(λ_p) u_p(m).
The transfer function in (17), H k ( λ p ) , is centered (shifted) at a spectral index, k, by definition. The vertex–frequency domain kernel, H m , k ( n ) , of the form
H_{m,k}(n) = Σ_{p=1}^{N} H_k(λ_p) u_p(m) u_p(n),
is obtained from
S(m, k) = Σ_{p=1}^{N} Σ_{n=1}^{N} x(n) u_p(n) H_k(λ_p) u_p(m) = Σ_{n=1}^{N} x(n) H_{m,k}(n) = ⟨x(n), H_{m,k}(n)⟩.
Remark 3.
In classical time–frequency analysis, the elements of the inverse DFT matrix U are equal to u_k(n) = exp( j2π(n−1)(k−1)/N ) / √N, and H_k(λ_p) = H_k(e^{jω_p}) are the bandpass transfer functions, with the kernel
H_{m,k}(n) = (1/N) Σ_{p=1}^{N} H_k(e^{jω_p}) e^{j 2π(n−1)(p−1)/N} e^{j 2π(m−1)(k−1)/N}.
The STFT is then defined as
S(m, k) = ⟨x(n), H_{m,k}(n)⟩.
The matrix form of the vertex–frequency spectrum (17) is
[ S(1,k) ]   [ u_1(1) u_2(1) ⋯ u_N(1) ] [ H_k(λ_1)    0     ⋯    0      ] [ X(1) ]
[ S(2,k) ] = [ u_1(2) u_2(2) ⋯ u_N(2) ] [    0     H_k(λ_2) ⋯    0      ] [ X(2) ]
[   ⋮    ]   [   ⋮      ⋮    ⋱   ⋮    ] [    ⋮        ⋮     ⋱    ⋮      ] [  ⋮   ]
[ S(N,k) ]   [ u_1(N) u_2(N) ⋯ u_N(N) ] [    0        0     ⋯ H_k(λ_N) ] [ X(N) ]
or using vector/matrix notation
s k = U H k ( Λ ) X = U H k ( Λ ) U T x ,
where the column vector whose elements are S ( m , k ) , m = 1 , 2 , , N , is denoted by s k .
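A direct spectral-domain sketch of this matrix form of the LGFT; the bandpass functions H_k(λ) are assumed to be supplied as callables:

```python
import numpy as np

def lgft(L, x, transfer_functions):
    """Localized graph Fourier transform: column k is s_k = U H_k(Lambda) U^T x.
    transfer_functions is a list of callables H_k(lambda)."""
    lam, U = np.linalg.eigh(L)                 # GFT basis of an undirected graph
    X = U.T @ x                                # GFT of the signal
    S = np.zeros((len(x), len(transfer_functions)))
    for k, Hk in enumerate(transfer_functions):
        S[:, k] = U @ (Hk(lam) * X)            # vertex-frequency column s_k
    return S
```

The columns of the returned matrix S correspond to the vectors s_k above, i.e., S(m, k) indexed by vertex m and spectral band k.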

4.1. Binomial Decomposition

Consider the simplest decomposition when the total spectral domain of graph signal is divided into K = 2 bands. These two bands, indexed by k = 0 and k = 1 , cover the low-pass part and high-pass part of spectral content of signal, respectively. First we will use the linear functions of eigenvalue λ , to achieve these properties
H_0(λ_p) = 1 − λ_p/λ_max,  H_1(λ_p) = λ_p/λ_max.
Using the relation between (10) and (11) we can conclude that the vertex-domain implementation of this kind of LGFT analysis is very simple
s_0 = ( I − (1/λ_max) L ) x,  s_1 = (1/λ_max) L x,
and for each vertex, m, the calculation of S ( m , 0 ) = s 0 ( m ) and S ( m , 1 ) = s 1 ( m ) , requires only the combination of the signal at this vertex and its neighboring vertices, to calculate the elements of L x .
Remark 4.
The classical time–frequency analysis counterpart of (23) is obtained using the eigenvalue-to-frequency relation for the circular undirected graph, λ = 4 sin^2(ω/2), to produce low-pass and high-pass type transfer functions
H_0(ω) = 1 − sin^2(ω/2) = cos^2(ω/2),  H_1(ω) = sin^2(ω/2),
as shown in Figure 5 (top). These spectral transfer functions are dual to the classical Hann (raised cosine) window forms, used for signal localization in the time domain.
To improve the spectral resolution and to divide the spectral range into more than two bands, we can use the same transfer function forms by applying them to the low-pass part of the signal and dividing the spectral content of this part into its own low-pass and high-pass parts. In classical signal processing, two common approaches are applied:
(a)
The high-pass part is kept unchanged, while the low-pass part is split. This approach corresponds to the wavelet transform or the frequency-varying classical analysis.
(b)
The high-pass part is also split into its low-pass and high-pass parts to keep the frequency resolution constant for all frequency bands.
Next, we consider these two approaches for the division of frequency bands.
(a)
In a two-scale wavelet-like analysis we keep the high-pass part s 1 , while the low-pass part, s 0 , is split in its low-pass part, s 00 and high-pass part, s 01 , using the same transfer function, as
s_1 = (1/λ_max) L x,  s_00 = ( I − L/λ_max )^2 x,  s_01 = ( I − L/λ_max ) (L/λ_max) x.
For the third scale step we would keep s 1 and the high-pass part of the scale two step, s 01 , and then split the low-pass part in scale two, s 00 , into its low-pass part, s 000 , and high-pass part, s 001 , using
s_000 = ( I − L/λ_max )^3 x,  s_001 = ( I − L/λ_max )^2 (L/λ_max) x.
This process could be continued until the desired scale (frequency resolution) is reached.
(b)
For the uniform frequency bands both the low-pass and the high-pass bands are split in the same way, to obtain
s_00 = ( I − L/λ_max )^2 x,  s_01 = 2 ( I − L/λ_max ) (L/λ_max) x,  s_11 = (L^2/λ_max^2) x.
Notice that this kind of spectral band division produces the same result twice: once when the original low-pass part is multiplied by the high-pass function, and again when the original high-pass part is multiplied by the low-pass function. This is the reason why the constant 2 appears in the new middle pass-band, s_01.
The bands in relation (24) can be obtained as the terms of the binomial expression ( (I − L/λ_max) + L/λ_max )^2 x. If we continue to the next level, by multiplying all the elements in (24) by the low-pass part, (I − L/λ_max), and then by the high-pass part, L/λ_max, after grouping the same terms we obtain signal bands of the same form as the terms of the binomial ( (I − L/λ_max) + L/λ_max )^3 x. We can conclude that the division can be performed into K bands corresponding to the terms of the binomial form
( (I − L/λ_max) + L/λ_max )^K x.
The transfer function of the k-th, k = 0 , 1 , 2 , , K , term, has the vertex domain form
H_k(L) = (K choose k) ( I − (1/λ_max) L )^{K−k} ( (1/λ_max) L )^{k}.
Of course, the sum of all parts of the signal filtered by H_k(L) produces the reconstruction relation, Σ_{k=0}^{K} H_k(L) x = x, which is obvious from the identity in (25), that is, from (I − L/λ_max) + L/λ_max = I.
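A vertex-domain sketch of this binomial band decomposition, computing the terms H_k(L) x, k = 0, 1, …, K, using only repeated Laplacian shifts of the signal:

```python
import numpy as np
from math import comb

def binomial_bands(L, x, K, lam_max):
    """Bands H_k(L) x = C(K,k) (I - L/lam_max)^(K-k) (L/lam_max)^k x, k = 0..K."""
    N = len(x)
    A_low = np.eye(N) - L / lam_max          # low-pass operator
    A_high = L / lam_max                     # high-pass operator
    bands = []
    for k in range(K + 1):
        y = x.copy()
        for _ in range(k):
            y = A_high @ y                   # apply (L/lam_max) k times
        for _ in range(K - k):
            y = A_low @ y                    # apply (I - L/lam_max) K-k times
        bands.append(comb(K, k) * y)
    return bands                             # summing the returned bands reconstructs x
```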
Example 2.
The spectral domain transfer functions H_k(λ_p), p = 1, 2, …, N, k = 0, 1, …, K − 1, which correspond to classical time–frequency processing and the binomial form terms for K = 2, K = 3, and K = 26, are shown in Figure 5. The last two panels (the third and fourth panels) show the case with K = 26. In the third panel, the amplitude of every transfer function is normalized. In the fourth panel, all transfer functions for K = 26 are shown without the amplitude normalization.
Example 3.
For a general graph, the spectral domain transfer functions H_k(λ_p), p = 1, 2, …, N, k = 0, 1, …, K − 1, that can be obtained as the terms of the binomial form for K = 2, K = 3, and K = 26 are shown in Figure 6. The last two panels (the third and fourth panels) again show the case with K = 26. In the third panel, the amplitude of every transfer function is normalized. In the fourth panel, all transfer functions for K = 26 are shown without the amplitude normalization.
Example 4.
The vertex-domain implementation is based on the multiplication of the signal, x, by the graph Laplacian, L. For each vertex n, this operation is localized to its one-neighborhood. After the signal Lx is calculated, the new signal L^2 x is easily obtained as the graph Laplacian multiplied by the calculated signal, Lx, that is, L^2 x = L(Lx). This procedure is continued up to any order, L^k x = L(L^{k−1} x).
In classical time–frequency analysis, the multiplication by the graph Laplacian of an undirected circular graph, (1/λ_max) L x with λ_max = 4, is equivalent to the convolution of the signal x with the impulse response of a finite impulse response filter,
h = [ −1/4, 1/2, −1/4 ]^T,
that corresponds to the transfer function H_1(ω) = sin^2(ω/2) = (1 − cos(ω))/2. It means that the high-pass and low-pass parts of the signal are obtained as (the element-wise form of the Laplacian operator applied to the signal is given by (4))
s_1 = x ∗ [ −1/4, 1/2, −1/4 ]^T,  s_1(k) = −(1/4) x(k−1) + (1/2) x(k) − (1/4) x(k+1),
and
s_0 = x − x ∗ h = x ∗ [ 1/4, 1/2, 1/4 ]^T,  s_0(k) = (1/4) x(k−1) + (1/2) x(k) + (1/4) x(k+1),
where ∗ denotes the convolution operation. These convolutions can be repeated to produce a wavelet-like band distribution or a uniform distribution of frequency bands.
If no downsampling is used, then a redundant representation of the signal is obtained, with each of these components containing the same number of samples as the original signal. However, it is possible to form a nonredundant form of this representation. Using downsampling by a factor of 2, the values
s_1(2n) = −(1/4) x(2n−1) + (1/2) x(2n) − (1/4) x(2n+1),  s_0(2n) = (1/4) x(2n−1) + (1/2) x(2n) + (1/4) x(2n+1),
are kept. The signal samples at the even indexed instants, 2 n , are easily obtained as
x ( 2 n ) = s 0 ( 2 n ) + s 1 ( 2 n ) ,
while for the samples at the odd indexed instants, 2 n + 1 , we have
x(2n+1) = 2 s_0(2n) − 2 s_1(2n) − x(2n−1).
Using the initial condition x ( 1 ) , we can reconstruct all odd-indexed samples. This reconstruction can be noise sensitive for large N, due to repeated recursions in the last relation.
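A numerical sketch of this two-band analysis and of the recursive reconstruction after downsampling by 2 (0-based indexing is used here, and the first odd-indexed sample serves as the initial condition):

```python
import numpy as np

# Two-band analysis of a classical (circular) signal, lam_max = 4
x = np.random.randn(64)
xp = np.roll(x, 1)                       # x(n-1), circular indexing
xn = np.roll(x, -1)                      # x(n+1)

s1 = -xp / 4 + x / 2 - xn / 4            # high-pass part, (1/4) L x
s0 =  xp / 4 + x / 2 + xn / 4            # low-pass part, (I - L/4) x
print(np.allclose(s0 + s1, x))           # redundant representation: exact sum

# Downsampled (nonredundant) reconstruction
x_rec = np.zeros_like(x)
x_rec[0::2] = s0[0::2] + s1[0::2]        # even-indexed samples
x_rec[1] = x[1]                          # initial condition for the recursion
for m in range(1, len(x) // 2):
    x_rec[2*m + 1] = 2 * (s0[2*m] - s1[2*m]) - x_rec[2*m - 1]
print(np.allclose(x_rec, x))             # True
```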
Example 5.
Time–frequency and vertex–frequency analysis based on the binomial decomposition of the signals from Example 1 is performed in this example. The corresponding transfer functions for the time–frequency analysis (circular undirected graph, Figure 2 (top)) and vertex–frequency analysis (general graph, Figure 2 (bottom)) are shown in Figure 5 and Figure 6, respectively. The time–frequency representation of the three-harmonic signal from Figure 3 (top) is shown in Figure 7a, (left panel). Its reassigned version to the position of the maximum distribution value is given in Figure 7a (right panel). The same analysis for the general graph signal from Figure 3 (bottom) is shown in Figure 7b. Finally, in order to present the common complex-valued harmonic form, the signal is composed by adding two corresponding sine and cosine components (as in (7)) and forming the complex-valued components u 16 ( n ) + j u 17 ( n ) , within V 1 , u 84 ( n ) + j u 85 ( n ) , within V 2 , and u 28 ( n ) + j u 29 ( n ) , within V 3 . Time–frequency representation of this signal is given in Figure 7c.
Selectivity of the transfer functions can be improved using higher order polynomials, instead of the linear functions in (23). Assuming that the high-pass part should satisfy H 1 ( 0 ) = 0 and H 1 ( λ max ) = 1 , and that its derivative is zero at the initial interval point, ( λ p = 0 ), and the ending interval point ( λ p = λ max ), as well as that H 0 ( λ ) + H 1 ( λ ) = 1 , we can use the following polynomial forms
H_0(λ_p) = 1 − H_1(λ_p),  H_1(λ_p) = 3 (λ_p/λ_max)^2 − 2 (λ_p/λ_max)^3.
The vertex-domain implementation is performed according to
s_1 = (3/λ_max^2) L^2 x − (2/λ_max^3) L^3 x,  s_0 = x − s_1.
The same analysis can be now repeated as for (23). These polynomial forms will be revisited later in this paper.

4.2. Hann (Raised Cosine) Window Decomposition

We have presented the simplest decomposition, into the low-pass and high-pass parts of a signal. However, the LGFT of the form (17) can be calculated using any other set of bandpass functions, H_k(Λ), k = 0, 1, …, K − 1, as
s k = H k ( L ) x , k = 0 , 1 , , K 1 .
The spline or raised cosine (Hann window) functions are commonly used as bandpass functions. To further illustrate the concepts, we will consider next transfer functions in general form of the shifted raised cosine functions. They are given by
H_k(λ) = sin^2( (π/2) (a_k/(b_k − a_k)) (λ/a_k − 1) ),  for a_k < λ ≤ b_k,
         cos^2( (π/2) (b_k/(c_k − b_k)) (λ/b_k − 1) ),  for b_k < λ ≤ c_k,
         0,  elsewhere,
where the spectral bands for H_k(Λ) are defined by (a_k, b_k] and (b_k, c_k], k = 0, 1, …, K − 1. If the spectral bands are uniform within 0 ≤ λ ≤ λ_max, the corresponding intervals are based on
a_k = a_{k−1} + λ_max/(K − 1),  b_k = a_k + λ_max/(K − 1),  c_k = a_k + 2 λ_max/(K − 1),
with a_1 = 0 and lim_{λ→0} (a_1/λ) = 1. Here, only 0 = b_0 ≤ λ ≤ c_0 = λ_max/K is used to define the initial transfer function, H_0(λ), while the interval a_{K−1} < λ ≤ b_{K−1} = λ_max in (28) is used for the last transfer function, H_{K−1}(λ). The transfer functions with K = 15 uniform bands and λ_max = 4 are shown in Figure 8 (top).
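A sketch of these shifted raised-cosine transfer functions for uniform bands, returned as callables so they can be evaluated at the graph eigenvalues; the final line checks the summation property discussed in Section 4.3:

```python
import numpy as np

def hann_bands(lam_max, K):
    """Shifted raised-cosine (Hann) transfer functions H_k(lambda) for K uniform
    bands; returns a list of callables."""
    step = lam_max / (K - 1)
    def make(k):
        a, b, c = (k - 1) * step, k * step, (k + 1) * step
        def Hk(lam):
            lam = np.asarray(lam, dtype=float)
            out = np.zeros_like(lam)
            rise = (lam > a) & (lam <= b)
            fall = (lam > b) & (lam <= c)
            out[rise] = np.sin(0.5 * np.pi * (lam[rise] - a) / (b - a)) ** 2
            out[fall] = np.cos(0.5 * np.pi * (lam[fall] - b) / (c - b)) ** 2
            return out
        return Hk
    return [make(k) for k in range(K)]

# OLA check: the K bands sum to one over the whole spectral range
bands = hann_bands(lam_max=4.0, K=15)
lam = np.linspace(0, 4.0, 400)
print(np.allclose(sum(H(lam) for H in bands), 1.0))
```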
Example 6.
The transfer functions (28) in the eigenvalue (smoothness index) spectral domain, H_k(λ_p), p = 1, 2, …, N, k = 0, 1, …, K − 1, and in the frequency domain for the classical analysis (graph analysis on circular undirected unweighted graphs), are shown in Figure 8 (top) and (bottom), respectively, for K = 15, 0 ≤ λ ≤ λ_max = 4, and 0 ≤ ω ≤ π. Notice that the relation between these two domains is nonlinear, through λ = 4 sin^2(ω/2).
Example 7.
The transfer functions for various widths of the Hann window are shown in Figure 9. The most common case, with a uniform division of the spectral domain as defined by (29), is given in Figure 9a. Two forms of spectrally dependent widths are shown in Figure 9b,c. While the widths defined by the constants a_k in (29) increase as in the wavelet transform case, the widths of the transfer functions in Figure 9c are kept narrow around the spectral indices of the signal components, in order to achieve finer spectral resolution in these regions (a signal-adaptive approach). Finally, Figure 9d shows polynomial approximations of the transfer functions from Figure 9a, which will be discussed later.
Example 8.
The transfer functions with various widths of the Hann window from Figure 9 are used for the time–frequency representation of the signal on the circular graph from Example 1. The results are shown in Figure 10.
Example 9.
In this example, the same transfer functions from Figure 5 are used for the vertex–frequency representation of the signal on the general graph from Example 1. The results are shown in Figure 11.

4.3. General Window Form Decomposition—OLA Condition

The spectral transfer functions in the form of the raised cosine transfer function (28) are characterized by
Σ_{k=0}^{K−1} H_k(λ_p) = 1.
We may use any common window for the decomposition which satisfies this relation. Next, we will list some of these windows:
  • A combination of raised cosine windows. After one set of raised cosine windows is defined, we may use another set with different constants a_k, b_k, c_k and overlap it with the existing set. If the window values are divided by 2, then the resulting set of windows satisfies (30). In this way, we can increase the number of different overlapping windows.
  • Hamming window can be used in the same way as in (28). The only difference is that the Hamming windows sum-up to 1.08 in the overlapping interval, meaning that the result should be divided by this constant.
  • Bartlett (triangular) window with the same constants a k , b k , c k as in (28) satisfies the condition (30), along with combinations with different sets of a k , b k , c k to increase overlapping.
  • Tukey window has a flat part in the middle and the cosine form in the transition interval. It can also be used with appropriately defined a k , b k , c k to take into account the flat (constant) window range.

4.4. Frame Decomposition—WOLA Condition

For the signal reconstruction, using the kernel orthogonality and the frames concept, the windows should satisfy the condition
Σ_{k=0}^{K−1} H_k^2(λ_p) = 1.
The graph signal reconstruction can be performed based on (30) and (31), as discussed with more details in Section 6.
Several windows that satisfy the condition in (31) will be presented next:
  • Sine window is obtained as the square root of the raised cosine window in (28). Obviously, this window will satisfy (31). Its form is
    H_k(λ) = sin( (π/2) (a_k/(b_k − a_k)) (λ/a_k − 1) ),  for a_k < λ ≤ b_k,
             cos( (π/2) (b_k/(c_k − b_k)) (λ/b_k − 1) ),  for b_k < λ ≤ c_k,
             0,  elsewhere.
  • A window that satisfies (31) can be formed for any window in the previous section, by taking its square root.
    Example 10.
    For the case of the Hann window and the triangular (Bartlett) window, their corresponding square root forms, which produce Σ_{k=0}^{K−1} H_k^2(λ_p) = 1, are shown in Figure 12, for a uniform splitting of the spectral domain and for a signal-dependent (wavelet-like) form. Notice that the square root of the Hann window is the sine window.
    It is obvious that these windows are not differentiable at the ending interval points, meaning that their transforms will be widely spread (slowly converging).
    The windows defined as square roots of the presented windows (which originally satisfy the OLA condition) do not satisfy the first-derivative continuity property at the ending interval points. For example, the raised cosine window satisfies that property, but its square root (the sine window) loses this desirable property, as shown in Figure 12.
    To restore this property, we may either define new windows or just use the same windows, such as the raised cosine window, and change argument so that the window derivative is continuous at the ending point. This technique is used to define the following window form.
  • Meyer's window form modifies the square root of the raised cosine window (the sine window) by applying the function v_x(x) to the argument, x, which makes the first derivative continuous at the ending points. In this case, the window functions become [32]
    H_k(λ) = sin( (π/2) v_x( (a_k/(b_k − a_k)) (λ/a_k − 1) ) ),  for a_k < λ ≤ b_k,
             cos( (π/2) v_x( (b_k/(c_k − b_k)) (λ/b_k − 1) ) ),  for b_k < λ ≤ c_k,
             0,  elsewhere,
    with a k + 1 = b k , b k + 1 = c k , while the initial and the last intervals are defined as in (29). In order to overcome the non-differentiability of the sine and cosine functions at the interval-end points, the previous argument from (28), of the form
    v_x(x) = x = (a_k/(b_k − a_k)) (λ/a_k − 1),
    is mapped as
    v_x(x) = x^4 ( 35 − 84 x + 70 x^2 − 20 x^3 ),
    for 0 ≤ x ≤ 1, with x = (a_k/(b_k − a_k)) (λ/a_k − 1), producing Meyer's wavelet-like transfer functions.
    If we now check the derivative of a transfer function, dH_k(λ)/dλ, at the ending interval points, we will find that it is zero-valued. This was the reason for introducing the nonlinear (polynomial) argument form instead of x or λ, having in mind the relation between the arguments x and λ.
    Example 11.
    The transfer functions from the previous example, for the case of the Hann window and the triangular (Bartlett) window, of the forms that produce Σ_{k=0}^{K−1} H_k^2(λ_p) = 1, and whose argument is modified in order to achieve differentiability at the ending points, are shown in Figure 13. Due to the differentiability, these transfer functions have a faster convergence than the forms in the previous example and are appropriate for vertex–frequency and time–frequency analysis. The results of this analysis would be similar to those presented in Figure 10 and Figure 11. A difference exists in the reconstruction procedure as well.
  • Polynomial windows are obtained if the function v_x(x) = x^4 ( 35 − 84 x + 70 x^2 − 20 x^3 ) is applied to the triangular window. Their form is
    H_k(λ) = v_x( (a_k/(b_k − a_k)) (λ/a_k − 1) ),  for a_k < λ ≤ b_k,
             1 − v_x( (b_k/(c_k − b_k)) (λ/b_k − 1) ),  for b_k < λ ≤ c_k,
             0,  elsewhere.
    The simplest polynomial that satisfies the conditions v_x(0) = 0, v_x(1) = 1, v_x′(0) = v_x′(1) = 0 is v_x(x) = a x^3 + b x^2, with a + b = 1 and 3a + 2b = 0, that is,
    v_x(x) = −2 x^3 + 3 x^2.
    In general, the conditions:
    v_x(0) = 0,  v_x(1) = 1,  dv_x(x)/dx |_{x=0} = 0,  dv_x(x)/dx |_{x=1} = 0,
    are satisfied by
    v_x(x) = a x^n + b x^{n−1},
    for n ≥ 3, if n a + (n − 1) b = 0 and a + b = 1, that is, a = −(n − 1), b = n. With n = 5 we obtain
    v_x(x) = −4 x^5 + 5 x^4.
    These transfer functions are the extension of the linear forms presented in (23) and could be very convenient for the vertex (time) implementation. The polynomial of the third order in λ will require only neighborhood 3 in the vertex (time) domain implementation.
  • Spectral graph wavelet transform. In the same way as the LGFT can be defined as a projection of a graph signal onto the corresponding kernel functions, the spectral graph wavelet transform can be calculated as the projections of the signal onto the wavelet transform kernels. The basic form of the wavelet transfer function in the spectral domain is denoted by H ( λ p ) . Then, the other transfer functions of the wavelet transform are obtained as the scaled versions of the basic function H ( λ p ) using the scales s i , i = 1 , 2 , , K 1 . The scaled transform functions are H s i ( λ p ) = H ( s i λ p ) [21,22,33,34,35,36].
    The father wavelet is a low-pass scale function, denoted by G(λ_p); it plays the same role as the function H_0(λ_p) in the LGFT. The set of scales for the calculation of the wavelet transform is s ∈ {s_1, s_2, …, s_{K−1}}. The scaled transfer functions obtained at these scales are H_{s_i}(λ_p) and G(λ_p). Next, the spectral wavelet transform is calculated as a projection of the signal onto the bandpass (and scaled) wavelet kernel, ψ_{m,s_i}(n), in the same way as the kernel H_{m,k}(n) was used in the LGFT in (18). This means that the wavelet kernel elements are
    ψ_{m,s_i}(n) = Σ_{p=1}^{N} H(s_i λ_p) u_p(m) u_p(n),
    with the wavelet coefficients given by
    W(m, s_i) = Σ_{n=1}^{N} ψ_{m,s_i}(n) x(n) = Σ_{n=1}^{N} Σ_{p=1}^{N} H(s_i λ_p) x(n) u_p(m) u_p(n) = Σ_{p=1}^{N} H(s_i λ_p) X(p) u_p(m).
    The Meyer approach to the transfer functions is defined in (33) with the argument v_x(q(s_i λ − 1)). The same form can be applied to the wavelet transform using H(s_i λ_p), with the intervals of support for this function given by:
    1/s_1 < λ ≤ M/s_1, for i = 1 (sine function in (33)),
    1/s_i < λ ≤ M/s_i (sine functions in (33)), i = 2, 3, …, K − 1,
    M/s_i < λ ≤ M^2/s_i (cosine functions in (33)), i = 2, 3, …, K − 1,
    where the scales are defined by s_i = s_{i−1} M = M^i / λ_max.
    The interval for the low-pass function, G(λ), is 0 ≤ λ ≤ M^2/s_{K−1} (the cosine function within M/s_{K−1} < λ ≤ M^2/s_{K−1}, and the value G(λ) = 1 as λ → 0).
    Notice that the wavelet transform is just a special case of the varying transfer function, when the narrow transfer functions are used for low spectral indices and wide transfer functions are used for high spectral indices, as shown in Figure 13b or Figure 10b,d.
    In the implementations, we can use the vertex domain localized polynomial approximations of the spectral wavelet functions in the same way as described in Section 5.
  • Optimization of the vertex–frequency representations. As in classical time–frequency analysis, various measures can be used to compare and optimize joint vertex–frequency representations. An overview of these measures may be found in [37]. Here, we shall suggest the one-norm (in the vector norm sense), introduced to the time–frequency optimization problems in [37], in the form
    M = (1/F) Σ_{m=1}^{N} Σ_{k=0}^{K−1} |S(m, k)| = (1/F) ‖S‖_1,
    where F = ‖S‖_F = √( Σ_{m=1}^{N} Σ_{k=0}^{K−1} |S(m, k)|^2 ) is the Frobenius norm of the matrix S, used for energy normalization. The normalization factor can be omitted if S(m, k) is a tight frame. Here we will just underline that the functions S(m, k) are referred to as a frame. In the case of a graph signal, x, the set of functions S(m, k) is a frame if [22]
    a ‖x‖_2^2 ≤ Σ_{k=0}^{K−1} Σ_{m=1}^{N} |S(m, k)|^2 ≤ b ‖x‖_2^2,
    holds, with a and b being positive constants. These constants determine the stability of reconstructing the signal from the values S(m, k). A frame is called Parseval's tight frame if a = b. The LGFT, as given by (17), represents Parseval's tight frame when
    Σ_{k=0}^{K−1} Σ_{m=1}^{N} |S(m, k)|^2 = Σ_{k=0}^{K−1} Σ_{p=1}^{N} |X(p) H_k(λ_p)|^2 = E_x = constant.
    Notice that Parseval’s theorem is used for the LGFT, S ( m , k ) , as it is the GFT of the spectral windowed signal, X ( p ) H k ( λ p ) . With this fact in mind we obtain
    Σ_{m=1}^{N} |S(m, k)|^2 = Σ_{p=1}^{N} |X(p) H_k(λ_p)|^2.
    The LGFT defined by (17) is a tight frame if the condition in (31) or (49) holds. This is the condition that is used to define transfer functions shown in Figure 5b,c.
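As a numerical check of the WOLA condition (31), the sine windows (square roots of the uniform Hann bands) can be evaluated on a dense grid of λ values; a minimal sketch under these assumptions:

```python
import numpy as np

# WOLA / tight-frame check: the sine windows satisfy sum_k H_k^2(lambda) = 1
lam_max, K = 4.0, 15
step = lam_max / (K - 1)
a = (np.arange(K) - 1) * step            # a_k, with a_1 = 0
b, c = a + step, a + 2 * step

def H_sine(lam, k):
    """Sine window for band k: square root of the raised-cosine band."""
    lam = np.asarray(lam, dtype=float)
    out = np.zeros_like(lam)
    rise = (lam > a[k]) & (lam <= b[k])
    fall = (lam > b[k]) & (lam <= c[k])
    out[rise] = np.sin(0.5 * np.pi * (lam[rise] - a[k]) / step)
    out[fall] = np.cos(0.5 * np.pi * (lam[fall] - b[k]) / step)
    return out

lam = np.linspace(0, lam_max, 400)
print(np.allclose(sum(H_sine(lam, k) ** 2 for k in range(K)), 1.0))   # condition (31)
```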

5. Polynomial LGFT Approximation

Let us assume that the spectral domain localization window in the LGFT corresponds to a transfer function of a bandpass graph system, H k ( λ p ) . In the case of very large graphs, for the vertex domain realization of the LGFT, it is of crucial importance to define or approximate this transfer function by a polynomial
H_k(λ_p) = h_{0,k} + h_{1,k} λ_p + … + h_{M−1,k} λ_p^{M−1},
where k = 0, 1, …, K − 1, K is the number of spectral bands, and M − 1 is the polynomial order. It is assumed that the transfer function H_k(λ_p) is centered at an eigenvalue, λ_k, and is of a bandpass type around it (as in (17)). The vector form of the LGFT, S(m, k), defined for the vertex index m and spectral index k by (18), is given as follows,
s_k = U H_k(Λ) U^T x = H_k(L) x = Σ_{p=0}^{M−1} h_{p,k} L^p x,
In vector notation, s k is a column vector, whose elements are equal to S ( m , k ) , m = 1 , 2 , , N . The property of the eigendecomposition of a power matrix is used to obtain this result. The number of shifted transfer functions, K, does not depend on the number of indices N. The realization of the LGFT is based on the linear combination of the graph signal shifts L p x , and does not require any graph Fourier transform or other operation on the entire graph.
For this reason the bandpass LGFT functions, H k ( λ ) , k = 0 , 1 , , K 1 , in the form given by (28) or (33) should be realized using their approximations by polynomials whose order is ( M 1 ) . Although the approximation based on the Chebyshev polynomial is most commonly used for this purpose [25,31], we will revisit alternative approaches [38] as well.

5.1. Chebyshev Polynomial

The transfer functions $H_k(\lambda)$ in the graph relations are defined on the discrete set of eigenvalues $\lambda = \lambda_p$. The polynomial approximation, however, is obtained using a continuous function of $\lambda$ over the range $0 \le \lambda \le \lambda_{\max}$. The optimal choice for this type of approximation is the so-called “min–max” Chebyshev polynomial, whose maximal possible deviation from the desired function is minimal. This property is of crucial importance, since we approximate the transfer functions in continuous $\lambda$, and the approximations are then used at the discrete set of eigenvalues $\lambda_p$ in the LGFT.
The Chebyshev polynomials are defined by
$$T_0(z) = 1, \quad T_1(z) = z, \quad T_m(z) = 2zT_{m-1}(z) - T_{m-2}(z),$$
for $m \ge 2$ and $-1 \le z \le 1$.
The mapping $\bar{T}_m(\lambda) = T_m(2\lambda/\lambda_{\max} - 1)$ is introduced to transform the argument from $0 \le \lambda \le \lambda_{\max}$ to $-1 \le z \le 1$. Then, the Chebyshev polynomial approximation of finite order $(M-1)$ can be written as follows
$$\bar{P}_{k,M-1}(\lambda) = \frac{c_{k,0}}{2} + \sum_{m=1}^{M-1} c_{k,m}\bar{T}_m(\lambda),$$
where the polynomial coefficients are calculated using the Chebyshev polynomial inversion property as
$$c_{k,m} = \frac{2}{\pi}\int_{-1}^{1} H_k\big((z+1)\lambda_{\max}/2\big)\, T_m(z)\,\frac{dz}{\sqrt{1-z^2}}.$$
Based on the previous definitions, the vertex-domain implementation (39) of the spectral LGFT form can now be written as follows
$$\mathbf{s}_k = \bar{P}_{k,M-1}(\mathbf{L})\,\mathbf{x}, \quad k = 0, 1, \dots, K-1,$$
with
$$\bar{P}_{k,M-1}(\mathbf{L}) = \frac{c_{k,0}}{2}\mathbf{I} + \sum_{m=1}^{M-1} c_{k,m}\bar{T}_m(\mathbf{L}) = h_{0,k}\mathbf{I} + h_{1,k}\mathbf{L} + h_{2,k}\mathbf{L}^2 + \dots + h_{M-1,k}\mathbf{L}^{M-1}.$$
In the calculation of the polynomial form of the transfer functions in (41), only the $(M-1)$-neighborhood of each vertex, $n$, is used to obtain the LGFT. This form does not employ the eigendecomposition of the whole graph in any way; therefore, the computational complexity remains feasible for large graphs.
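The two ingredients of this approach, the coefficients $c_{k,m}$ and the evaluation of $\bar{P}_{k,M-1}(\mathbf{L})\mathbf{x}$ through the three-term recurrence, can be sketched as follows. The band transfer function is assumed to be available as a Python callable H, and the quadrature size is an arbitrary choice; this is an illustrative sketch rather than the exact implementation used for the figures.

```python
import numpy as np

def chebyshev_coeffs(H, lam_max, M, n_quad=200):
    """Chebyshev coefficients c_m of H(lambda) on [0, lam_max], computed from the
    inversion integral written in its Gauss-Chebyshev quadrature form."""
    theta = np.pi * (np.arange(n_quad) + 0.5) / n_quad
    lam = (np.cos(theta) + 1) * lam_max / 2
    return np.array([2 / n_quad * np.sum(H(lam) * np.cos(m * theta))
                     for m in range(M)])

def chebyshev_apply(L, x, c, lam_max):
    """Evaluate (c_0/2) x + sum_m c_m Tbar_m(L) x with the three-term recurrence,
    so that only matrix-vector products with L are needed (assumes len(c) >= 2)."""
    a = lam_max / 2
    T_prev, T_curr = x, (L @ x) / a - x              # Tbar_0(L)x and Tbar_1(L)x
    y = 0.5 * c[0] * T_prev + c[1] * T_curr
    for m in range(2, len(c)):
        T_next = 2 * ((L @ T_curr) / a - T_curr) - T_prev
        y += c[m] * T_next
        T_prev, T_curr = T_curr, T_next
    return y
```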
Example 12.
The Chebyshev polynomial approximation approach will be illustrated on a set of the transfer functions, $H_k(\lambda)$, defined in (28) and (29). For $K = 15$, these transfer functions are shown in Figure 9a. The transfer functions $H_k(\lambda)$ satisfy the OLA condition, $\sum_{k=0}^{K-1} H_k(\lambda) = 1$. We used the Chebyshev polynomials, $\bar{P}_{k,M-1}$, $k = 0, 1, \dots, K-1$, given by (40), to approximate each individual transfer function, $H_k(\lambda)$. Two polynomial orders are considered for the approximation, $M = 20$ and $M = 50$. The resulting Chebyshev polynomial approximations of the transfer functions shown in Figure 9a are given in Figure 9d for the polynomial order $M = 20$. In order to show the compliance of the obtained approximation with the imposed OLA condition, the value of $\sum_{k=0}^{K-1}\bar{P}_{k,M-1}(\lambda)$ is depicted by the dotted line in Figure 9d. As can be seen, these values are close to unity, meaning that the signal reconstruction from the LGFT calculated using the presented polynomial approximation will be stable and accurate.
The Chebyshev polynomial approximations of $H_k(\lambda)$, calculated in the way presented here, are applied to obtain the vertex–frequency analysis of the signal from Example 1, using the LGFT. Time–frequency representations of the harmonic signal from Example 1 are shown for both polynomial orders, for $M = 20$ in Figure 10e and for $M = 50$ in Figure 10f. As can be seen, the representation obtained with the lower-order polynomial approximation, $M = 20$ (Figure 10e), is less concentrated than the representation in Figure 10f obtained for $M = 50$. However, using higher orders, $(M-1)$, of the polynomial approximation increases the calculation complexity, since wider neighborhoods are required in the LGFT calculation. The experiment is repeated for the graph signal from Example 1. The two considered sets of Chebyshev polynomial-based approximations of the bandpass transfer functions $H_k(\lambda)$, $k = 0, 1, \dots, K-1$, from Figure 9a are now used in the calculation of the vertex–frequency representations in Figure 11e,f, for $M = 20$ and $M = 50$, respectively.
Example 13.
In order to present the Chebyshev polynomial approximation in more detail, and to give the exact values of the approximation coefficients, we further reduced the approximation order to $(M-1) = 5$. This order is then used to calculate the approximations of the bandpass functions, $H_k(\lambda)$, for every $k$ in the case of the raised cosine form given in (28), with $K = 10$ bands. The resulting approximation coefficients, $h_{p,k}$, used in the vertex-domain implementation defined by (39), are shown in Table 1.

5.2. Least Squares Approximation

Bandpass transfer functions, $H_k(\lambda)$, used in the calculation of vertex–frequency (time–frequency) representations, can also be approximated using the polynomial
$$P^{LS}_{k,M-1}(\lambda) = \bar{\alpha}_{0,k} + \bar{\alpha}_{1,k}\lambda + \dots + \bar{\alpha}_{M-1,k}\lambda^{M-1},$$
such that the squared error
$$\int_{0}^{\lambda_{\max}}\big|H_k(\lambda) - P^{LS}_{k,M-1}(\lambda)\big|^2\, d\lambda$$
is minimized. This approximation will be referred to as the Least Squares (LS) approximation. As in the case of the Chebyshev approximation, the interval $0 \le \lambda \le \lambda_{\max}$ is mapped to $[-1, 1]$ to follow the standard calculation procedure. This is achieved using the substitution $z = (2\lambda - \lambda_{\max})/\lambda_{\max}$. Upon introducing the variables
$$s_m = \int_{-1}^{1} z^m\, dz, \quad m = 0, 1, \dots, 2M-2,$$
and
$$b_m = \int_{-1}^{1} z^m H_k\big((z+1)\lambda_{\max}/2\big)\, dz,$$
$m = 0, 1, \dots, M-1$, we obtain
$$\begin{bmatrix} s_0 & s_1 & \cdots & s_{M-1} \\ s_1 & s_2 & \cdots & s_{M} \\ \vdots & \vdots & \ddots & \vdots \\ s_{M-1} & s_{M} & \cdots & s_{2M-2} \end{bmatrix}\begin{bmatrix} \alpha_{0,k} \\ \alpha_{1,k} \\ \vdots \\ \alpha_{M-1,k} \end{bmatrix} = \begin{bmatrix} b_0 \\ b_1 \\ \vdots \\ b_{M-1} \end{bmatrix}.$$
The matrix form of the previous relation is $\mathbf{S}\mathbf{a} = \mathbf{b}$. When this linear system of equations is solved, the approximation coefficients $\alpha_{0,k}, \alpha_{1,k}, \dots, \alpha_{M-1,k}$ are obtained. With $\lambda = 0.5(z+1)\lambda_{\max}$ we further have
$$\sum_{m=0}^{M-1}\alpha_{m,k} z^m = \sum_{m=0}^{M-1}\bar{\alpha}_{m,k}\lambda^m.$$
The approximation is then
$$P^{LS}_{k,M-1}(\mathbf{L}) = \bar{\alpha}_{0,k}\mathbf{I} + \bar{\alpha}_{1,k}\mathbf{L} + \bar{\alpha}_{2,k}\mathbf{L}^2 + \dots + \bar{\alpha}_{M-1,k}\mathbf{L}^{M-1}.$$
The vertex-domain implementation (39) of the spectral LGFT form, based on this approximation, is performed according to
$$\mathbf{s}_k = P^{LS}_{k,M-1}(\mathbf{L})\,\mathbf{x}, \quad \text{for every } k = 0, 1, \dots, K-1.$$
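A sketch of the LS coefficient calculation is given below. The moments $s_m$ have a closed form, the right-hand side $b_m$ is evaluated by simple midpoint quadrature, and a final substitution converts the coefficients from powers of $z$ to powers of $\lambda$; the transfer function H is again assumed to be a Python callable, and the moment matrix becomes ill-conditioned for large M.

```python
import numpy as np

def ls_poly_coeffs(H, lam_max, M, n_quad=2000):
    """Least-squares polynomial coefficients (in powers of lambda)
    approximating H(lambda) over [0, lam_max]."""
    # closed-form moments: int_{-1}^{1} z^m dz = 0 (odd m) or 2/(m+1) (even m)
    s = np.array([0.0 if m % 2 else 2.0 / (m + 1) for m in range(2 * M - 1)])
    Smat = np.array([[s[i + j] for j in range(M)] for i in range(M)])
    # right-hand side b_m by midpoint quadrature on [-1, 1]
    z = np.linspace(-1, 1, n_quad, endpoint=False) + 1.0 / n_quad
    lam = (z + 1) * lam_max / 2
    b = np.array([np.sum(z ** m * H(lam)) * 2.0 / n_quad for m in range(M)])
    alpha = np.linalg.solve(Smat, b)                 # coefficients in powers of z
    # substitute z = (2*lambda - lam_max)/lam_max to obtain powers of lambda
    poly_lam = np.polynomial.Polynomial(alpha)(
        np.polynomial.Polynomial([-1.0, 2.0 / lam_max]))
    return poly_lam.coef
```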

5.3. Legendre Polynomial

The least squares approximation using Legendre polynomials assumes minimization of
$$\int_{0}^{\lambda_{\max}}\big|H_k(\lambda) - P^{Leg}_{k,M-1}(\lambda)\big|^2\, d\lambda,$$
where
$$P^{Leg}_{k,M-1}(\lambda) = \bar{\beta}_{0,k}\phi_0(\lambda) + \bar{\beta}_{1,k}\phi_1(\lambda) + \dots + \bar{\beta}_{M-1,k}\phi_{M-1}(\lambda).$$
Polynomials
$$\phi_0(z) = 1, \quad \phi_1(z) = z, \quad \phi_2(z) = z^2 - \tfrac{1}{3}, \dots,$$
are referred to as Legendre polynomials. These polynomials satisfy the so-called Bonnet’s recursive relation
$$(m+1)\,\phi_{m+1}(z) = (2m+1)\,z\,\phi_m(z) - m\,\phi_{m-1}(z).$$
This case also assumes the normalization and shift of the interval $0 \le \lambda \le \lambda_{\max}$ to achieve the mapping to $[-1, 1]$. This is performed with $z = (2\lambda - \lambda_{\max})/\lambda_{\max}$ to obtain
$$\bar{\phi}_m(\lambda) = \phi_m(2\lambda/\lambda_{\max} - 1).$$
For each $m = 0, 1, \dots, M-1$, coefficients of the form
$$C_m = \int_{-1}^{1}\phi_m^2(z)\, dz,$$
are calculated, and are further used to obtain the polynomial coefficients of the form
$$\beta_{m,k} = \frac{1}{C_m}\int_{-1}^{1} H_k\big((z+1)\lambda_{\max}/2\big)\,\phi_m(z)\, dz.$$
Since, with $\lambda = 0.5(z+1)\lambda_{\max}$, the relation
$$\sum_{m=0}^{M-1}\beta_{m,k}\phi_m(z) = \sum_{m=0}^{M-1}\bar{\beta}_{m,k}\lambda^m,$$
holds, we obtain, in analogy with the previous cases, the approximation
$$P^{Leg}_{k,M-1}(\mathbf{L}) = \bar{\beta}_{0,k}\mathbf{I} + \bar{\beta}_{1,k}\mathbf{L} + \bar{\beta}_{2,k}\mathbf{L}^2 + \dots + \bar{\beta}_{M-1,k}\mathbf{L}^{M-1},$$
serving as a basis for the implementation of vertex–frequency analysis using the spectral LGFT form in (39) according to
$$\mathbf{s}_k = P^{Leg}_{k,M-1}(\mathbf{L})\,\mathbf{x}, \quad \text{for every } k = 0, 1, \dots, K-1.$$
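The Legendre-based coefficients can be obtained along the same lines; the sketch below generates the polynomials $\phi_m(z)$ with Bonnet's recursion and evaluates the integrals by midpoint quadrature (again, H is an assumed callable and the settings are illustrative). The conversion of the resulting coefficients to powers of $\lambda$, and hence of $\mathbf{L}$, then follows the same substitution as in the LS case.

```python
import numpy as np

def legendre_band_coeffs(H, lam_max, M, n_quad=2000):
    """Coefficients beta_m of the Legendre expansion of H(lambda) on [0, lam_max]."""
    z = np.linspace(-1, 1, n_quad, endpoint=False) + 1.0 / n_quad   # midpoint nodes
    lam = (z + 1) * lam_max / 2
    Hz = H(lam)
    phi_prev, phi = np.ones_like(z), z.copy()        # phi_0(z) and phi_1(z)
    betas = [np.sum(Hz * phi_prev) / np.sum(phi_prev ** 2)]
    for m in range(1, M):
        betas.append(np.sum(Hz * phi) / np.sum(phi ** 2))
        # Bonnet's recursion: (m+1) phi_{m+1} = (2m+1) z phi_m - m phi_{m-1}
        phi_prev, phi = phi, ((2 * m + 1) * z * phi - m * phi_prev) / (m + 1)
    return np.array(betas)
```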
Example 14.
The Legendre polynomial approximation will be illustrated on the transfer functions, $H_k(\lambda)$, defined by (28) and (29) for every $k = 0, 1, \dots, K-1$, with $K = 15$. Recall that the functions $H_k(\lambda)$ satisfy $\sum_{k=0}^{K-1} H_k(\lambda) = 1$. In this example, we consider approximations of these functions using the LS approximation (42), as well as the approximations based on the Legendre polynomials (44). For convenience, we also consider the approximation based on the Chebyshev polynomials.
To illustrate how the polynomial order influences the convergence of the approximations, we consider three orders of polynomials: M = 12 , M = 20 , and M = 40 . Shifted spectral transfer functions H k ( λ ) , k = 0 , 1 , , K 1 , which are being approximated, are shown in Figure 14a, Figure 15a, and Figure 16a. The approximations based on the Chebyshev polynomial are shown in Figure 14b, Figure 15b, and Figure 16b, for the considered polynomial orders. The approximations using the Legendre polynomial are shown in Figure 14c, Figure 15c, and Figure 16c, while the LS approximations are shown in Figure 14d, Figure 15d and Figure 16d, also for the three considered polynomial orders. It can be seen that even with M = 12 , the Chebyshev and LS based approximations are sufficiently narrow to enable clear distinction between the various spectral bands. The approximations using Legendre polynomials are shown to be less convenient for this purpose.
The fact that a polynomial order as low as M = 12 can be used in the calculation of time–frequency and vertex–frequency representations is indicated in Figure 17. Polynomial approximations from Figure 14b–d are used in the calculation of time–frequency representations for the harmonic signal from Example 1. As shown in Figure 17a,c,e, the signal components are clearly distinguishable in the representations obtained with the approximated band functions. The experiment was repeated for the graph signal from Example 1, and the obtained representations are presented in Figure 17b,d,f, for the spectral bandpass functions approximated using the considered polynomials: Chebyshev, Legendre, and LS. We can conclude that a higher polynomial order M used in the approximation increases the LGFT calculation complexity, since it uses a wider neighborhood of the considered vertex.

6. Inversion of the LGFT

Two approaches to the inversion of the classical STFT are used. One is based on the summation of the STFT values (overlap-and-add approach) and the other uses the weighted STFT values for the reconstruction (weighted overlap-and-add approach). These two approaches will be used in the vertex–frequency analysis as well.

6.1. Inversion by Summation (OLA Method)

For the LGFT, defined by (27) or in a polynomial form by (39), as
$$\mathbf{s}_k = H_k(\mathbf{L})\mathbf{x}, \quad \text{or} \quad \mathbf{s}_k = \sum_{p=0}^{M-1} h_{p,k}\mathbf{L}^p\mathbf{x},$$
the signal can be reconstructed by a summation over all spectral shifts
$$\sum_{k=0}^{K-1}\mathbf{s}_k = \sum_{k=0}^{K-1}\sum_{p=0}^{M-1} h_{p,k}\mathbf{L}^p\mathbf{x} = \sum_{k=0}^{K-1} H_k(\mathbf{L})\mathbf{x} = \mathbf{x}.$$
This relation holds when the OLA condition $\sum_{k=0}^{K-1} H_k(\mathbf{L}) = \mathbf{I}$ is satisfied.
The spectral-domain form of this condition is given by
$$\sum_{k=0}^{K-1} H_k(\boldsymbol{\Lambda}) = \mathbf{I},$$
since we may write $\sum_{k=0}^{K-1} H_k(\mathbf{L}) = \mathbf{U}\big(\sum_{k=0}^{K-1} H_k(\boldsymbol{\Lambda})\big)\mathbf{U}^T = \mathbf{U}\mathbf{U}^T = \mathbf{I}$, where the fact that $\mathbf{U}\mathbf{U}^T = \mathbf{U}^T\mathbf{U} = \mathbf{I}$ holds for the eigenvector matrix of a symmetric $\mathbf{L}$ is used. This condition is used when the transfer functions in Figure 5a are defined.
The element-wise form of the inversion relation (46) is
$$x(n) = \sum_{k=0}^{K-1} S(n,k).$$
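A short numerical sanity check of the OLA inversion can be written as follows, assuming an orthonormal eigenvector matrix U of a symmetric Laplacian and an array Hk of band samples H_k(lambda_p) that sum to one over k (all names are illustrative).

```python
import numpy as np

def ola_reconstruction(U, x, Hk):
    """Compute the LGFT bands in the spectral domain and reconstruct by summation."""
    X = U.T @ x                                           # GFT of the signal
    s = [U @ (Hk[k] * X) for k in range(Hk.shape[0])]     # band components s_k
    return np.sum(s, axis=0)                              # equals x under the OLA condition
```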

6.2. Kernel-Based LGFT Inversion (WOLA Method—Frames)

Another common approach to the inversion in classical time–frequency analysis and wavelet transforms is based on the Gabor (frame) expansions [12]. When applied to vertex–frequency analysis, it means that the signal is recovered from the LGFT, $S(m,k)$, by projecting the LGFT values back onto the vertex–frequency kernels, $H_{m,k}(n)$. It can be considered as the WOLA reconstruction.
If the LGFT is calculated by employing the spectrally shifted transfer functions defined by (17) and (18), the Gabor-framework-based inversion is obtained as
$$x(n) = \sum_{m=1}^{N}\sum_{k=0}^{K-1} S(m,k)\, H_{m,k}(n),$$
if the following condition
$$\sum_{k=0}^{K-1} H_k^2(\lambda_p) = 1,$$
is satisfied for all $\lambda_p$, $p = 1, 2, \dots, N$.
The signal reconstruction (inversion) formula in (48) follows when the condition in (49) is satisfied. This conclusion is obtained from the analysis
$$\sum_{m=1}^{N}\sum_{k=0}^{K-1} S(m,k)H_{m,k}(n) = \sum_{m=1}^{N}\sum_{k=0}^{K-1}\sum_{p=1}^{N} X(p)H_k(\lambda_p)u_p(m)\sum_{l=1}^{N} H_k(\lambda_l)u_l(m)u_l(n).$$
Having in mind that the eigenvectors are orthonormal, $\sum_{m=1}^{N} u_p(m)u_l(m) = \delta(p-l)$, we obtain the graph signal, $x(n)$, from
$$\sum_{k=0}^{K-1}\sum_{p=1}^{N} X(p)H_k(\lambda_p)H_k(\lambda_p)u_p(n) = x(n).$$
The condition is that the transfer functions, H k ( λ p ) , satisfy the WOLA condition in (49) for every λ p .
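The same projection can be verified numerically; a sketch of the spectral-domain WOLA reconstruction, under the same assumptions as in the previous sketch but with the band samples now normalized so that their squares sum to one over k, is:

```python
import numpy as np

def wola_reconstruction(U, x, Hk):
    """Reconstruct x from the LGFT by projecting S(., k) back onto the band-k kernels
    H_{m,k}(n) = sum_l H_k(lambda_l) u_l(m) u_l(n)."""
    X = U.T @ x
    x_rec = np.zeros(len(x))
    for k in range(Hk.shape[0]):
        S_k = U @ (Hk[k] * X)                  # LGFT values S(m, k) for band k
        x_rec += U @ (Hk[k] * (U.T @ S_k))     # projection onto the band-k kernels
    return x_rec                               # equals x when sum_k H_k^2(lambda_p) = 1
```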
The inversion formula for the spectral form of the wavelet transform is just a special case of the varying LGFT analysis. It follows from (48), with the WOLA condition given by (49), taking into account the wavelet transform notation of the transfer functions
$$G^2(\lambda_p) + \sum_{i=1}^{K-1} H^2(s_i\lambda_p) = 1.$$
The set of discrete scales, denoted by $s \in \{s_1, s_2, \dots, s_{K-1}\}$, is used for the wavelet transform calculation. The corresponding spectral transfer functions are $H(s_i\lambda_p)$, $i = 1, 2, \dots, K-1$. The father wavelet (low-pass scaling function) is denoted by $G(\lambda_p)$; it plays the role of the low-pass function $H_0(\lambda_p)$ in the LGFT.
Vertex-Varying Filtering. For the vertex-varying filtering of graph signals using the vertex–frequency representation, we can use a support function $B(m,k)$ in the vertex–frequency domain. The signal filtered in the vertex–frequency domain using the LGFT is then obtained as $S_f(m,k) = S(m,k)B(m,k)$.
The filtered signal, denoted by $x_f(n)$, is obtained by inverting $S_f(m,k)$; either the OLA or the WOLA inversion method can be used, depending on the condition satisfied by the transfer functions.
The simplest form of the filtering support function, $B(m,k)$, follows from hard-thresholding the noisy values of the calculated vertex–frequency representation, $S(m,k)$.
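A minimal sketch of such hard-threshold filtering is given below; the threshold value and array names are arbitrary assumptions, and the OLA summation is used for the reconstruction.

```python
import numpy as np

def vertex_varying_filter(S, threshold):
    """Hard-threshold the LGFT S(n, k) and reconstruct by OLA summation,
    valid when the bands satisfy the OLA condition sum_k H_k(lambda) = 1."""
    B = (np.abs(S) > threshold).astype(float)   # support function B(n, k)
    return (S * B).sum(axis=1)                  # x_f(n) = sum_k S_f(n, k)
```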

7. Support Uncertainty Principle in the LGFT

In classical time–frequency analysis, the window function is used to localize the signal in the joint time–frequency domain. As is well known, the uncertainty principle prevents an ideal simultaneous localization in the time and frequency domains.
The uncertainty principle is defined in various forms; for a survey, see [39,40,41,42]. Concentration measures, reviewed in [37], are closely related to all forms of the uncertainty principle. The product of the effective signal widths in the time and frequency domains is the basis for the common uncertainty principle form used in time–frequency signal analysis [13,43]. In quantum mechanics, this form is also known as the Robertson–Schrödinger inequality. In general signal theory, the most commonly used form is the support uncertainty principle, closely related to the sparsity support measure [37,42]. In classical signal analysis, the support uncertainty principle relates a discrete signal, $\mathbf{x}$, and its DFT, $\mathbf{X}$, as follows
$$\frac{1}{4}\big(\|\mathbf{x}\|_0 + \|\mathbf{X}\|_0\big)^2 \ge \|\mathbf{x}\|_0\,\|\mathbf{X}\|_0 \ge N.$$
In other words, the product of the number of nonzero signal values, $\|\mathbf{x}\|_0$, and the number of nonzero DFT coefficients, $\|\mathbf{X}\|_0$, is greater than or equal to the total number of signal samples, $N$. This form of the uncertainty principle will be generalized to the LGFT, and the LGFT form will produce the classical support uncertainty principle as a special case.
The support uncertainty principle in the LGFT can be derived in the same way as in the case of the graph Fourier transform. A simple derivation procedure, as in [44,45], will be followed. If a function
$$F_p(n,k) = X(k)H_p(\lambda_k)S(n,p)u_k(n),$$
is formed, then its sum over indices n and k,
$$\sum_{n}\sum_{k} F_p(n,k) = \sum_{n}\sum_{k} X(k)H_p(\lambda_k)S(n,p)u_k(n) = \sum_{n} S^2(n,p) = \sum_{k} X^2(k)H_p^2(\lambda_k) = E_p,$$
is equal to the energy of the LGFT, $S(n,p)$, in the given frequency band indexed by $p$. Assume next that the number of nonzero values of $S(n,p)$, for a given $p$, is equal to $\|\mathbf{s}_p\|_0$, while the number of nonzero values of the spectrally localized signal $X(k)H_p(\lambda_k)$ is $\|\mathbf{X}\mathbf{H}_p\|_0$. Using the Schwarz inequality
$$E_p^2 = \Big(\sum_{n}\sum_{k} X(k)H_p(\lambda_k)S(n,p)u_k(n)\Big)^2 \le \sum_{n}\sum_{k}\big(X(k)H_p(\lambda_k)S(n,p)\big)^2\sum_{n}\sum_{k} u_k^2(n),$$
we obtain
$$1 \le \sum_{n}\sum_{k} u_k^2(n) \le \|\mathbf{s}_p\|_0\,\|\mathbf{X}\mathbf{H}_p\|_0\,\mu^2,$$
since $\sum_{n}\sum_{k}\big(X(k)H_p(\lambda_k)S(n,p)\big)^2 = \sum_{k}\big(X(k)H_p(\lambda_k)\big)^2\sum_{n} S^2(n,p) = E_p^2$, and
$$\mu = \max_{n,k}\{|u_k(n)|\},$$
while the summation in $\sum_{n}\sum_{k} u_k^2(n)$ is performed only over the nonzero values of $X(k)H_p(\lambda_k)S(n,p)$, meaning that $\sum_{n}\sum_{k} u_k^2(n) \le \|\mathbf{s}_p\|_0\,\|\mathbf{X}\mathbf{H}_p\|_0\,\mu^2$.
Therefore, the support uncertainty principle assumes the following form
$$\frac{1}{4}\big(\|\mathbf{s}_p\|_0 + \|\mathbf{X}\mathbf{H}_p\|_0\big)^2 \ge \|\mathbf{s}_p\|_0\,\|\mathbf{X}\mathbf{H}_p\|_0 \ge \frac{1}{\mu^2}.$$
The first inequality follows from the general property that the arithmetic mean of positive numbers is greater than or equal to their geometric mean.
For the standard DFT, with $\mu = |u_k(n)| = 1/\sqrt{N}$ and $H_k(\lambda_p) = 1$, when $S(n,p) = x(n)$, we easily obtain the classical uncertainty principle (52).
Having in mind that H p ( λ k ) is band-limited to Q p nonzero samples, then
$$\|\mathbf{X}\mathbf{H}_p\|_0 \le Q_p,$$
meaning that the support of the LGFT satisfies
$$\|\mathbf{s}_p\|_0 \ge \frac{1}{Q_p\,\mu^2}.$$
The smallest possible number of nonzero samples in the LGFT is defined by
$$\|\mathbf{s}_p\|_0 \ge \frac{1}{Q\,\mu^2},$$
where $Q = \max_p\{Q_p\}$.
For example, if the classical Fourier analysis is considered, then the maximal absolute eigenvector value is $\mu = \max_{n,k}\{|u_k(n)|\} = \max_{n,k}\{|e^{j2\pi nk/N}/\sqrt{N}|\} = 1/\sqrt{N}$ and
$$\|\mathbf{s}_p\|_0 \ge N/Q.$$
If we select just one spectral frequency with a bandpass filter, $Q = 1$, then the duration of $S(n,p)$ must be $N$. If half of the spectral band is selected by the bandpass function, $Q = N/2$, then $\|\mathbf{s}_p\|_0 \ge 2$. Finally, if all spectral components are used, then a delta pulse is possible in the time domain; that is, for $Q = N$, we can have $\|\mathbf{s}_p\|_0 \ge 1$.
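For the classical DFT case, the support bound can be checked numerically; the sketch below uses an arbitrary sparse test signal and counts nonzero values up to a small tolerance.

```python
import numpy as np

N = 64
x = np.zeros(N); x[::8] = 1.0                   # a sparse train of pulses
X = np.fft.fft(x) / np.sqrt(N)                  # orthonormal DFT
nx = np.count_nonzero(np.abs(x) > 1e-10)        # ||x||_0
nX = np.count_nonzero(np.abs(X) > 1e-10)        # ||X||_0
print(nx * nX >= N, 0.25 * (nx + nX) ** 2 >= nx * nX)   # both bounds hold (True True)
```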

8. Analysis Based on Splitting Large Signals

The graph analysis suggested a polynomial approximation of the transfer functions and an implementation of vertex–frequency analysis using powers of the Laplacian applied to the signal. This means that a neighborhood of the considered sample, defined by the power of the Laplacian, is used. In classical signal analysis, this problem was approached by windowing a large signal and splitting the analysis into smaller nonoverlapping or overlapping time segments. This idea will now be generalized to graph signals, and it may then be used to define more general forms of signal splitting in classical analysis.
Assume that the signal, $x(n)$, and its domain of $N$ vertices (with even $N$), are split into two nonoverlapping segments (subsets) with $N/2$ vertices in each subdomain. Without loss of generality, assume that the vertex indices for the first half of the signal samples are from $1$ to $N/2$, and that the vertices for the second half of the signal samples are indexed by the remaining values, $N/2+1$ to $N$. Then we can write
$$\mathbf{X}_0 = \mathbf{U}^{-1}[x(1), x(2), \dots, x(N/2), 0, 0, \dots, 0]^T = \mathbf{U}^{-1}\mathbf{x}_0,$$
$$\mathbf{X}_1 = \mathbf{U}^{-1}[0, 0, \dots, 0, x(N/2+1), x(N/2+2), \dots, x(N)]^T = \mathbf{U}^{-1}\mathbf{x}_1,$$
$$\mathbf{X} = \mathbf{X}_0 + \mathbf{X}_1.$$
For undirected graphs, $\mathbf{U}^{-1} = \mathbf{U}^T$ holds. From the inverse GFT, $\mathbf{x}_0 = \mathbf{U}\mathbf{X}_0$ and $\mathbf{x}_1 = \mathbf{U}\mathbf{X}_1$, having in mind the positions of the zero values in $\mathbf{x}_0$ and $\mathbf{x}_1$, we obtain
$$\mathbf{0} = \mathbf{U}_{Lo}\mathbf{X}_0, \qquad \mathbf{0} = \mathbf{U}_{Up}\mathbf{X}_1,$$
where $\mathbf{U}_{Lo}$ and $\mathbf{U}_{Up}$ are the lower and upper parts of the matrix $\mathbf{U}$, consisting of the rows of $\mathbf{U}$ that correspond to the zero-value positions in $\mathbf{x}_0$ and $\mathbf{x}_1$, respectively. Splitting now the transform vector $\mathbf{X}_0$ into its even-indexed part, $(\mathbf{X}_0)_{Even}$, and odd-indexed part, $(\mathbf{X}_0)_{Odd}$, and doing the same for the transform vector $\mathbf{X}_1$, we obtain
$$\mathbf{0} = \mathbf{U}_{Lo,Even}(\mathbf{X}_0)_{Even} + \mathbf{U}_{Lo,Odd}(\mathbf{X}_0)_{Odd}, \qquad \mathbf{0} = \mathbf{U}_{Up,Even}(\mathbf{X}_1)_{Even} + \mathbf{U}_{Up,Odd}(\mathbf{X}_1)_{Odd}.$$
This means that there is no need to calculate the full GFT. It is sufficient to calculate the GFT of order N / 2 , and its values ( X 0 ) E v e n and ( X 1 ) E v e n or ( X 0 ) O d d and ( X 1 ) O d d . The remaining parts of the transform vectors could be obtained from (60). This can reduce the problem dimensionality.
The algorithm steps for this approach to the GFT calculation are as follows:
  • Calculate even-indexed elements of X 0 and X 1 , corresponding to two halves of the signal samples, as
    $$(\mathbf{X}_0)_{Even} = (\mathbf{U}^{-1})_{Up,Even}[x(1), x(2), \dots, x(N/2)]^T = (\mathbf{U}^{-1})_{Up,Even}\,\mathbf{x}_{Up},$$
    $$(\mathbf{X}_1)_{Even} = (\mathbf{U}^{-1})_{Lo,Even}[x(N/2+1), x(N/2+2), \dots, x(N)]^T = (\mathbf{U}^{-1})_{Lo,Even}\,\mathbf{x}_{Lo}.$$
  • Find the odd-indexed elements of the GFT using (60) as
    $$(\mathbf{X}_0)_{Odd} = -(\mathbf{U}_{Lo,Odd})^{-1}\mathbf{U}_{Lo,Even}(\mathbf{X}_0)_{Even}, \qquad (\mathbf{X}_1)_{Odd} = -(\mathbf{U}_{Up,Odd})^{-1}\mathbf{U}_{Up,Even}(\mathbf{X}_1)_{Even}.$$
  • Reconstruct the GFT elements of the whole signal,
    $$(\mathbf{X})_{Even} = (\mathbf{X}_0)_{Even} + (\mathbf{X}_1)_{Even}, \qquad (\mathbf{X})_{Odd} = (\mathbf{X}_0)_{Odd} + (\mathbf{X}_1)_{Odd}.$$
    Notice that all matrices used in this relation are of size N / 2 × N / 2 , while all vectors are of size N / 2 × 1 .
This approach can be applied to different splitting schemes (for example, we can split the signal into even- and odd-indexed samples, and then split the transform elements into an upper and a lower part). The same procedure can be used for splitting the signal into unequal sets of samples. The case of an overlapping windowed signal can easily be split into nonoverlapping problems; for example, if the window in the classical domain overlaps by half of the window width, the problem can be separated into two sets of nonoverlapping windows, since every other window does not overlap [46,47,48,49,50,51].
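The steps listed above can be verified with a direct numerical sketch, assuming an orthonormal eigenvector matrix U of a symmetric Laplacian and an even number of vertices; the even/odd labels refer to spectral indices and upper/lower to vertex indices, here taken in zero-based order, and all names are illustrative.

```python
import numpy as np

def gft_by_halves(U, x):
    """GFT of x obtained from two half-size problems, following the steps above."""
    N = len(x)
    ev, od = np.arange(0, N, 2), np.arange(1, N, 2)      # even/odd spectral indices
    up, lo = np.arange(0, N // 2), np.arange(N // 2, N)  # upper/lower vertex indices
    Ui = U.T                                             # U^{-1} = U^T (undirected graph)
    X0_ev = Ui[np.ix_(ev, up)] @ x[up]                   # even-indexed part of X_0
    X1_ev = Ui[np.ix_(ev, lo)] @ x[lo]                   # even-indexed part of X_1
    X0_od = -np.linalg.solve(U[np.ix_(lo, od)], U[np.ix_(lo, ev)] @ X0_ev)
    X1_od = -np.linalg.solve(U[np.ix_(up, od)], U[np.ix_(up, ev)] @ X1_ev)
    X = np.zeros(N)
    X[ev], X[od] = X0_ev + X1_ev, X0_od + X1_od
    return X     # matches U.T @ x whenever the half-size blocks are invertible
```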

9. Conclusions

Time–frequency analysis is a basis for extending the classical concepts to the vertex-varying spectral analysis of signals on graphs. Attention has been paid to linear signal transformations as the most important forms in classical signal analysis and graph signal processing. The spectral domain of these representations has been considered in detail since it provides an opportunity for a direct generalization of the well-developed time–frequency approaches to vertex–frequency analysis. Various polynomial forms are used in the implementation since they can be computationally very efficient in the case of very large graphs. The polynomial forms are developed in detail in graph signal processing, and can then be used in classical time–frequency analysis, with their simplicity being attractive for the implementation in the case of large time-domain signals. Reconstruction of the graph signals from the vertex–frequency representation has been reviewed, with some practical notes on the filtering, optimal parameter selection, uncertainty principle, and schemes for large signal division into smaller parts. All results are illustrated by numerous numerical examples.

Author Contributions

Conceptualization, L.S. and M.D.; methodology, L.S., J.L., D.M., M.B., C.R. and M.D.; software, L.S., M.B. and M.D.; validation, L.S., J.L., D.M., M.B., C.R. and M.D.; formal analysis, L.S., J.L., D.M., C.R. and M.D.; investigation, L.S., J.L., D.M., M.B. and M.D.; resources, D.M., M.B. and C.R.; data curation, L.S., J.L., D.M., M.B. and M.D.; writing—original draft preparation, L.S., J.L., D.M., M.B. and M.D.; writing—review and editing, L.S., J.L., D.M., M.B., C.R. and M.D.; visualization, L.S., D.M., M.B. and M.D.; supervision, L.S., J.L., C.R. and M.D.; project administration, L.S. and J.L.; funding acquisition, L.S., J.L. and M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Croatian Science Foundation under the project IP-2018-01-3739, IRI2 project “ABsistemDCiCloud” (KK.01.2.1.02.0179), and the University of Rijeka under the projects uniri-tehnic-18-17 and uniri-tehnic-18-15.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sandryhaila, A.; Moura, J.M. Discrete signal processing on graphs. IEEE Trans. Signal Process. 2013, 61, 1644–1656.
  2. Chen, S.; Varma, R.; Sandryhaila, A.; Kovačević, J. Discrete Signal Processing on Graphs: Sampling Theory. IEEE Trans. Signal Process. 2015, 63, 6510–6523.
  3. Sandryhaila, A.; Moura, J.M. Discrete Signal Processing on Graphs: Frequency Analysis. IEEE Trans. Signal Process. 2014, 62, 3042–3054.
  4. Ortega, A.; Frossard, P.; Kovačević, J.; Moura, J.M.; Vandergheynst, P. Graph signal processing: Overview, challenges, and applications. Proc. IEEE 2018, 106, 808–828.
  5. Djuric, P.; Richard, C. (Eds.) Cooperative and Graph Signal Processing: Principles and Applications; Academic Press: Cambridge, MA, USA, 2018.
  6. Hamon, R.; Borgnat, P.; Flandrin, P.; Robardet, C. Extraction of temporal network structures from graph-based signals. IEEE Trans. Signal Inf. Process. Netw. 2016, 2, 215–226.
  7. Marques, A.; Ribeiro, A.; Segarra, S. Graph Signal Processing: Fundamentals and Applications to Diffusion Processes. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017.
  8. Quinn, C.J.; Kiyavash, N.; Coleman, T.P. Directed information graphs. IEEE Trans. Inf. Theory 2015, 61, 6887–6909.
  9. Raginsky, M.; Jafarpour, S.; Harmany, Z.T.; Marcia, R.F.; Willett, R.M.; Calderbank, R. Performance bounds for expander-based compressed sensing in Poisson noise. IEEE Trans. Signal Process. 2011, 59, 4139–4153.
  10. Hamon, R.; Borgnat, P.; Flandrin, P.; Robardet, C. Transformation from Graphs to Signals and Back. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 111–139.
  11. Sandryhaila, A.; Moura, J.M. Big data analysis with signal processing on graphs: Representation and processing of massive data sets with irregular structure. IEEE Signal Process. Mag. 2014, 31, 80–90.
  12. Stanković, L.; Daković, M.; Thayaparan, T. Time-Frequency Signal Analysis with Applications; Artech House: London, UK, 2014.
  13. Cohen, L. Time-Frequency Analysis; Prentice Hall PTR: Englewood Cliffs, NJ, USA, 1995.
  14. Boashash, B. Time-Frequency Signal Analysis and Processing: A Comprehensive Reference; Academic Press: Cambridge, MA, USA, 2015.
  15. Shuman, D.I.; Ricaud, B.; Vandergheynst, P. Vertex-frequency analysis on graphs. Appl. Comput. Harmon. Anal. 2016, 40, 260–291.
  16. Shuman, D.I.; Ricaud, B.; Vandergheynst, P. A windowed graph Fourier transform. In Proceedings of the IEEE Statistical Signal Processing Workshop (SSP), Ann Arbor, MI, USA, 5–8 August 2012; pp. 133–136.
  17. Zheng, X.W.; Tang, Y.Y.; Zhou, J.T.; Yuan, H.L.; Wang, Y.L.; Yang, L.N.; Pan, J.J. Multi-windowed graph Fourier frames. In Proceedings of the IEEE International Conference on Machine Learning and Cybernetics (ICMLC), Ningbo, China, 9–12 July 2016; Volume 2, pp. 1042–1048.
  18. Tepper, M.; Sapiro, G. A short-graph Fourier transform via personalized pagerank vectors. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 4806–4810.
  19. Stanković, L.; Daković, M.; Sejdić, E. Vertex-Frequency Analysis: A Way to Localize Graph Spectral Components [Lecture Notes]. IEEE Signal Process. Mag. 2017, 34, 176–182.
  20. Cioacă, T.; Dumitrescu, B.; Stupariu, M.S. Graph-Based Wavelet Multiresolution Modeling of Multivariate Terrain Data. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 479–507.
  21. Hammond, D.K.; Vandergheynst, P.; Gribonval, R. The Spectral Graph Wavelet Transform: Fundamental Theory and Fast Computation. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 141–175.
  22. Behjat, H.; Van De Ville, D. Spectral Design of Signal-Adapted Tight Frames on Graphs. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 177–206.
  23. Shuman, D.I.; Narang, S.K.; Frossard, P.; Ortega, A.; Vandergheynst, P. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 2013, 30, 83–98.
  24. Stanković, L.; Mandic, D.; Daković, M.; Brajović, M.; Scalzo, B.; Li, S.; Constantinides, A.G. Data Analytics on Graphs Part I: Graphs and Spectra on Graphs. Found. Trends Mach. Learn. 2020, 13, 1–157.
  25. Stanković, L.; Mandic, D.; Daković, M.; Brajović, M.; Scalzo, B.; Li, S.; Constantinides, A.G. Data Analytics on Graphs Part II: Signals on Graphs. Found. Trends Mach. Learn. 2020, 13, 158–331.
  26. Stanković, L.; Mandic, D.; Daković, M.; Brajović, M.; Scalzo, B.; Li, S.; Constantinides, A.G. Data Analytics on Graphs Part III: Machine Learning on Graphs, from Graph Topology to Applications. Found. Trends Mach. Learn. 2020, 13, 332–530.
  27. Gray, R.M. Toeplitz and Circulant Matrices: A Review; NOW Publishers: Delft, The Netherlands, 2006.
  28. Dakovic, M.; Stankovic, L.J.; Sejdic, E. Local Smoothness of Graph Signals. Math. Probl. Eng. 2019, 2019, 3208569.
  29. Hammond, D.K.; Vandergheynst, P.; Gribonval, R. Wavelets on graphs via spectral graph theory. Appl. Comput. Harmon. Anal. 2011, 30, 129–150.
  30. Stanković, L.; Mandic, D.; Daković, M.; Kisil, I.; Sejdić, E.; Constantinides, A.G. Understanding the Basis of Graph Signal Processing via an Intuitive Example-Driven Approach. IEEE Signal Process. Mag. 2019, 36, 133–145.
  31. Stanković, L.; Mandic, D.; Daković, M.; Scalzo, B.; Brajović, M.; Sejdić, E.; Constantinides, A.G. Vertex-frequency graph signal processing: A comprehensive review. Digit. Signal Process. 2020, 107, 102802.
  32. Leonardi, N.; Van De Ville, D. Tight wavelet frames on multislice graphs. IEEE Trans. Signal Process. 2013, 61, 3357–3367.
  33. Behjat, H.; Leonardi, N.; Sörnmo, L.; Van De Ville, D. Anatomically-adapted Graph Wavelets for Improved Group-level fMRI Activation Mapping. NeuroImage 2015, 123, 185–199.
  34. Rustamov, R.; Guibas, L.J. Wavelets on graphs via deep learning. In Proceedings of the 27th Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–8 December 2013; pp. 998–1006.
  35. Jestrović, I.; Coyle, J.L.; Sejdić, E. A fast algorithm for vertex-frequency representations of signals on graphs. Signal Process. 2017, 131, 483–491.
  36. Masoumi, M.; Rezaei, M.; Hamza, A.B. Shape Analysis of Carpal Bones Using Spectral Graph Wavelets. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 419–436.
  37. Stanković, L. A measure of some time-frequency distributions concentration. Signal Process. 2001, 81, 621–631.
  38. Brajović, M.; Stanković, L.; Daković, M. On Polynomial Approximations of Spectral Windows in Vertex-Frequency Representations. In Proceedings of the 24th International Conference on Information Technology, Žabljak, Montenegro, 18–22 February 2020.
  39. Pasdeloup, B.; Gripon, V.; Alami, R.; Rabbat, M.G. Uncertainty principle on graphs. In Vertex-Frequency Analysis of Graph Signals; Springer: Berlin/Heidelberg, Germany, 2019; pp. 317–340.
  40. Perraudin, N.; Ricaud, B.; Shuman, D.I.; Vandergheynst, P. Global and local uncertainty principles for signals on graphs. APSIPA Trans. Signal Inf. Process. 2018, 7, 1–26.
  41. Tsitsvero, M.; Barbarossa, S.; Di Lorenzo, P. Signals on graphs: Uncertainty principle and sampling. IEEE Trans. Signal Process. 2016, 64, 539–554.
  42. Ricaud, B.; Torrésani, B. A survey of uncertainty principles and some signal processing applications. Adv. Comput. Math. 2014, 40, 629–650.
  43. Stankovic, L. Highly concentrated time-frequency distributions: Pseudo quantum signal representation. IEEE Trans. Signal Process. 1997, 45, 543–551.
  44. Stanković, L. The Support Uncertainty Principle and the Graph Rihaczek Distribution: Revisited and Improved. IEEE Signal Process. Lett. 2020, 27, 1030–1034.
  45. Elad, M.; Bruckstein, A.M. Generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inf. Theory 2002, 48, 2558–2567.
  46. Stanković, L. Digital Signal Processing with Selected Topics; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2015; ISBN 978-1514179987.
  47. Stanković, L.; Sejdić, E.; Daković, M. Vertex-frequency energy distributions. IEEE Signal Process. Lett. 2018, 25, 358–362.
  48. Stanković, L.; Sejdić, E.; Daković, M. Reduced interference vertex-frequency distributions. IEEE Signal Process. Lett. 2018, 25, 1393–1397.
  49. Agaskar, A.; Lu, Y.M. A spectral graph uncertainty principle. IEEE Trans. Inf. Theory 2013, 59, 4338–4356.
  50. Sakiyama, A.; Tanaka, Y. Oversampled graph Laplacian matrix for graph filter banks. IEEE Trans. Signal Process. 2014, 62, 6425–6437.
  51. Girault, B. Stationary graph signals using an isometric graph translation. In Proceedings of the 2015 23rd IEEE European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 1516–1520.
Figure 1. Time domain of periodic signals presented as: Circular unweighted directed graph (left) and an undirected graph (right), with N = 8 vertices (instants).
Figure 2. A circular undirected unweighted graph as the domain for classical signal analysis. Each of N = 100 vertices (instants) is connected to the predecessor and successor vertices (top). A general form of a graph, with N = 100 vertices (bottom).
Figure 3. Graph signal on a circular undirected unweighted graph (top), and a general graph (bottom). Vertices from $V_1$ are designated by blue dots, vertices from $V_2$ are marked by black dots, while vertices from $V_3$ are given by green dots.
Figure 4. Local smoothness of the signals from Figure 3. The values are shown for nonzero signal samples. The local smoothness in classical signal analysis is related to the instantaneous frequency as $\lambda(n) = 4\sin^2(\omega(n)/2)$.
Figure 5. The spectral domain transfer functions $H_k(\lambda_p)$, for a circular undirected and unweighted graph (classical analysis), $p = 1, 2, \dots, N$, $k = 0, 1, \dots, K-1$, that correspond to the terms of the binomial form for $K = 26$.
Figure 6. The spectral domain transfer functions $H_k(\lambda_p)$, for a general graph, $p = 1, 2, \dots, N$, $k = 0, 1, \dots, K-1$, that correspond to the terms of the binomial form for $K = 26$.
Figure 7. Time–frequency and vertex–frequency representations of the signals from Example 1: (a) Time–frequency analysis of the harmonic signal from Example 1, shown in Figure 3 (top), using the transfer functions from Figure 5 (bottom). (b) Vertex–frequency analysis of the general graph signal from Example 1, shown in Figure 3 (bottom), using the transfer functions from Figure 6 (bottom). (c) Time–frequency analysis of the harmonic complex signal from Example 1, using the transfer functions from Figure 5 (bottom). The complex signal is formed by adding two corresponding sine and cosine components. In all cases, the original representation is given on the left panel, while the reassigned value to the position of the distribution maximum is given on the right panel.
Figure 8. Transfer functions in the spectral eigenvalue and frequency domains for classical analysis: (top) The eigenvalue spectral domain transfer functions $H_k(\lambda_p)$, $p = 1, 2, \dots, N$, $k = 0, 1, \dots, K-1$, for $K = 15$ and $0 \le \lambda \le \lambda_{\max} = 4$. (bottom) The frequency spectral domain transfer functions $H_k(\omega_p)$, $p = 1, 2, \dots, N$, $k = 0, 1, \dots, K-1$, for $K = 15$ and $0 \le \omega \le \pi$. The horizontal axis represents the continuous variable $\lambda$ and the discrete values $\lambda_p$, $p = 1, 2, \dots, N$, corresponding to the eigenvalues and denoted by gray dots along the axis. The same notation is used for the frequency $\omega$ and its discrete values $\omega_p$ that correspond to $\lambda_p$.
Figure 9. Transfer functions in the spectral domain. (a) The transfer functions corresponding to the Hann form terms for K = 15. (b) The spectral index-varying (wavelet-like) transfer functions whose terms are of half-cosine form, with K = 11. (c) The spectral domain signal adaptive transfer functions with K = 17. (d) Approximations of transfer functions from panel (a) using Chebyshev polynomials, with H9(λ) being designated by the thick black lines, whereas gray markers indicate the corresponding discrete values.
Figure 10. Time–frequency representation of the three-component time-domain signal from Example 1, shown in Figure 3 (top), based on various transfer functions from Figures 5 and 9. The LGFT is calculated based on: (a) the transfer functions in Figure 5, (b) the transfer functions in Figure 9a, (c) the wavelet-like spectral transfer functions in Figure 9b, (d) the signal adaptive transfer functions from Figure 9c, (e) Chebyshev polynomial-based approximations from Figure 9d, with M = 20, and (f) Chebyshev polynomial-based approximations from Figure 9d, with M = 50.
Figure 11. Vertex–frequency representation of the three-component general graph signal from Example 1, shown in Figure 3 (bottom). The LGFT is calculated based on: (a) the transfer functions in Figure 5, (b) the transfer functions in Figure 9a, (c) the wavelet-like spectral transfer functions in Figure 9b, (d) the signal adaptive transfer functions from Figure 9c, (e) Chebyshev polynomial-based approximations from Figure 9d, with M = 20, and (f) Chebyshev polynomial-based approximations from Figure 9d, with M = 50.
Figure 12. Transfer functions formed using the square root of the Hann window (a,b) and of the Bartlett window (c,d), so that the reconstruction condition $\sum_{k=0}^{K-1} H_k^2(\lambda_p) = 1$ is satisfied, for a uniform splitting (a,c) of the spectral domain and a wavelet-like splitting (b,d).
Figure 13. Transfer functions formed using the Hann window (a,b) and the Bartlett window square root with a modified argument (c,d), using the argument mapping $v_x(x) = x^4(35 - 84x + 70x^2 - 20x^3)$. The WOLA reconstruction condition $\sum_{k=0}^{K-1} H_k^2(\lambda_p) = 1$ is satisfied.
Figure 14. Spectral bandpass transfer functions used for calculation of the LGFT and their polynomial approximations. (a) Spectral functions of the Hann form with K = 15. (b) Approximation of spectral transfer functions based on Chebyshev polynomial with M = 12. (c) Legendre-polynomial-based approximations of spectral transfer functions with M = 12. (d) Least squares approximation of spectral transfer functions with M = 12. For convenience, function H 8 ( λ ) is designated with a thick black line on each panel.
Figure 15. Spectral bandpass transfer functions used for calculation of the LGFT and their polynomial approximations. (a) Spectral functions of the Hann form with K = 15. (b) Approximation of spectral transfer functions based on Chebyshev polynomial with M = 20. (c) Legendre-polynomial-based approximations of spectral transfer functions with M = 20. (d) Least squares approximation of spectral transfer functions with M = 20. For convenience, function H 8 ( λ ) is designated with a thick black line on each panel.
Figure 16. Spectral bandpass transfer functions used for calculation of the LGFT and their polynomial approximations. (a) Spectral functions of the Hann form with K = 15. (b) Approximation of spectral transfer functions based on Chebyshev polynomial with M = 40. (c) Legendre-polynomial-based approximations of spectral transfer functions with M = 40. (d) Least squares approximation of spectral transfer functions with M = 40. For convenience, function H 8 ( λ ) is designated with a thick black line on each panel.
Figure 17. (a) Time–frequency analysis of the harmonic signal from Example 1, shown in Figure 3 (top), using the polynomial approximations of transfer functions from Figure 14b. (b) Vertex–frequency analysis of the general graph signal from Example 1, shown in Figure 3 (bottom), using the transfer functions from Figure 14b. (c) Time–frequency analysis of the harmonic complex signal from Example 1, using the transfer functions from Figure 14c. (d) Vertex–frequency analysis of the general graph signal from Example 1, using the transfer functions from Figure 14c. (e) Time–frequency analysis of the harmonic complex signal from Example 1, using the transfer functions from Figure 14d. (f) Vertex–frequency analysis of the general graph signal from Example 1, using the transfer functions from Figure 14d. A complex signal is formed by adding two corresponding sine and cosine components.
Table 1. Coefficients $h_{p,k}$, $p = 0, 1, \dots, M-1$, $k = 0, 1, \dots, K-1$, for the polynomial calculation of the LGFT, $\mathbf{s}_k$, of a signal $\mathbf{x}$, in various spectral bands, $k$, for $(M-1) = 5$ and $K = 10$, where $\mathbf{s}_k = (h_{0,k}\mathbf{I} + h_{1,k}\mathbf{L} + h_{2,k}\mathbf{L}^2 + h_{3,k}\mathbf{L}^3 + h_{4,k}\mathbf{L}^4 + h_{5,k}\mathbf{L}^5)\,\mathbf{x}$.

 k    h_{0,k}    h_{1,k}    h_{2,k}    h_{3,k}     h_{4,k}     h_{5,k}
 0    1.079      1.867      1.101      0.2885      0.03458     0.001548
 1    0.053      1.983      1.798      0.5744      0.07722     0.003723
 2    0.134      0.763      0.310      0.0222      0.00422     0.000460
 3    0.050      0.608      0.900      0.3551      0.05348     0.002762
 4    0.096      0.726      0.768      0.2475      0.03172     0.001424
 5    0.016      0.013      0.128      0.1047      0.02231     0.001424
 6    0.073      0.616      0.779      0.3228      0.05135     0.002762
 7    0.051      0.351      0.356      0.1146      0.01323     0.000460
 8    0.084      0.687      0.871      0.3751      0.06409     0.003723
 9    0.021      0.183      0.251      0.1172      0.02196     0.001419
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
