*Article* **Modeling Spectral Properties in Stationary Processes of Varying Dimensions with Applications to Brain Local Field Potential Signals**

#### **Raanju R. Sundararajan <sup>1,\*</sup>, Ron Frostig <sup>2</sup> and Hernando Ombao <sup>3</sup>**


Received: 22 October 2020; Accepted: 3 December 2020; Published: 5 December 2020

**Abstract:** In some applications, it is important to compare the stochastic properties of two multivariate time series that have unequal dimensions. A new method is proposed to compare the spread of spectral information in two multivariate stationary processes with different dimensions. To measure discrepancies, a frequency specific spectral ratio (FS-ratio) statistic is proposed and its asymptotic properties are derived. The FS-ratio is blind to the dimension of the stationary process and captures the proportion of spectral power in various frequency bands. Here we develop a technique to automatically identify frequency bands that carry significant spectral power. We apply our method to track changes in the complexity of a 32-channel local field potential (LFP) signal from a rat following an experimentally induced stroke. At every epoch (a distinct time segment from the duration of the experiment), the nonstationary LFP signal is decomposed into stationary and nonstationary latent sources and the complexity is analyzed through these latent stationary sources and their dimensions that can change across epochs. The analysis indicates that spectral information in the Beta frequency band (12–30 Hertz) demonstrated the greatest change in structure and complexity due to the stroke.

**Keywords:** multivariate time series; nonstationary; spectral matrix; local field potential

**MSC:** 62M10; 62M15

#### **1. Introduction**

Numerous applications require comparing two multivariate time series of unequal dimensions. Neuroscience experiments typically yield a stationary or nonstationary multivariate signal at each of several epochs (distinct non-overlapping successive time segments of the duration of the experiment). A popular approach to modeling such data decomposes the observed signal at every epoch into useful latent sources that can be stationary or nonstationary. These latent sources are lower-dimensional time series obtained by linear transforms of the components of the observed multivariate series, and they aim to capture important statistical properties of the observed series. At these epochs, dimension reduction techniques such as principal component analysis (PCA), factor modeling, independent component analysis (ICA) and stationary subspace analysis (SSA) are often applied to extract useful lower-dimensional latent sources. Artificially forcing the dimension of these latent sources to be the same across epochs results in a loss of important information, since changes in dimension could be indicative of useful brain processes such as learning (Fiecas and Ombao [1]).

*Entropy* **2020**, *22*, 1375; doi:10.3390/e22121375 www.mdpi.com/journal/entropy

Indeed, brain processes evolve across the entire recording period (Fiecas and Ombao [1], Ombao et al. [2]), leading to changes in the dimension of the latent sources across epochs. Moreover, the evolution of the dimension can itself serve as a feature in understanding how brain function evolves during an experiment. As another example in neuroscience, the aim in functional connectivity is to model dependence between different brain regions at various epochs in an experiment; see Cribben et al. [3], Cribben et al. [4], Cribben and Yu [5], Zhu and Cribben [6]. To mitigate the high dimensionality arising from signals on a densely voxelated cortical surface, parcellation produces disjoint regions of interest (ROI) of the brain, and signal summaries are obtained in each of these regions. Dependence measures between these ROIs are then computed using their respective signal summaries. In this pursuit of region-wise comparison of the brain, it is natural to encounter the problem of comparing multivariate processes, say from two different regions, that have unequal dimensions. In Wang et al. [7], the problem of modeling effective connectivity in a high-dimensional cortical surface signal is pursued, wherein a factor analysis is carried out on each ROI and vector autoregressive (VAR) models are used to jointly model the latent factors. Here again, one can end up with unequal numbers of optimal latent factors from different ROIs, thereby making comparisons challenging.

The application that motivates our methodology is the analysis of local field potentials (LFP) in an experiment that simulates ischemic stroke in humans (data source: stroke experiment conducted in the Neurobiology lab of co-author Ron Frostig; http://frostiglab.bio.uci.edu/Home.html). The dataset comprises 600 epochs worth of LFP recordings (each epoch is 1 s long) from 32 microelectrodes implanted in a rat's cortex. Figure 1 below depicts the rat's cortex and the locations of the 32 sensors implanted on the cortical surface from which the LFP signal is recorded. This 32-dimensional signal is our observed time series. A stroke is induced midway through the experiment (epoch 300) by clamping the medial cerebral artery. The goal is to develop a method that tracks changes in the complexity of signals following the stroke. From the observed LFP signal, useful lower-dimensional sources are extracted at each epoch, and we characterize complexity in the LFP through these latent sources and their varying dimensions across epochs.

**Figure 1.** (**A**) Visual representation of the 32 microelectrodes on the rat's cortex from which the local field potential (LFP) signal is recorded. (**B**) The distance between microelectrodes is 0.65 mm and the total distance between microelectrode 1 and microelectrode 8 is 3.9 mm.

Motivated by such applications, we propose a new method to compare spectral information in different multivariate stationary processes of varying dimensions. More specifically, the aim is to capture the amount of spectral information in various frequency bands in stationary processes of unequal dimensions. There are already many methods and models that describe the evolution of spectral information, but the key contribution of this paper is in modeling the evolution of the spectrum while allowing the dimension to also evolve over time. We introduce a frequency-specific spectral ratio statistic, which we call the FS-ratio, that measures the proportion of spectral power in various frequency bands. The FS-ratio can be used to (i) identify frequency bands where there are significant discrepancies between pre- and post-stroke epochs, (ii) identify frequency bands that account for most variation within pre-stroke (and post-stroke) epochs and (iii) identify the frequency bands that are consistent (vs. inconsistent) across all 600 epochs. One of the key features of this statistic is that it is blind to the dimension of the multivariate stationary process and can be used to compare successive epochs with possibly different dimensions in the stationary sources. Thus, the proposed FS-ratio is very useful in (a) discriminating between the pre- and post-stroke onset periods and (b) tracking changes over the entire course of the experiment while allowing for varying dimensions. In Section 2 we develop our FS-ratio statistic and derive its asymptotic properties. We return to the LFP dataset in Section 3 and discuss the usefulness of the proposed ratio statistic in discriminating between pre- and post-stroke onset. Section 4 concludes. Finally, we evaluate the performance of the proposed FS-ratio statistic through several simulation examples; the results are provided in Appendix A.

#### **2. Methodology**

In this section, we first describe our FS-ratio statistic and the method to analyze the evolution of spectral information in stationary processes with varying dimensions. Using the FS-Ratio statistic, a technique to locate the frequency bands carrying significant spectral power is discussed in Section 2.1.1. The theoretical properties of the proposed statistic along with the required assumptions are discussed in Section 2.1.2.

#### *2.1. The FS-Ratio Statistic*

Let *Xt* be a *d*1-variate time series and *Yt* be a *d*2-variate time series, where *d*1 ≠ *d*2 and *t* = 1, 2, ... , *T*. The spectral matrices of the two zero-mean multivariate stationary series are given by *fX*(*ω*) ∈ C<sup>*d*1×*d*1</sup> and *fY*(*ω*) ∈ C<sup>*d*2×*d*2</sup> for *ω* ∈ (−*π*, *π*). Here (−*π*, *π*) represents the normalized frequency range used to take care of aliasing of frequency components outside this range. This range (−*π*, *π*) is sometimes referred to as the angular frequency scale, with the frequency *π* being called the Nyquist or folding frequency. With the discrete Fourier transforms of *Xt* and *Yt* expressed as

$$J\_{X,T}(\omega) = \frac{1}{\sqrt{2\pi T}} \sum\_{t=1}^{T} X\_t e^{-it\omega} \quad \text{and} \quad J\_{Y,T}(\omega) = \frac{1}{\sqrt{2\pi T}} \sum\_{t=1}^{T} Y\_t e^{-it\omega},$$

respectively, the periodogram matrices *IX*,*T*(*ω*) ∈ C<sup>*d*1×*d*1</sup> and *IY*,*T*(*ω*) ∈ C<sup>*d*2×*d*2</sup> of the two series are obtained by

$$I\_{X,T}(\omega) = J\_{X,T}(\omega)J\_{X,T}(\omega)^\* \quad \text{and} \quad I\_{Y,T}(\omega) = J\_{Y,T}(\omega)J\_{Y,T}(\omega)^\*,\tag{1}$$

where *JX*,*T*(*ω*)<sup>∗</sup> denotes the conjugate transpose. The estimated spectral matrices, for *ω* ∈ (−*π*, *π*), are given by

$$\hat{f}\_X(\omega) = \frac{1}{T} \sum\_{j=-\lfloor \frac{T}{2} \rfloor + 1}^{\lfloor \frac{T}{2} \rfloor} K\_h(\omega - \omega\_j) \ I\_{X,T}(\omega\_j) \quad \text{and} \quad \hat{f}\_Y(\omega) = \frac{1}{T} \sum\_{j=-\lfloor \frac{T}{2} \rfloor + 1}^{\lfloor \frac{T}{2} \rfloor} K\_h(\omega - \omega\_j) \ I\_{Y,T}(\omega\_j), \tag{2}$$

where *ωj* = 2*πj*/*T* and *Kh*(·) = *h*<sup>−1</sup>*K*(·/*h*), where *K*(·) is a nonnegative symmetric kernel function and *h* denotes the bandwidth. Assumptions on the kernel and bandwidth that ensure uniform consistency of the estimated spectral matrices over *ω* ∈ (−*π*, *π*) are listed in Section 2.1.2.
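As an illustration of (1) and (2), the periodogram matrices and the kernel-smoothed spectral estimate can be sketched in Python. This is a minimal sketch, not the authors' implementation; the Epanechnikov kernel and the FFT-based frequency grid are our own illustrative choices.

```python
import numpy as np

def spectral_matrix_estimate(X, h):
    """Kernel-smoothed periodogram estimate of the spectral matrix, as in (2).

    X : (T, d) array holding one zero-mean stationary multivariate series.
    h : kernel bandwidth.
    Returns the Fourier frequencies omega_j = 2*pi*j/T (mapped into (-pi, pi])
    and the estimates f_hat(omega_j) of shape (T, d, d).
    """
    T, d = X.shape
    # DFT: J(omega_j) = (2*pi*T)^{-1/2} * sum_t X_t e^{-i t omega_j}
    J = np.fft.fft(X, axis=0) / np.sqrt(2.0 * np.pi * T)          # (T, d)
    # Periodogram matrices I(omega_j) = J(omega_j) J(omega_j)^*
    I = J[:, :, None] * J[:, None, :].conj()                      # (T, d, d)
    omega = 2.0 * np.pi * np.fft.fftfreq(T)                       # frequencies 2*pi*j/T

    def Kh(u):
        # Epanechnikov kernel K_h(u) = K(u/h)/h (an illustrative choice of K)
        v = u / h
        return np.where(np.abs(v) <= 1.0, 0.75 * (1.0 - v ** 2), 0.0) / h

    f_hat = np.empty_like(I)
    for k in range(T):
        w = Kh(omega - omega[k])                                  # kernel weights
        f_hat[k] = np.tensordot(w, I, axes=(0, 0)) / T            # (1/T) sum_j K_h(.) I(omega_j)
    return omega, f_hat
```

Each estimate f̂(*ω*) is Hermitian with a nonnegative diagonal, since it is a nonnegative combination of Hermitian rank-one periodogram matrices.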

The aim of this work is to compare the two spectral matrices *fX*(*ω*) and *fY*(*ω*) over a specific frequency range (*a*, *b*) for some 0 < *a* < *b* < *π*. The challenge here, however, is that the dimensions of the processes *Xt* and *Yt* are unequal and hence their spectral matrices have varying dimensions. We thus focus on the spread or distribution of spectral power in each of these stationary processes across different frequency ranges. More precisely, for the *d*1-variate series *Xt* we define the frequency-specific spectral (FS-ratio) parameter as

$$R\_{X,a,b} = \frac{r\_{X,a,b}}{r\_{X,0,\pi}} = \frac{\int\_a^b ||\text{vec}(f\_X(\omega))||\_2^2 d\omega}{\int\_0^{\pi} ||\text{vec}(f\_X(\omega))||\_2^2 d\omega} \tag{3}$$

for some frequency band (*a*, *b*) ⊂ (0, *π*), where *vec*(·) denotes vectorization of a matrix into a single column vector and || · ||<sub>2</sub><sup>2</sup> is the squared Euclidean norm. Observe that *RX*,*a*,*b* ∈ (0, 1) can be viewed as a measure that captures the proportion of spectral power found in the frequency range (*a*, *b*). Similarly, using the spectral matrix *fY*(*ω*), *RY*,*a*,*b* ∈ (0, 1) can be defined for the *d*2-variate series *Yt*. Comparisons can now be made between the parameters *RX*,*a*,*b* and *RY*,*a*,*b* to understand the amount of spectral power in the frequency range (*a*, *b*) for the two multivariate series with unequal dimensions.

The data analogue of the FS-ratio parameter in (3) is then given by the FS-ratio statistic:

$$\hat{R}\_{X,a,b} = \frac{\hat{r}\_{X,a,b}}{\hat{r}\_{X,0,\pi}} = \frac{\int\_a^b ||\text{vec}(\hat{f}\_X(\omega))||\_2^2 d\omega}{\int\_0^\pi ||\text{vec}(\hat{f}\_X(\omega))||\_2^2 d\omega} \tag{4}$$

for some 0 < *a* < *b* < *π*. Similarly, the data analogue *R*ˆ*Y*,*a*,*b* can be obtained for the *d*2-variate series *Yt*. The asymptotic properties of the quantities *r*ˆ*X*,*a*,*b* and *R*ˆ*X*,*a*,*b* are discussed in Section 2.1.2. In neuroscience applications such as the one in Section 3, pre-defined frequency bands such as Theta, Alpha, Beta and Gamma are often used to understand the distribution of spectral power across frequency bands. As opposed to using pre-specified frequency bands, in Section 2.1.1 below we provide a data-driven technique to locate the various frequency ranges (*a*, *b*) that carry significant spectral power.
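To make (4) concrete, the FS-ratio can be computed from any estimated spectral matrix by a Riemann sum over the Fourier frequencies in (0, *π*). The sketch below is our own illustration (not the paper's code); `omega` and `f_hat` are assumed to come from a spectral estimate of the form (2).

```python
import numpy as np

def fs_ratio(omega, f_hat, a, b):
    """FS-ratio statistic (4): proportion of total spectral power in (a, b).

    omega : frequencies in (-pi, pi]; f_hat : (T, d, d) estimated spectral
    matrices.  The integrals are approximated by Riemann sums over the
    Fourier frequencies lying in (0, pi); the grid spacing cancels in the ratio.
    """
    power = np.sum(np.abs(f_hat) ** 2, axis=(1, 2))     # ||vec(f_hat(omega))||_2^2
    pos = (omega > 0) & (omega < np.pi)
    band = pos & (omega > a) & (omega < b)
    return power[band].sum() / power[pos].sum()
```

Because `power` collapses the *d* × *d* matrix to a scalar at each frequency, series of unequal dimensions can be compared directly through their FS-ratios.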

#### 2.1.1. Finding Frequency Bands of Interest

In this section we describe our technique that uses the FS-Ratio statistic to find the frequency bands of interest. More precisely, we aim to locate the intervals (*a*, *b*) used in (3) and (4) wherein the multivariate time series has significant proportions of spectral power.

Let *Xt* be a *d*1-variate zero-mean second order stationary time series with its *d*<sup>1</sup> × *d*<sup>1</sup> spectral matrix given by *fX*(*ω*). With the FS-Ratio parameter defined in (3), we consider the scan parameter

$$\lambda\_{X,a} = 1 - \frac{R\_{X,0,a-\Delta}}{R\_{X,0,a}} = 1 - \frac{\int\_0^{a-\Delta} ||\text{vec}(f\_X(\omega))||\_2^2 d\omega}{\int\_0^a ||\text{vec}(f\_X(\omega))||\_2^2 d\omega} \tag{5}$$

for a small Δ > 0 and 0 < *a* < *π*. For the data analogue of the parameter above we consider a discretized sequence of frequency points 0 < *a*<sup>1</sup> < *a*<sup>2</sup> < ... < *aQ* < *π* and evaluate the scan statistic as

$$\widehat{\lambda}\_{X,a\_j} = 1 - \frac{\widehat{R}\_{X,0,a\_j}}{\widehat{R}\_{X,0,a\_{j+1}}} \tag{6}$$

for *j* = 1, 2, ... , *Q* − 1. A plot of the scan statistic *λ*ˆ*X*,*aj* across the frequency points *aj* indicates the frequency ranges over which the spectral matrix of *Xt* has significant proportions of spectral power. Typically, one notices upward bumps in these plots over frequency ranges that carry significant spectral power; see Example 1 below and the top panel of Figure 2. Similarly, for the *d*2-variate series *Yt* one can define *λY*,*a*, find the estimated version *λ*ˆ*Y*,*aj* and plot it across the frequency points *aj*. Comparisons can then be made between the series *Xt* and *Yt* using these plots. The choices of Δ in (5) and of the number of points *Q* in (6) depend on the application under consideration: some applications demand attention to spectral power in very small frequency ranges and others do not. A multiscale approach can also be used, where one obtains plots of the scan statistic *λ*ˆ*X*,*aj* across the frequency points *aj* for a sequence of Δ values; visual inspection of these plots helps detect frequency ranges wherein the upward bumps are consistent across most values of Δ. If we let the interval (0, 0.5) correspond to the interval (0, *π*), our simulation study and real data analysis indicate that Δ = 0.01 and *Q* = 49 are reasonable choices.
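The scan statistic in (6) can be sketched using the same cumulative power sums. This is our own illustration; `omega` and `f_hat` are assumed to come from a spectral estimate such as (2).

```python
import numpy as np

def scan_statistic(omega, f_hat, grid):
    """Scan statistic (6): lambda_hat(a_j) = 1 - R_hat(0, a_j) / R_hat(0, a_{j+1}).

    grid : increasing frequency points 0 < a_1 < ... < a_Q < pi.
    Returns the scan statistic at a_1, ..., a_{Q-1}; upward bumps flag
    frequency ranges carrying significant proportions of spectral power.
    """
    power = np.sum(np.abs(f_hat) ** 2, axis=(1, 2))
    pos = (omega > 0) & (omega < np.pi)
    total = power[pos].sum()
    # R_hat(0, a): cumulative proportion of spectral power below frequency a
    R = np.array([power[pos & (omega < a)].sum() / total for a in grid])
    return 1.0 - R[:-1] / R[1:]
```

For a spectrum whose power is concentrated in a narrow band, the largest value of the scan statistic occurs at the grid point just below that band, producing the upward bumps described above.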

**Figure 2. Example 1** (**Top**) Plot of the average scan statistic *λ*ˆ*X*,*aj* for epochs *i* < 300 (**left**) and *i* ≥ 300 (**right**) at a discretized sequence of frequency points 0 < *a*1 = 0.01 < *a*2 = 0.02 < ... < *a*49 = 0.49 < 0.5 (Δ = 0.01). Here (0, 0.5) corresponds to the interval (0, *π*). (**Bottom**) Plot of the average of the statistic *R*ˆ*X*,0,*aj* at the same frequency points.

We next provide a simple illustration of the scan statistic *λX*,*<sup>a</sup>* through the following simulation example and show how it is useful in detecting important frequency ranges. The simulation scheme in this illustration is designed to mimic the real data situation in Section 3. There the entire duration of the neuroscience experiment is divided into non-overlapping successive time segments (a total of *N* epochs). The multivariate stationary processes of interest in these *N* epochs tend to have different dimensions and we attempt to mimic that scenario.

**Example 1.** *We consider N stationary processes, X*1,*t*, *X*2,*t*,..., *XN*,*t, with the series Xi*,*<sup>t</sup> given by*

$$X\_{i,t} = \begin{cases} \ V\_{i,t}^{(1)} & \text{if } i < \frac{N}{2} \\\ V\_{i,t}^{(2)} & \text{if } i \ge \frac{N}{2} \end{cases} \tag{7}$$

*where i* = 1, 2, ... , *N* = 600 *epochs and t* = 1, 2, ... , *T* = 1000*. Here V*<sup>(1)</sup>*i*,*t* ∈ R<sup>3</sup> *and its components are given by v*0,*t*+*k*−1 + *v*1,*t*+*k*−1 *for k* = 1, 2, 3*, where v*0,*t follows an AR*(2) *with coefficients* (−0.8, −0.7) *and v*1,*t follows an AR*(2) *with coefficients* (0.25, −0.75)*. The components of V*<sup>(2)</sup>*i*,*t* ∈ R<sup>2</sup> *are given by v*2,*t*+*k*−1 *for k* = 1, 2*, where v*2,*t follows an AR*(2) *with coefficients* (1.25, −0.75)*.*

We consider a discretized set of frequency points {*a*1, *a*2, ... , *aQ*} in the interval (0, *π*). At each point *aj* we evaluate the average of the scan statistic *λ*ˆ*X*,*aj* over epochs 1–299 and over epochs 300–600. More precisely, at each frequency point *aj* and at each epoch we obtain *λ*ˆ*X*,*aj* and then compute averages of these quantities over the respective epochs. In the top panel of Figure 2 we plot this average scan statistic. For epochs 1–299, *V*<sup>(1)</sup>*i*,*t* from (7) is a combination of two AR(2) processes with spectral density peaks at roughly 0.22 and 0.33. The top left plot in Figure 2 shows the scan statistic exhibiting bumps around those frequencies. Similarly, for epochs 300–600, *V*<sup>(2)</sup>*i*,*t* from (7) is generated from an AR(2) process with a peak at roughly 0.12, and the top right plot in Figure 2 shows the scan statistic exhibiting a bump around that frequency.
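The quoted peak locations can be checked against the standard AR(2) peak-frequency formula cos *ω*0 = −*φ*1(1 − *φ*2)/(4*φ*2), obtained by minimizing the denominator of the AR(2) spectral density. The sketch below is our own illustration; it reports peaks on the normalized (0, 0.5) scale used in Figure 2.

```python
import numpy as np

def ar2_peak(phi1, phi2):
    """Peak of the AR(2) spectral density f(w) ~ |1 - phi1 e^{-iw} - phi2 e^{-2iw}|^{-2},
    returned on the normalized (0, 0.5) scale, i.e. w0 / (2*pi).
    The peak frequency solves cos(w0) = -phi1 * (1 - phi2) / (4 * phi2)."""
    return np.arccos(-phi1 * (1.0 - phi2) / (4.0 * phi2)) / (2.0 * np.pi)

# Peaks for the three AR(2) components in Example 1:
peaks = {
    "(-0.8, -0.7)": ar2_peak(-0.8, -0.7),    # ~0.33
    "(0.25, -0.75)": ar2_peak(0.25, -0.75),  # ~0.22-0.23
    "(1.25, -0.75)": ar2_peak(1.25, -0.75),  # ~0.12
}
```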

In the bottom panel of Figure 2 we plot averages of the statistic *R*ˆ*X*,0,*aj* for *j* = 1, 2, ... , *Q* − 1. We observe that this statistic is not as capable as the scan statistic *λ*ˆ*X*,*aj* in bringing out the frequency ranges with significant spectral proportions.

#### 2.1.2. Theoretical Properties of the FS-Ratio Statistic

In this section we list the required assumptions and discuss the asymptotic properties of the statistics *r*ˆ*X*,*a*,*b* and the FS-ratio *R*ˆ*X*,*a*,*b*.

**Assumption 1.** *Let Zt* = (*Xt*, *Yt*), *t* ∈ Z*, be a* (*d*1 + *d*2)*-variate zero-mean second-order stationary time series. For any k* > 0*, the kth order cumulants of Zt satisfy*

$$\sum\_{u\_1, u\_2, \dots, u\_{k-1} \in \mathbb{Z}} \left(1 + |u\_j|^2\right) \left| c\_{b\_1, b\_2, \dots, b\_k}(u\_1, u\_2, \dots, u\_{k-1}) \right| < \infty$$

*for j* = 1, 2, ... , *k* − 1 *and b*1, *b*2, ... , *bk* = 1, 2, ... , *d* = *d*1 + *d*2*, where cb*1,*b*2,...,*bk* (*u*1, *u*2, ... , *uk*−1) *is the kth order joint cumulant of Zb*1,*u*1 , ... , *Zbk*−1,*uk*−1 , *Zbk*,0 *as defined in Brillinger [8].*

Please note that the *k*th order cumulant is given by *cb*1,*b*2,...,*bk* (*u*1, *u*2, ... , *uk*−1) = *cum*{*Zb*1,*u*1 , ... , *Zbk*−1,*uk*−1 , *Zbk*,0}, where *Zbr*,*us* refers to component *br* of the vector *Zus* with *us* being the time point; see Theorem 2.3.2 of Brillinger [8]. For example, when *k* = 2, the second-order cumulant *cum*{*Zb*1,*u*1 , *Zb*2,*u*2 } = *cov*(*Zb*1,*u*1 , *Zb*2,*u*2 ) is the covariance between those two random variables.

**Assumption 2.** *(a). The kernel function K*(·) *is bounded, symmetric, nonnegative and Lipschitz-continuous with compact support* [−*π*, *π*] *and*

$$\int\_{-\pi}^{\pi} K(\omega)d\omega = 1.$$

*Moreover, K*(*ω*) *has a continuous Fourier transform k*(*u*) *such that*

$$\int k^2(u)du < \infty \quad \text{and} \quad \int k^4(u)du < \infty.$$

*(b). The bandwidth h is such that h*<sup>9/2</sup>*T* → 0 *and h*<sup>2</sup>*T* → ∞ *as T* → ∞*.*

**Remark 1.** *Assumptions 1 and 2 above are the same as in Eichler [9]: the first requires existence of moments of all orders for Zt and the second ensures consistency of the estimated spectral matrix. It must be noted that the assumptions on the kernel and bandwidth are primarily for establishing the asymptotic result in* (13) *and can be weakened for Theorem 1.*

**Theorem 1.** *Suppose that Assumptions 1 and 2 are satisfied. Then as T* → ∞*,*

$$\hat{r}\_{X,a,b} \stackrel{P}{\rightarrow} r\_{X,a,b} = \int\_{a}^{b} \sum\_{r,s=1}^{d\_1} f\_{X,rs}(\omega) \overline{f\_{X,rs}(\omega)} \, d\omega \tag{8}$$

*where fX*(*ω*) = (*fX*,*rs*)*r*,*s*=1,2,...,*d*1 *is the d*1 × *d*1 *spectral matrix of Xt and P* −→ *denotes convergence in probability. Furthermore, let* Π(*a*,*b*) = (0, *π*) \ (*a*, *b*) *for some* 0 < *a* < *b* < *π. If rX*,*a*,*b* > 0 *and rX*,Π(*a*,*b*) > 0*,*

$$\hat{R}\_{X,a,b} \stackrel{P}{\rightarrow} \left(1 + \frac{r\_{X,\Pi\_{(a,b)}}}{r\_{X,a,b}}\right)^{-1} \tag{9}$$

*where rX*,Π(*a*,*b*) = ∫<sub>Π(*a*,*b*)</sub> ||*vec*(*fX*(*ω*))||<sub>2</sub><sup>2</sup> *dω.*

**Proof.** See Appendix B for details of the proof.

Please note that in the finite-sample situations explored in the simulation examples in Appendix A and in the real data application in Section 3, we use the block bootstrap technique of Politis and Romano [10] for resampling from a stationary process. This is done to obtain sample quantiles of the FS-ratio statistic *R*ˆ*X*,*a*,*b*.
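A minimal sketch of the Politis–Romano stationary bootstrap used for this resampling follows. This is our own illustration, not the paper's code; the expected block length 1/*p* is a tuning parameter whose value here is an arbitrary assumption.

```python
import numpy as np

def stationary_bootstrap(X, p=0.05, rng=None):
    """One stationary-bootstrap resample of a (T, d) stationary series:
    concatenate blocks with uniformly random starts and Geometric(p)
    lengths (mean block length 1/p), wrapping around the end of the series."""
    rng = np.random.default_rng() if rng is None else rng
    T = X.shape[0]
    idx = np.empty(T, dtype=int)
    t = 0
    while t < T:
        start = rng.integers(T)                      # random block start
        length = min(int(rng.geometric(p)), T - t)   # geometric block length
        idx[t:t + length] = (start + np.arange(length)) % T
        t += length
    return X[idx]
```

Recomputing the FS-ratio statistic on many such resamples gives an empirical sampling distribution, from which sample quantiles can be read off.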

**Remark 2.** *In a special case wherein the dimensions of the two processes are the same (d*<sup>1</sup> = *d*2*), we wish to test for the equality of spectral matrices of same dimensions over an interval* 0 < *a* < *b* < *π. Let us assume d*<sup>1</sup> = *d*<sup>2</sup> *and d* = *d*<sup>1</sup> + *d*<sup>2</sup> *and define the d* × *d spectral matrix of Zt* = (*Xt*,*Yt*) *as*

$$f\_{\mathbf{Z}}(\omega) = \begin{bmatrix} f\_{\mathbf{Z},11}(\omega) & f\_{\mathbf{Z},12}(\omega) \\ f\_{\mathbf{Z},21}(\omega) & f\_{\mathbf{Z},22}(\omega) \end{bmatrix} \tag{10}$$

*where the d*<sup>1</sup> × *d*<sup>1</sup> *matrix fZ*,12(*ω*) *is the cross-spectral matrix of the processes Xt and Yt and fZ*,11(*ω*) *and fZ*,22(*ω*) *are the spectral matrices of Xt and Yt respectively. We consider testing*

$$H\_0 \;:\; f\_X(\omega) = f\_Y(\omega) \;\forall\;\omega \in (a,b) \tag{11}$$

*where* 0 < *a* < *b* < *π. The test statistic is*

$$
\hat{D}\_{X,Y} = \int\_a^b ||\text{vec}(\hat{f}\_X(\omega) - \hat{f}\_Y(\omega))||\_2^2 d\omega. \tag{12}
$$

*The L*<sup>2</sup> *norm above in* (12) *on the spectral matrices is similar to the statistics considered in Eichler [9] and Dette and Paparoditis [11], wherein the problem of testing equality of spectral matrices is discussed. Suppose that Assumptions 1 and 2 are satisfied; then an application of Theorem 3.5 of Eichler [9] yields, under H*0*,*

$$2\pi T\sqrt{h}\,\hat{D}\_{X,Y} - \frac{\mu\_{XY}}{\sqrt{h}} \xrightarrow{D} N(0, \sigma\_{XY}^2) \tag{13}$$

*where*

$$\mu\_{XY} = A\_K \int\_{-\pi}^{\pi} \mathbf{1}\_{\omega \in (a, b)} \left( \sum\_{p\_1, p\_2 = 1}^{2} \left( -1 + 2\delta\_{p\_1 p\_2} \right) \text{tr} (f\_{Z, p\_1 p\_2} (\omega))^2 \right) d\omega \tag{14}$$

*and*

$$\sigma\_{XY}^2 = B\_K \int\_{-\pi}^{\pi} \mathbf{1}\_{\omega \in (a,b)} \left( \sum\_{p\_1, p\_2, p\_3, p\_4 = 1}^2 (-1 + 2\delta\_{p\_1 p\_2}) \left( -1 + 2\delta\_{p\_3 p\_4} \right) \text{tr} \left( f\_{Z, p\_1 p\_3} (\omega) \overline{(f\_{Z, p\_2 p\_4}(\omega))^T} \right)^2 \right) d\omega \tag{15}$$

*where <sup>D</sup>* −→ *denotes convergence in distribution,*

$$A\_K = \int\_{-\pi}^{\pi} K^2(v) dv, \quad B\_K = 4 \int\_{a-\pi}^{b+\pi} \left( \int\_{-\pi}^{\pi} K(u) K(u+v) du \right)^2 dv$$

*and δrs* = *I*(*r* = *s*) *is the Kronecker delta and tr(*·*) denotes the trace of a matrix.*
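When *d*1 = *d*2, the test statistic in (12) reduces to a Riemann sum over Fourier frequencies. The sketch below is our own illustration; `fX_hat` and `fY_hat` are assumed to be estimates of the form (2) on a common frequency grid.

```python
import numpy as np

def spectral_distance(omega, fX_hat, fY_hat, a, b):
    """L2 statistic (12): integral over (a, b) of ||vec(fX_hat - fY_hat)||_2^2,
    approximated by a Riemann sum over the Fourier frequencies in (a, b)."""
    band = (omega > a) & (omega < b)
    diff = fX_hat[band] - fY_hat[band]
    d_omega = 2.0 * np.pi / len(omega)        # spacing of the Fourier frequencies
    return float(np.sum(np.abs(diff) ** 2) * d_omega)
```

The statistic is zero when the two estimates agree on (*a*, *b*), and for a constant discrepancy *c* in every entry of a *d* × *d* matrix it is approximately (*b* − *a*)*d*<sup>2</sup>*c*<sup>2</sup>.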

**Remark 3.** *In the neuroscience application in Section 3, the entire duration of the experiment is divided into non-overlapping successive time segments (a total of N epochs). Each epoch results in a multivariate stationary process of interest, with the dimensions of these processes varying across epochs. Letting X*1,*t*, *X*2,*t*, ... , *XN*,*t be the N stationary processes at these epochs, one can obtain the FS-ratio statistics R*ˆ*Xi*,*a*,*b, for i* = 1, 2, ... , *N, and view this as a series whose time index is the epoch index i. Applying change point detection to this series to formally test for the significance of change points would require a divergence measure between R*ˆ*Xi*,*a*,*b and R*ˆ*Xj*,*a*,*b when i* ≠ *j. Different norms can be used to construct this divergence measure, and it would serve as the test statistic. Large sample distributions of this statistic would provide the critical values necessary for the test. One of the issues here would be in dealing with differing estimation errors in the FS-ratio statistics when the dimensions of the two series are very different; this needs further investigation.*

#### **3. Analysis of Complexity of Rat Local Field Potentials in a Stroke Experiment**

In this section, we investigate the ability of the FS-ratio statistic to identify changes in the spectral properties of the local field potential (LFP) of a rat (the LFP data come from the stroke experiment conducted at the Frostig laboratory at the University of California, Irvine: http://frostiglab.bio.uci.edu/Home.html). The aim is to identify changes in the complexity and structure of the multivariate cortical signal over the course of the experiment. It is also of interest to understand the differential roles of frequency bands and to determine the specific bands that demonstrate the most significant changes due to the stroke.

At 32 locations on the rat's cortex, microelectrodes are inserted: 4 layers in the cortex, at 300 μm, 700 μm, 1100 μm and 1500 μm, with 8 microelectrodes lined up in each of the 4 layers. We look at the field potential at the 32 locations recorded for a total duration of 10 min. This 10 min duration is divided into *N* = 600 epochs (distinct successive non-overlapping time segments of the duration of the experiment), with each epoch comprising 1 s worth of data. The sampling rate is 1000 Hz, resulting in *T* = 1000 observations per epoch. Midway through the recording period (after epoch 300), a stroke is artificially induced by clamping the medial cerebral artery that supplied blood to the recorded area.

As a first step in our analysis, we applied a component-wise univariate test of second-order stationarity (Dwivedi and Subba Rao [12]) to the LFP signal at each epoch. In Figure 3, we present the p-values from the test of second-order stationarity carried out on each of the *p* = 32 microelectrodes at each epoch. We notice that the individual microelectrode signals are more often stationary after the stroke than before.

Next, we model the observed 32-dimensional signal as a multivariate nonstationary time series using the stationary subspace analysis (SSA) setup. We assume the observed *p* = 32-dimensional LFP signal *Si*,*t* is linearly generated by stationary and nonstationary sources in the cortex. More precisely, we have

$$S\_{i,t} = A\_i X\_{i,t} + \varepsilon\_{i,t}, \ i = 1,2,\ldots,N = 600,\tag{16}$$

where *Xi*,*t* ∈ R<sup>*di*</sup> is the latent stationary source, *Ai* is a *p* × *di* unknown mixing matrix and *εi*,*t* represents the nonstationary sources. This setup of starting with an observed nonstationary time series and, after some transformation, arriving at a lower-dimensional stationary time series has interesting applications in neuroscience. For instance, EEG signals measuring brain activity often appear as multivariate nonstationary time series; see Ombao et al. [13], Srinivasan [14], Nunez and Srinivasan [15], von Bünau et al. [16], Wu et al. [17], Gao et al. [18], Euán et al. [19] for examples. Kaplan et al. [20] regard the nonstationarity as background activity in the brain signal, and removing this nonstationarity was seen to improve prediction accuracy in neuroscience experiments (von Bünau et al. [21] and von Bünau et al. [16]). Thus, the aim of SSA is to separate the stationary from the nonstationary sources within each epoch and to focus attention on the stationary sources. From a stroke neuroscientist's perspective, the stationary sources within a short epoch of 1 s are considered to be the "stable" components of the signal, since they are consistent within that short interval. The word consistent here refers to the statistical properties of the signal remaining the same within an epoch. Of course, the transient (nonstationary) components may also be of interest in other applications.

**Figure 3.** *p*-values from the test of second-order stationarity on each of the *p* = 32 LFP microelectrodes (y-axis) for all 600 epochs (x-axis).

The next goal in the data analysis is to estimate the epoch-evolving dimension *di* and the latent stationary time series *Xi*,*t* ∈ R<sup>*di*</sup>, where *di* < *p*. In Figure 4, we apply SSA and plot the estimates of the stationary subspace dimension *di* across the *N* = 600 epochs using the method in Sundararajan et al. [22].

The evolving dimension *di* of the latent stationary sources is presented in Figure 4. The plot indicates an increase in the number of stationary sources in post-stroke epochs (after epoch 300), and this agrees with the results in Figure 3, wherein more epochs after the stroke exhibit stationary behavior in the individual LFP components. It is indeed interesting that immediately post-occlusion (immediately after stroke onset) the LFPs are highly synchronized: the plots of the observed LFP *Si*,*t* and the estimated squared coherence between the 32 components (Figure 5) suggest that the signals at different electrodes look very similar and that there is high coherence across the entire network of electrodes at various frequency bands. Please note that for the observed 32-dimensional signal *Si*,*t* in epoch *i*, the squared coherence between two components *Sp*,*i*,*t* and *Sq*,*i*,*t*, for *p* ≠ *q*, at frequency *ω* is given by

$$\mathbb{C}\_{p,q}(\omega) = \frac{|f\_{\mathbb{S},pq}(\omega)|^2}{f\_{\mathbb{S},pp}(\omega)f\_{\mathbb{S},qq}(\omega)}\tag{17}$$

where *fS*,*pq*(*ω*) denotes the cross-spectrum between those two components and *fS*,*pp*(*ω*) and *fS*,*qq*(*ω*) are the univariate spectra of the component series *Sp*,*i*,*t* and *Sq*,*i*,*t*, respectively. This observation of high coherence across electrodes immediately post-occlusion was confirmed by the neuroscientists and was also reported in Ellen Wann's PhD dissertation (Wann [23]). Next, we investigate the lead-lag cross-dependence between microelectrodes. We pre-whitened the observed time series to make the lag-0 covariance matrix the identity. More precisely, one considers Σ*i*<sup>−1/2</sup>*Si*,*t*, where Σ*i*<sup>−1/2</sup> is the inverse square root of the lag-0 covariance matrix Σ*i* = *V*(*Si*,*t*). We observe, in Figure 5, a significant drop in the magnitude of squared coherence after pre-whitening, indicating that the dependence among the 32 components is predominantly contemporaneous (i.e., lag-0). One can also notice, from the right plot in Figure 5, a drop in the coherence in the gamma frequency band after the stroke.
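The pre-whitening step and the squared coherence in (17) can be sketched as follows. This is our own illustration; the symmetric inverse square root is computed via an eigendecomposition, one of several valid choices.

```python
import numpy as np

def prewhiten(S):
    """Pre-whiten an epoch: returns Sigma^{-1/2} S_t, so that the lag-0
    covariance of the output is the identity.  S : (T, p) array."""
    Sigma = np.cov(S, rowvar=False)
    vals, vecs = np.linalg.eigh(Sigma)                      # Sigma = V diag(vals) V'
    Sigma_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T  # symmetric inverse square root
    return S @ Sigma_inv_sqrt

def squared_coherence(f_hat, p, q):
    """Squared coherence (17) between components p and q at every frequency:
    |f_pq|^2 / (f_pp * f_qq), given (T, d, d) spectral matrices f_hat."""
    return np.abs(f_hat[:, p, q]) ** 2 / (f_hat[:, p, p].real * f_hat[:, q, q].real)
```

Applying `squared_coherence` to spectral estimates of the raw and pre-whitened signals reproduces the kind of comparison shown in Figure 5: coherence that survives pre-whitening reflects lead-lag dependence, while coherence removed by it is purely contemporaneous.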

**Figure 4.** Plot of estimated stationary subspace dimensions $d_i$ for the $i = 1, 2, \dots, N = 600$ epochs in the stroke experiment. Please note that for each epoch $i$ there is a single estimated dimension $d_i$ that is plotted.

We then estimated the latent stationary sources $X_{i,t}$ for the $i = 1, 2, \dots, N = 600$ epochs using the DSSA method in Sundararajan and Pourahmadi [24]. To overcome identifiability issues in the model in (16), SSA and PCA methods for time series assume an identity lag-0 covariance matrix for $X_{i,t}$ and resort to a pre-whitening technique to achieve this. Figure 6 plots the average squared coherence in the non-pre-whitened and pre-whitened stationary sources across different frequency bands. Similar to the coherence pattern in the observed LFP in Figure 5, the left plot in Figure 6 shows an increase in the coherence after the occurrence of the stroke. This indicates the importance of the stationary components in explaining the high degree of synchronicity. The right plot in Figure 6 also indicates a substantial drop in the magnitude of coherence: the pre-whitened stationary sources have markedly lower coherence than the non-pre-whitened ones. As noted, previous findings had already indicated an increased coherence post stroke onset. Our analysis provides the additional insight that the increase in coherence post-stroke is due only to contemporaneous (or lag-0) dependence. This indicates perfect temporal synchrony in the sense that there is no lead-lag cross-dependence between the electrodes. This was suggested by visual inspection of the LFP traces and hypothesized by neuroscientists, though never formally confirmed until our analysis.

*Entropy* **2020**, *22*, 1375

**Figure 5.** (**Left**) average squared coherence among the 32 components of the observed LFP signal across 600 epochs. The averages are computed across the specified frequency bands. (**Right**) average squared coherence among the 32 components of the pre-whitened LFP signal across 600 epochs.

**Figure 6.** (**Left**) average squared coherence in the estimated stationary sources across 600 epochs. The averages are computed across the specified frequency bands. (**Right**) average squared coherence in the pre-whitened stationary sources across 600 epochs.

Next, the FS-ratio statistic was evaluated on these estimated stationary sources at each of the 600 epochs in various frequency bands. Figure 7 plots the estimated FS-ratio statistic $R_{X_i,a,b}$, $i = 1, 2, \dots, N = 600$, for the known frequency bands: theta (4–8 Hertz), alpha (8–12 Hertz), beta (12–30 Hertz) and gamma (30–50 Hertz). At each epoch $i$, we obtained a 95% confidence interval for the FS-ratio statistic using the block bootstrap technique of Politis and Romano [10]. To select the block length, we follow the procedure in Politis and White [25], Patton et al. [26]. Please note that this procedure is for the univariate case, and hence we apply it to each component of the multivariate process $X_{i,t}$ and obtain the block length as the average over all components. The confidence intervals are the blue shaded regions in Figures 7 and 8.
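A hedged sketch of the interval construction follows; the data-driven block-length selection of Politis and White is omitted here (the block length is taken as given), and the function name is our own:

```python
import numpy as np

def block_bootstrap_ci(x, stat, block_len, n_boot=500, alpha=0.05, seed=0):
    """Percentile confidence interval for stat(x) via the moving block
    bootstrap: resample overlapping blocks of rows and concatenate them.

    x: (T, d) array holding one epoch of the d-variate series.
    stat: callable mapping such an array to a scalar (e.g., an FS-ratio).
    """
    rng = np.random.default_rng(seed)
    T = x.shape[0]
    n_blocks = int(np.ceil(T / block_len))
    starts = rng.integers(0, T - block_len + 1, size=(n_boot, n_blocks))
    reps = np.empty(n_boot)
    for b in range(n_boot):
        # glue the sampled blocks together and trim to the original length
        idx = (starts[b][:, None] + np.arange(block_len)).ravel()[:T]
        reps[b] = stat(x[idx])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])
```

With the FS-ratio of an epoch as `stat`, intervals of this kind produce shaded bands analogous to those in Figures 7 and 8, up to the block-length choice.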

**Figure 7.** Plot of the FS-ratio statistic *RXi*,*a*,*<sup>b</sup>* for *i* = 1, 2, ... , *N* = 600 for various frequency bands. The blue shaded region corresponds to a 95% confidence interval.

**Figure 8.** Plot of the FS-ratio statistic *RXi*,*a*,*<sup>b</sup>* for *i* = 1, 2, ... , *N* = 600 for specified frequency ranges (*a*, *b*). Here (*a*, *b*) ⊂ (0, 0.5) and (0, 0.5) corresponds to the interval (0, *π*). The blue shaded region corresponds to a 95% confidence interval.

The FS-ratio statistic differs between the pre- and post-stroke epochs in the Theta, Alpha, and Beta bands but not in the Gamma band. The biggest difference in FS-ratio between pre- and post-stroke epochs is in the Beta band, wherein there is a decrease in the amount of spectral information after the stroke. Figure 8 also presents the FS-ratio statistic on other specified frequency bands wherein one notices differences between the pre- and post-stroke epochs.

Tables 1 and 2 contain numerical summaries of the FS-ratio statistic for the pre- and post-stroke epochs at various frequency bands. The largest difference between the pre- and post-stroke epochs is observed in the Beta band. The Gamma band is consistent throughout the experiment's 600 epochs. Within the pre-stroke epochs (and also within the post-stroke epochs), the most variation in FS-ratio is observed in the Beta band.


**Table 1.** Numerical summaries of FS-ratio statistic *RXi*,*a*,*<sup>b</sup>* for **pre stroke** epochs *i* = 1, 2, . . . , 300.

**Table 2.** Numerical summaries of FS-ratio statistic *RXi*,*a*,*<sup>b</sup>* for **post stroke** epochs *i* = 301, 302, . . . , 600.


#### *Discussion*

The $p$-values presented in Figure 3 represent a test of second-order stationarity carried out on each of the $p = 32$ microelectrodes at each epoch. We noticed that immediately after the stroke the individual microelectrodes behaved in a more stationary manner, and this was visibly different from what was observed before the stroke. Based on this analysis, it is plausible that the LFP signal, under normal circumstances, exhibits nonstationary behavior and that immediately post-stroke the signal behaves in a more stationary manner, thereby showing that the brain's typical functions are affected. The plots of the observed LFP $S_{i,t}$ and the estimated squared coherence between the 32 components (Figure 5) indicate high cross-electrode coherence at various frequency bands immediately post-stroke. This observation was confirmed by the neuroscientists and also reported in Ellen Wann's PhD dissertation (Wann [23]).

In Fontaine et al. [27], a univariate microelectrode-wise change-point analysis was performed on the same dataset. In their work, for various frequency bands, changes in the non-linear spectral dependence of the LFP signal are modeled using parametric copulas. They detected change-points for a fixed microelectrode and fixed frequency band. One notices the detection of numerous change-points in the Delta, Theta, Alpha, Beta and Gamma bands for individual microelectrodes 1, 9 and 17. The detected change-points span several epochs, with very few of them being close to the time of the occlusion (or induced stroke), which was epoch $i = 300$.

In contrast, the advantages of our method are as follows: (i). The method treats the observed LFP signal as a multivariate nonstationary time series. Using (16), we model this observed multivariate signal as a mixture of stationary and nonstationary components. Figure 4 presents the dimension of the stationary subspace (the dimension of $X_{i,t}$) across the 600 epochs, and this is seen to be a useful feature in understanding changes in the cortical signal after the occurrence of the induced stroke (epoch 300). In other words, an increase in the dimension $d_i$ after the stroke points to a more stationary behavior of the LFP signal after the stroke. (ii). The FS-ratio statistic, with its ability to compare two multivariate processes with unequal dimensions, is applied to the estimated processes $X_{i,t}$ for each of the 600 epochs, and frequency-band-specific numerical summaries are presented. The Beta frequency band is seen to display the greatest changes within the pre-stroke and post-stroke epochs and also between the pre-stroke and post-stroke epochs. Also, from Figures 7 and 8, it is easy to spot a change-point at epoch 300 when the stroke was induced.

#### **4. Concluding Remarks**

In this work, we proposed a new frequency-specific spectral ratio (FS-ratio) statistic that is demonstrated to be useful in comparing spectral information in two multivariate stationary processes of different dimensions. The method is motivated by applications in neuroscience wherein a brain signal is recorded across several epochs and the widely used tactic is to assume the observed signal to be linearly generated by lower-dimensional latent sources of interest. Applying PCA/ICA/SSA and other dimension reduction methods to the observed signal in different epochs of the experiment results in different estimates of the dimensions of the latent sources. In these situations, the FS-ratio is useful because (i). it captures the proportion of spectral power in various frequency bands by means of an $L_2$-norm on the spectral matrices and (ii). it is blind to the dimension of the stationary process as it only looks at the proportion of spectral power in frequency bands. Under mild assumptions, the asymptotic properties of the FS-ratio statistic are derived. We also provide a data-driven technique to locate the frequency bands that carry significant proportions of spectral power. In the application of our method to the LFP dataset, we witness the ability of our method in (i). identifying frequency bands where the pre- and post-stroke epochs are different, (ii). identifying frequency bands that account for most discrepancies within pre- (and post-) stroke epochs, (iii). identifying the frequency bands that are consistent across all 600 epochs of the experiment and (iv). understanding the importance of contemporaneous dependence, both in the observed LFP and the stationary sources, across the 600 epochs; this indicated perfect synchrony (no lead-lag cross-dependence) among microelectrodes immediately after the stroke.

Topological data analysis (TDA) methods for characterizing complexity and detecting phase transitions exist in the literature; see Piangerelli [28], Rucco et al. [29], Wang et al. [30]. Topological features of the observed series are extracted using techniques such as persistent entropy, persistence diagrams and Betti numbers, and this can be viewed as another approach to identifying changes in a multivariate time series due to events such as epilepsy and seizures.

**Author Contributions:** The stroke experiment was conducted at the R.F.'s laboratory (Frostig laboratory at University of California Irvine: http://frostiglab.bio.uci.edu/Home.html) and the LFP data came from this experiment. The statistical methodology, theory and computations were performed by R.R.S. and H.O. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work is supported in part by KAUST, NIH NS066001, Leducq Foundation 15CVD02 and NIH MH115697.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **Appendix A. Simulation Study**

In this section, we illustrate the performance of the FS-ratio statistic in capturing the spread of spectral information using simulated examples. We consider four simulation schemes and report the key summaries of the FS-ratio statistic across repetitions of each of the four schemes. In addition, 95% bootstrap confidence limits for the FS-ratio statistic are computed from $B = 500$ bootstrap replications. Here we use the block bootstrap procedure of Politis and White [25], Patton et al. [26]. For the estimate of the spectral matrix defined in (2), the Bartlett-Priestley kernel with bandwidth $h = T^{-0.3}$ and the Daniell kernel (see Example 10.4.1 in Brockwell and Davis [31]) with $m = \sqrt{T}$ were implemented. Similar results were obtained for the two kernel choices and only the results from the latter are presented.
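For concreteness, a Daniell-kernel version of the smoothed-periodogram spectral matrix estimate might be sketched as follows. The circular smoothing at the frequency boundary and the function name are our simplifications, not the paper's code:

```python
import numpy as np

def spectral_matrix(X, m=None):
    """Daniell-smoothed periodogram estimate of the d x d spectral matrix.

    X: (T, d) array. Returns an (n, d, d) array of Hermitian matrix
    estimates at the Fourier frequencies 2*pi*j/T, j = 1, ..., floor(T/2).
    """
    T, d = X.shape
    if m is None:
        m = int(np.sqrt(T))                      # m = sqrt(T), as in the text
    D = np.fft.fft(X - X.mean(axis=0), axis=0)   # DFT of each component
    # periodogram matrices I(w_j) = D_j D_j^* / (2*pi*T)
    I = np.einsum("jp,jq->jpq", D, np.conj(D)) / (2 * np.pi * T)
    w = np.ones(2 * m + 1) / (2 * m + 1)         # flat Daniell weights
    f = np.zeros_like(I)
    for k in range(-m, m + 1):                   # smooth circularly in frequency
        f += w[k + m] * np.roll(I, -k, axis=0)
    return f[1 : T // 2 + 1]
```

For white noise the diagonal entries hover around $\sigma^2 / (2\pi)$, which is a quick sanity check on the normalisation.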

The simulation schemes presented below are designed to mimic the real data situation in Section 3. There, the entire duration of the neuroscience experiment is divided into non-overlapping successive time segments ($N$ epochs). Then, from these $N$ epochs, lower-dimensional stationary sources of varying dimensions were extracted. Similarly, in our simulations below we simulate $N$ stationary processes with varying dimensions and investigate the evolution of the FS-ratio statistic. We now present four simulation schemes to assess the behavior of our FS-ratio statistic. Scheme 1 simulates $N$ multivariate stationary VAR(2) processes $X_{i,t}$, $i = 1, 2, \dots, N = 500$, with dimensions randomly chosen from $\{2, 3, \dots, 30\}$ and error vectors that are component-wise i.i.d. $N(0, 1)$. The phase parameter $\theta$ of the AR components is allowed to vary across epochs, i.e., two different choices are included: $\theta_1 = \frac{4\pi}{25}$ for epochs $i < N/2$ and $\theta_2 = \frac{4\pi}{5}$ for epochs $i \ge N/2$. This causes a shift in the frequency bands of interest as we move across the epochs. Scheme 2 is similar to Scheme 1 but an interaction between the dimension and frequency is included. More precisely, the lower-dimensional signals simulated in epochs $i < N/2$ have a spectral peak at frequency $\theta_1 = \frac{4\pi}{25}$, and the higher-dimensional signals simulated in epochs $i \ge N/2$ have a peak at frequency $\theta_2 = \frac{4\pi}{5}$. Scheme 3 is also similar to Scheme 1 but the error vectors are allowed to have contemporaneous dependence. The $N$ multivariate processes in Scheme 4 are first simulated as in Scheme 1 but are then pre-multiplied by a half-orthogonal matrix. This is in line with the model assumption (16) made in Section 3.
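Given any spectral matrix estimate on a grid of frequencies in $(0, \pi)$, the FS-ratio of a band $(a, b)$ is the banded share of the total squared $L_2$-norm of the vectorised spectral matrix. A minimal sketch (the function name is ours), illustrating in particular that the statistic is blind to the dimension $d$:

```python
import numpy as np

def fs_ratio(f, freqs, a, b):
    """Proportion of total spectral power ||vec(f(w))||_2^2 in the band (a, b).

    f: (n, d, d) spectral matrix estimates on the frequency grid freqs,
    assumed equally spaced inside (0, pi). The result does not depend on
    the dimension d, only on how power is spread over frequencies.
    """
    power = np.sum(np.abs(f) ** 2, axis=(1, 2))   # ||vec(f(w))||_2^2 per frequency
    band = (freqs > a) & (freqs < b)
    return power[band].sum() / power.sum()
```

Because only proportions enter, epochs whose latent sources have different dimensions $d_i$ remain directly comparable, which is the property exploited throughout the simulations below.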

**Scheme 1**: We simulate $N$ stationary processes $X_{1,t}, X_{2,t}, \dots, X_{N,t}$, for $t = 1, 2, \dots, T$, where the $i$th process is a $d_i$-variate process $X_{i,t} = (X_{1,i,t}, X_{2,i,t}, \dots, X_{d_i,i,t})$ in which each $X_{k,i,t}$, $k = 1, 2, \dots, d_i$, is an independently generated univariate stationary AR(2) process given by

$$X_{k,i,t} = \phi_{i,1} X_{k,i,t-1} + \phi_{i,2} X_{k,i,t-2} + \epsilon_{k,i,t}$$

where $\phi_{i,1} = 2\xi_i \cos(\theta_i)$, $\phi_{i,2} = -\xi_i^2$, the $\epsilon_{k,i,t}$ are i.i.d. $N(0,1)$, and $k = 1, 2, \dots, d_i$, $i = 1, 2, \dots, N = 500$, $t = 1, 2, \dots, T = 1000$. The dimension $d_i$ of $X_{i,t}$ is randomly chosen from $\{2, 3, \dots, 30\}$. Here $\xi_i \sim U(0.8, 0.98)$ and $\theta_i$ is given by

$$\theta_i = \begin{cases} \dfrac{4\pi}{25} & \text{if } i < \frac{N}{2} \\[6pt] \dfrac{4\pi}{5} & \text{if } i \ge \frac{N}{2} \end{cases}$$
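One epoch of Scheme 1 can be sketched as follows; the burn-in length and the function name are our own choices:

```python
import numpy as np

def simulate_ar2_epoch(d, xi, theta, T=1000, burn=100, seed=0):
    """d independent AR(2) components X_t = phi1*X_{t-1} + phi2*X_{t-2} + e_t
    with phi1 = 2*xi*cos(theta) and phi2 = -xi**2, giving a spectral peak
    near frequency theta when xi is close to 1."""
    rng = np.random.default_rng(seed)
    phi1, phi2 = 2 * xi * np.cos(theta), -xi ** 2
    e = rng.standard_normal((T + burn, d))
    X = np.zeros((T + burn, d))
    for t in range(2, T + burn):
        X[t] = phi1 * X[t - 1] + phi2 * X[t - 2] + e[t]
    return X[burn:]                 # drop the burn-in segment
```

For instance, `simulate_ar2_epoch(d=5, xi=0.9, theta=4 * np.pi / 25)` would generate one pre-change epoch, and the averaged periodogram of the components peaks near $\theta$.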

**Scheme 2**: We follow Scheme 1 in generating the $N$ processes $X_{1,t}, X_{2,t}, \dots, X_{N,t}$, for $t = 1, 2, \dots, T = 1000$ and $i = 1, 2, \dots, N = 500$. Unlike Scheme 1, the dimension $d_i$ of $X_{i,t}$ is chosen such that

$$d_i = \begin{cases} d_{1,i} & \text{if } i < \frac{N}{2} \\[4pt] d_{2,i} & \text{if } i \ge \frac{N}{2} \end{cases}$$

where $d_{1,i}$ is simulated from the discrete uniform distribution over $\{1, 2, \dots, 14\}$ and $d_{2,i}$ is simulated from the discrete uniform distribution over $\{15, 16, \dots, 30\}$. Observe that in Scheme 2 there is an interaction between the dimension and frequency: the lower-dimensional signals simulated in epochs $i < N/2$ have a spectral peak at frequency $\frac{4\pi}{25}$, and the higher-dimensional signals simulated in epochs $i \ge N/2$ have a peak at frequency $\frac{4\pi}{5}$.

**Scheme 3**: Similar to Scheme 1, the $d_i$-variate process in the $i$th epoch is $X_{i,t} = (X_{1,i,t}, X_{2,i,t}, \dots, X_{d_i,i,t})$, where each $X_{k,i,t}$ is an independently generated univariate stationary AR(2) process given by

$$X_{k,i,t} = \phi_{i,1} X_{k,i,t-1} + \phi_{i,2} X_{k,i,t-2} + \epsilon_{k,i,t}$$

where $\phi_{i,1} = 2\xi_i \cos(\theta_i)$ and $\phi_{i,2} = -\xi_i^2$. The $d_i \times d_i$ variance matrix of the Gaussian noise $\epsilon_{i,t}$ is given by

$$V(\epsilon_{i,t}) = \begin{bmatrix} 1 & \rho & \rho^2 & \dots & \rho^{d_i - 1} \\ \rho & 1 & \rho & \dots & \rho^{d_i - 2} \\ \vdots & & & \ddots & \vdots \\ \rho^{d_i - 1} & \rho^{d_i - 2} & \rho^{d_i - 3} & \dots & 1 \end{bmatrix}$$

where $\rho = 0.4$ and $k = 1, 2, \dots, d_i$, $i = 1, 2, \dots, N = 500$, $t = 1, 2, \dots, T = 1000$. The dimension $d_i$ of $X_{i,t}$ is randomly chosen from $\{2, 3, \dots, 30\}$. Here again, $\xi_i \sim U(0.8, 0.98)$ and $\theta_i$ is given by

$$\theta_i = \begin{cases} \dfrac{4\pi}{25} & \text{if } i < \frac{N}{2} \\[6pt] \dfrac{4\pi}{5} & \text{if } i \ge \frac{N}{2} \end{cases}$$
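The contemporaneous dependence in Scheme 3 can be generated from the Toeplitz covariance $V(\epsilon_{i,t})$ via its Cholesky factor; a minimal sketch (function names ours):

```python
import numpy as np

def ar1_cov(d, rho=0.4):
    """Toeplitz matrix with (p, q) entry rho**|p - q|, i.e. V(eps_{i,t})."""
    idx = np.arange(d)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def correlated_noise(d, T, rho=0.4, seed=0):
    """(T, d) Gaussian noise whose rows have covariance ar1_cov(d, rho)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(ar1_cov(d, rho))    # V = L L^T
    return rng.standard_normal((T, d)) @ L.T
```

Replacing the i.i.d. innovations of Scheme 1 with rows of `correlated_noise` reproduces the contemporaneous dependence structure of this scheme.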

**Scheme 4**: Here each $X_{i,t}$ is a 30-dimensional process generated as in Scheme 1, and the dimensions $d_i$ are randomly chosen from $\{2, 3, \dots, 30\}$, for $t = 1, 2, \dots, T = 1000$ and $i = 1, 2, \dots, N = 500$. We then obtain $M_{i,t} = A_i X_{i,t}$, where $A_i = \mathbb{1}(i < \frac{N}{2})\, I_{d_i} A_1 + \mathbb{1}(i \ge \frac{N}{2})\, I_{d_i} A_2$, with $A_1$ and $A_2$ two randomly generated $30 \times 30$ orthogonal matrices and $I_{d_i}$ the $d_i \times 30$ matrix containing the first $d_i$ rows of the identity matrix $I_{30}$. We consider $M_{i,t} \in \mathbb{R}^{d_i}$ and study the spread of spectral properties across the $N = 500$ epochs.
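The half-orthogonal mixing in Scheme 4 can be generated by the QR decomposition of a Gaussian matrix; a sketch (function names ours):

```python
import numpy as np

def half_orthogonal(d_i, D=30, seed=0):
    """First d_i rows of a random D x D orthogonal matrix, i.e. I_{d_i} A."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((D, D)))
    return Q[:d_i]

def mix_epoch(X, d_i, D=30, seed=0):
    """Map a (T, D) epoch X to the d_i-dimensional mixture M_t = A_i X_t."""
    A = half_orthogonal(d_i, D, seed)
    return X @ A.T
```

Because the rows of such an $A_i$ are orthonormal, the mapping never inflates the norm of an observation, and the spectral peak locations of the components are preserved.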

Tables A1 and A2 contain the numerical summaries of the FS-ratio statistic over 100 replications of Scheme 1. Please note that the phase parameter $\theta_i$ for $i < N/2$ in Scheme 1 is at $4\pi/25$ on a $(0, \pi)$ scale, or equivalently at 0.0796 on a $(0, 0.5)$ scale. We see from Table A1 that almost all of the spectral information is contained in the first two chosen frequency ranges around this peak. Similarly, for $i \ge N/2$ the phase parameter is at $4\pi/5$ on a $(0, \pi)$ scale, or equivalently at 0.3981 on a $(0, 0.5)$ scale, and from Table A2 we notice that the last two chosen frequency ranges carry all of the spectral information. Figure A1 plots a histogram density of the FS-ratio statistic from the 100 replications; similar histogram densities for Schemes 2, 3 and 4 can be found in Figures A2–A4.

**Figure A1. Scheme 1**: Histogram density of the FS-ratio statistic for different frequency ranges (*a*, *b*) ⊂ (0, 0.5).

**Table A1. Scheme 1, epochs 1–249**: Numerical summaries of FS-ratio statistic *RXi*,*a*,*<sup>b</sup>* for epochs *i* = 1, 2, ... , 249 for specified frequency ranges (*a*, *b*). Here (*a*, *b*) ⊂ (0, 0.5) and (0, 0.5) corresponds to the interval (0, *π*).


**Table A2. Scheme 1, epochs 250–500**: Numerical summaries of FS-ratio statistic *RXi*,*a*,*<sup>b</sup>* for epochs *i* = 250, 251, ... , 500 for specified frequency ranges (*a*, *b*). Here (*a*, *b*) ⊂ (0, 0.5) and (0, 0.5) corresponds to the interval (0, *π*).


Tables A3 and A4 include numerical summaries of the FS-ratio statistic over 100 replications of the model in Scheme 2. Similar to Scheme 1, the phase parameter $\theta_i$ is at 0.0796 on a $(0, 0.5)$ scale for $i < N/2$ and at 0.3981 for $i \ge N/2$. The results in Table A3 indicate that most of the spectral information is present in the first two chosen frequency ranges. Similarly, for $i \ge N/2$, Table A4 shows that the last two chosen frequency ranges carry all of the spectral information.

**Table A3. Scheme 2, epochs 1–249**: Numerical summaries of FS-ratio statistic *RXi*,*a*,*<sup>b</sup>* for epochs *i* = 1, 2, . . . , 249 for specified frequency ranges (*a*, *b*). Here (*a*, *b*) ⊂ (0, 0.5) and (0, 0.5) corresponds to the interval (0, *π*).


**Table A4. Scheme 2, epochs 250–500**: Numerical summaries of FS-ratio statistic *RXi*,*a*,*<sup>b</sup>* for epochs *i* = 250, 251, . . . , 500 for specified frequency ranges (*a*, *b*). Here (*a*, *b*) ⊂ (0, 0.5) and (0, 0.5) corresponds to the interval (0, *π*).


**Figure A2. Scheme 2**: Histogram density of the FS-ratio statistic for different frequency ranges (*a*, *b*) ⊂ (0, 0.5).

Tables A5 and A6 contain the numerical summaries of the FS-ratio statistic over 100 replications of the model in Scheme 3. As in Scheme 1, the phase parameter $\theta_i$ is at 0.0796 on a $(0, 0.5)$ scale for $i < N/2$ and at 0.3981 for $i \ge N/2$. Also as in Scheme 1, the results in Table A5 indicate that most of the spectral information is present in the first two chosen frequency ranges. Similarly, for $i \ge N/2$, Table A6 shows that the last two chosen frequency ranges carry all of the spectral information.

**Table A5. Scheme 3, epochs 1–249**: Numerical summaries of FS-ratio statistic *RXi*,*a*,*<sup>b</sup>* for epochs *i* = 1, 2, ... , 249 for specified frequency ranges (*a*, *b*). Here (*a*, *b*) ⊂ (0, 0.5) and (0, 0.5) corresponds to the interval (0, *π*).

**Table A6. Scheme 3, epochs 250–500**: Numerical summaries of FS-ratio statistic *RXi*,*a*,*<sup>b</sup>* for epochs *i* = 250, 251, ... , 500 for specified frequency ranges (*a*, *b*). Here (*a*, *b*) ⊂ (0, 0.5) and (0, 0.5) corresponds to the interval (0, *π*).


**Figure A3. Scheme 3**: Histogram density of the FS-ratio statistic for different frequency ranges (*a*, *b*) ⊂ (0, 0.5).

Tables A7 and A8 contain the numerical summaries of the FS-ratio statistic over 100 replications of the model in Scheme 4. Here we look at *Mi*,*<sup>t</sup>* = *AiXi*,*<sup>t</sup>* which is a mixture of the components of *Xi*,*<sup>t</sup>* generated as in Scheme 1. Please note that the peak of the spectral densities of the components of *Mi*,*<sup>t</sup>* is still at the phase parameter *θ<sup>i</sup>* defined in Scheme 1. Hence, the results from Tables A7 and A8 are similar to the results from Scheme 1.

**Table A7. Scheme 4, epochs 1–249**: Numerical summaries of FS-ratio statistic *RMi*,*t*,*a*,*<sup>b</sup>* for epochs *i* = 1, 2, ... , 249 for specified frequency ranges (*a*, *b*). Here (*a*, *b*) ⊂ (0, 0.5) and (0, 0.5) corresponds to the interval (0, *π*).


**Table A8. Scheme 4, epochs 250–500**: Numerical summaries of FS-ratio statistic *RMi*,*t*,*a*,*<sup>b</sup>* for epochs *i* = 250, 251, ... , 500 for specified frequency ranges (*a*, *b*). Here (*a*, *b*) ⊂ (0, 0.5) and (0, 0.5) corresponds to the interval (0, *π*).


**Figure A4. Scheme 4**: Histogram density of the FS-ratio statistic for different frequency ranges (*a*, *b*) ⊂ (0, 0.5).

#### **Appendix B. Proofs**

Here we present the proofs of the theoretical results in Section 2.1.2.

**Proof of Theorem 1 (a).** Recall that for some 0 < *a* < *b* < *π*,

$$\begin{split} \hat{r}_{X,a,b} &= \int_a^b \|\mathrm{vec}(\hat{f}_X(\omega))\|_2^2\, d\omega = \int_a^b \Big\|\frac{1}{T}\sum_{j=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor} K_h(\omega-\omega_j)\, \mathrm{vec}(I_{X,T}(\omega_j))\Big\|_2^2\, d\omega \\ &= \int_a^b \frac{1}{T^2}\sum_{j_1,j_2=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor} K_h(\omega-\omega_{j_1})\, K_h(\omega-\omega_{j_2}) \sum_{r,s=1}^{d_i} I_{X,T,rs}(\omega_{j_1})\, \overline{I_{X,T,rs}(\omega_{j_2})}\, d\omega. \end{split}$$

We first consider the expected value of this quantity.

$$\begin{split} E(\hat{r}_{X,a,b}) &= \int_a^b \frac{1}{T^2}\sum_{j_1,j_2=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor} K_h(\omega-\omega_{j_1})\, K_h(\omega-\omega_{j_2}) \sum_{r,s=1}^{d_i} E\Big(I_{X,T,rs}(\omega_{j_1})\, \overline{I_{X,T,rs}(\omega_{j_2})}\Big)\, d\omega \\ &= \int_a^b \frac{1}{T^2}\sum_{j_1,j_2=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor} K_h(\omega-\omega_{j_1})\, K_h(\omega-\omega_{j_2}) \sum_{r,s=1}^{d_i} f_{X,rs}(\omega_{j_1})\, \overline{f_{X,rs}(\omega_{j_2})}\, d\omega + o(1). \end{split}$$

It can be seen that as *T* → ∞, *h* → 0 and *Th* → ∞ the above quantity converges to

$$\int_a^b \sum_{r,s=1}^{d_i} \left(\int_{-\pi}^{\pi} K(v)\, dv\right)^2 f_{X,rs}(\omega)\, \overline{f_{X,rs}(\omega)}\, d\omega = \int_a^b \sum_{r,s=1}^{d_i} f_{X,rs}(\omega)\, \overline{f_{X,rs}(\omega)}\, d\omega.$$

Next, for the variance we have $V(\hat{r}_{X,a,b}) = A_1 - A_2$, where

$$\begin{split} A_1 = \int_a^b\!\!\int_a^b \frac{1}{T^4} \sum_{j_1,j_2,j_3,j_4=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor} & K_h(\omega-\omega_{j_1})\, K_h(\omega-\omega_{j_2})\, K_h(\lambda-\omega_{j_3})\, K_h(\lambda-\omega_{j_4}) \\ & \times \sum_{r,s,t,u=1}^{d_i} E\Big(I_{X,T,rs}(\omega_{j_1})\, \overline{I_{X,T,rs}(\omega_{j_2})}\, I_{X,T,tu}(\omega_{j_3})\, \overline{I_{X,T,tu}(\omega_{j_4})}\Big)\, d\omega\, d\lambda \quad \text{and} \end{split}$$

$$\begin{split} A_2 = \int_a^b\!\!\int_a^b \frac{1}{T^4} \sum_{j_1,j_2,j_3,j_4=-\lfloor T/2\rfloor}^{\lfloor T/2\rfloor} & K_h(\omega-\omega_{j_1})\, K_h(\omega-\omega_{j_2})\, K_h(\lambda-\omega_{j_3})\, K_h(\lambda-\omega_{j_4}) \\ & \times \sum_{r,s,t,u=1}^{d_i} E\Big(I_{X,T,rs}(\omega_{j_1})\, \overline{I_{X,T,rs}(\omega_{j_2})}\Big)\, E\Big(I_{X,T,tu}(\omega_{j_3})\, \overline{I_{X,T,tu}(\omega_{j_4})}\Big)\, d\omega\, d\lambda. \end{split}$$

For the difference in the expectations between $A_1$ and $A_2$ we discuss the relevant cases and their convergence to 0. First, it can be seen that the difference in the expectations is asymptotically 0 in the following three cases: (a) $\omega_{j_1}, \omega_{j_2}, \omega_{j_3}, \omega_{j_4}$ all distinct; (b) $\omega_{j_1} = \omega_{j_2}$ with $\omega_{j_3}, \omega_{j_4}$ distinct from each other and from $\omega_{j_1}$; (c) $\omega_{j_3} = \omega_{j_4}$ with $\omega_{j_1}, \omega_{j_2}$ distinct from each other and from $\omega_{j_3}$. Next, when $\omega_{j_1} = \omega_{j_3} \neq \omega_{j_2} = \omega_{j_4}$ we have,

$$\begin{split}
&\int_a^b\!\!\int_a^b \frac{1}{T^4}\sum_{j_1 \neq j_2} K_h(\omega-\omega_{j_1})\, K_h(\omega-\omega_{j_2})\, K_h(\lambda-\omega_{j_1})\, K_h(\lambda-\omega_{j_2}) \sum_{r,s,t,u=1}^{d_i}\Big[ E\Big(I_{X,T,rs}(\omega_{j_1})\, \overline{I_{X,T,rs}(\omega_{j_2})}\, I_{X,T,tu}(\omega_{j_1})\, \overline{I_{X,T,tu}(\omega_{j_2})}\Big) \\
&\qquad - E\Big(I_{X,T,rs}(\omega_{j_1})\, \overline{I_{X,T,rs}(\omega_{j_2})}\Big)\, E\Big(I_{X,T,tu}(\omega_{j_1})\, \overline{I_{X,T,tu}(\omega_{j_2})}\Big)\Big]\, d\omega\, d\lambda \\
&= \frac{1}{T^4}\sum_{j_1 \neq j_2}\Big(\int_a^b K_h(\omega-\omega_{j_1})\, K_h(\omega-\omega_{j_2})\, d\omega\Big)^2 \sum_{r,s,t,u=1}^{d_i}\Big[\Big(f_{X,rt}(\omega_{j_1})\, \overline{f_{X,su}(\omega_{j_1})} + f_{X,rs}(\omega_{j_1})\, \overline{f_{X,tu}(\omega_{j_1})}\Big) \\
&\qquad \times \Big(\overline{f_{X,rt}(\omega_{j_2})}\, f_{X,su}(\omega_{j_2}) + \overline{f_{X,rs}(\omega_{j_2})}\, f_{X,tu}(\omega_{j_2})\Big) - f_{X,rs}(\omega_{j_1})\, \overline{f_{X,rs}(\omega_{j_2})}\, f_{X,tu}(\omega_{j_1})\, \overline{f_{X,tu}(\omega_{j_2})}\Big] + o(1) \\
&= \frac{1}{T^4 h^2}\sum_{j_1 \neq j_2}\Big(\int_{(a-\omega_{j_1})/h}^{(b-\omega_{j_1})/h} K(u)\, K\Big(u + \frac{\omega_{j_1}-\omega_{j_2}}{h}\Big)\, du\Big)^2 \sum_{r,s,t,u=1}^{d_i} \cdots \;=\; O\Big(\frac{1}{T^2 h}\Big).
\end{split}$$

The case $\omega_{j_1} = \omega_{j_4} \neq \omega_{j_2} = \omega_{j_3}$ has the same rate of decay as above. Next, when $\omega_{j_1} = \omega_{j_3}$ and $\omega_{j_2}, \omega_{j_4}$ are distinct from each other and from $\omega_{j_1}$, we have,

$$\begin{split}
&\int_a^b\!\!\int_a^b \frac{1}{T^4}\sum_{j_1,j_2,j_4} K_h(\omega-\omega_{j_1})\, K_h(\omega-\omega_{j_2})\, K_h(\lambda-\omega_{j_1})\, K_h(\lambda-\omega_{j_4}) \sum_{r,s,t,u=1}^{d_i}\Big[ E\Big(I_{X,T,rs}(\omega_{j_1})\, \overline{I_{X,T,rs}(\omega_{j_2})}\, I_{X,T,tu}(\omega_{j_1})\, \overline{I_{X,T,tu}(\omega_{j_4})}\Big) \\
&\qquad - E\Big(I_{X,T,rs}(\omega_{j_1})\, \overline{I_{X,T,rs}(\omega_{j_2})}\Big)\, E\Big(I_{X,T,tu}(\omega_{j_1})\, \overline{I_{X,T,tu}(\omega_{j_4})}\Big)\Big]\, d\omega\, d\lambda \\
&= \int_a^b\!\!\int_a^b \frac{1}{T^4}\sum_{j_1,j_2,j_4} K_h(\omega-\omega_{j_1})\, K_h(\omega-\omega_{j_2})\, K_h(\lambda-\omega_{j_1})\, K_h(\lambda-\omega_{j_4}) \sum_{r,s,t,u=1}^{d_i}\Big[ E\Big(I_{X,T,rs}(\omega_{j_1})\, I_{X,T,tu}(\omega_{j_1})\Big)\, E\Big(\overline{I_{X,T,rs}(\omega_{j_2})}\Big)\, E\Big(\overline{I_{X,T,tu}(\omega_{j_4})}\Big) \\
&\qquad - E\Big(I_{X,T,rs}(\omega_{j_1})\Big)\, E\Big(\overline{I_{X,T,rs}(\omega_{j_2})}\Big)\, E\Big(I_{X,T,tu}(\omega_{j_1})\Big)\, E\Big(\overline{I_{X,T,tu}(\omega_{j_4})}\Big)\Big]\, d\omega\, d\lambda + o(1) \\
&= \frac{1}{T^4}\sum_{j_1,j_2,j_4}\int_a^b K_h(\omega-\omega_{j_1})\, K_h(\omega-\omega_{j_2})\, d\omega \int_a^b K_h(\lambda-\omega_{j_1})\, K_h(\lambda-\omega_{j_4})\, d\lambda \sum_{r,s,t,u=1}^{d_i}\Big[\Big(f_{X,rt}(\omega_{j_1})\, \overline{f_{X,su}(\omega_{j_1})} + f_{X,rs}(\omega_{j_1})\, \overline{f_{X,tu}(\omega_{j_1})}\Big) \\
&\qquad \times \overline{f_{X,rs}(\omega_{j_2})}\, f_{X,tu}(\omega_{j_4}) - f_{X,rs}(\omega_{j_1})\, \overline{f_{X,rs}(\omega_{j_2})}\, f_{X,tu}(\omega_{j_1})\, \overline{f_{X,tu}(\omega_{j_4})}\Big] + o(1) \\
&= \frac{1}{T^4 h^2}\sum_{j_1,j_2,j_4}\int_{(a-\omega_{j_1})/h}^{(b-\omega_{j_1})/h} K(u)\, K\Big(u + \frac{\omega_{j_1}-\omega_{j_2}}{h}\Big)\, du \int_{(a-\omega_{j_1})/h}^{(b-\omega_{j_1})/h} K(v)\, K\Big(v + \frac{\omega_{j_1}-\omega_{j_4}}{h}\Big)\, dv \sum_{r,s,t,u=1}^{d_i} \cdots + o(1) \;=\; O\Big(\frac{1}{T}\Big).
\end{split}$$

Finally, we look at the case *ωj*<sup>1</sup> = *ωj*<sup>2</sup> = *ωj*<sup>3</sup> = *ωj*<sup>4</sup> . We have

$$\begin{split}
&\int_a^b\!\!\int_a^b \frac{1}{T^4}\sum_{j_1} K_h^2(\omega-\omega_{j_1})\, K_h^2(\lambda-\omega_{j_1}) \sum_{r,s,t,u=1}^{d_i}\Big[ E\Big(I_{X,T,rs}(\omega_{j_1})\, \overline{I_{X,T,rs}(\omega_{j_1})}\, I_{X,T,tu}(\omega_{j_1})\, \overline{I_{X,T,tu}(\omega_{j_1})}\Big) \\
&\qquad - E\Big(I_{X,T,rs}(\omega_{j_1})\, \overline{I_{X,T,rs}(\omega_{j_1})}\Big)\, E\Big(I_{X,T,tu}(\omega_{j_1})\, \overline{I_{X,T,tu}(\omega_{j_1})}\Big)\Big]\, d\omega\, d\lambda \\
&= \frac{1}{T^4}\sum_{j_1}\Big(\int_a^b K_h^2(\omega-\omega_{j_1})\, d\omega\Big)^2 \sum_{r,s,t,u=1}^{d_i}\cdots = \frac{1}{T^4 h^4}\sum_{j_1}\Big(\int_a^b K^2\Big(\frac{\omega-\omega_{j_1}}{h}\Big)\, d\omega\Big)^2 \sum_{r,s,t,u=1}^{d_i}\cdots \\
&= \frac{1}{T^4 h^2}\sum_{j_1}\Big(\int_{(a-\omega_{j_1})/h}^{(b-\omega_{j_1})/h} K^2(u)\, du\Big)^2 \sum_{r,s,t,u=1}^{d_i}\cdots \;=\; O\Big(\frac{1}{T^3 h^2}\Big).
\end{split}$$

All cases vanish as $T \to \infty$ with $h \to 0$ and $Th \to \infty$, so $V(\hat{r}_{X,a,b}) \to 0$, which together with the limit of $E(\hat{r}_{X,a,b})$ completes the proof.

**Proof of Theorem 1 (b).** First, we observe that

$$\hat{\mathcal{R}}\_{X,a,b} = \frac{\int\_a^b ||\text{vec}(\hat{f}\_X(\omega))||\_2^2 d\omega}{\int\_0^\pi ||\text{vec}(\hat{f}\_X(\omega))||\_2^2 d\omega} = \frac{\int\_a^b ||\text{vec}(\hat{f}\_X(\omega))||\_2^2 d\omega}{\int\_a^b ||\text{vec}(\hat{f}\_X(\omega))||\_2^2 d\omega + \int\_{\Pi\_{(a,b)}} ||\text{vec}(\hat{f}\_X(\omega))||\_2^2 d\omega}$$

$$= \left(1 + \frac{\int\_{\Pi\_{(a,b)}} ||\text{vec}(\hat{f}\_X(\omega))||\_2^2 d\omega}{\int\_a^b ||\text{vec}(\hat{f}\_X(\omega))||\_2^2 d\omega}\right)^{-1}.\tag{A1}$$
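Identity (A1) is the algebraic rewriting $\hat{r}\_{X,a,b} = (1 + B/A)^{-1}$, where $A$ is the squared spectral power in the band $(a, b)$ and $B$ the power in its complement $\Pi\_{(a,b)}$. A minimal numerical sketch of the FS-ratio and this identity (the bivariate VAR(1) model, Daniell-type smoother standing in for the paper's kernel-smoothed periodogram, and band limits are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate VAR(1) series X_t = A X_{t-1} + e_t (illustrative model).
T, d = 2048, 2
A = np.array([[0.5, 0.1], [0.0, -0.4]])
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.standard_normal(d)

# Smoothed periodogram estimate f_hat(omega_j) at Fourier frequencies in (0, pi).
J = np.fft.rfft(X, axis=0)[1:T // 2]                       # DFT, positive frequencies
omega = 2 * np.pi * np.arange(1, T // 2) / T               # Fourier frequencies
I = np.einsum('jr,js->jrs', J, J.conj()) / (2 * np.pi * T) # periodogram matrices
m = 15                                                     # Daniell half-width
kernel = np.ones(2 * m + 1) / (2 * m + 1)
f_hat = np.stack([np.convolve(I[:, r, s], kernel, mode='same')
                  for r in range(d) for s in range(d)], axis=1).reshape(-1, d, d)

# FS-ratio over a band (a, b): proportion of total squared spectral power.
power = np.sum(np.abs(f_hat) ** 2, axis=(1, 2))  # ||vec(f_hat(omega))||_2^2
band = (omega >= 0.5) & (omega <= 1.5)
r_band = power[band].sum() / power.sum()

# Identity (A1): r = (1 + complement_power / band_power)^{-1}.
ratio = power[~band].sum() / power[band].sum()
assert np.isclose(r_band, 1.0 / (1.0 + ratio))
print(round(r_band, 4))
```

The assertion holds by pure algebra, which is the content of (A1); the rewriting is useful because it reduces consistency of $\hat{r}\_{X,a,b}$ to joint consistency of the two power integrals.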

A sufficient condition for the joint consistency of $(\hat{r}\_{X,a,b}, \hat{r}\_{X,\Pi\_{(a,b)}})$ is that $\mathrm{cov}(\hat{r}\_{X,a,b}, \hat{r}\_{X,\Pi\_{(a,b)}}) \to 0$. Following the proof of Theorem 1 (a), we have $\mathrm{cov}(\hat{r}\_{X,a,b}, \hat{r}\_{X,\Pi\_{(a,b)}}) = C\_1 - C\_2$, where

$$
C\_1 = \int\_{\Pi\_{(a,b)}} \int\_a^b \frac{1}{T^4} \sum\_{j\_1,j\_2,j\_3,j\_4} K\_h(\omega-\omega\_{j\_1})K\_h(\omega-\omega\_{j\_2})K\_h(\lambda-\omega\_{j\_3})K\_h(\lambda-\omega\_{j\_4}) \times \sum\_{r,s,t,u=1}^{d\_i} E\big(I\_{X,T,rs}(\omega\_{j\_1})I\_{X,T,rs}(\omega\_{j\_2})I\_{X,T,tu}(\omega\_{j\_3})I\_{X,T,tu}(\omega\_{j\_4})\big) \, d\omega \, d\lambda
$$

and

$$
C\_2 = \int\_{\Pi\_{(a,b)}} \int\_a^b \frac{1}{T^4} \sum\_{j\_1,j\_2,j\_3,j\_4} K\_h(\omega-\omega\_{j\_1})K\_h(\omega-\omega\_{j\_2})K\_h(\lambda-\omega\_{j\_3})K\_h(\lambda-\omega\_{j\_4}) \times \sum\_{r,s,t,u=1}^{d\_i} E\big(I\_{X,T,rs}(\omega\_{j\_1})I\_{X,T,rs}(\omega\_{j\_2})\big)\, E\big(I\_{X,T,tu}(\omega\_{j\_3})I\_{X,T,tu}(\omega\_{j\_4})\big) \, d\omega \, d\lambda.
$$

As in the proof of Theorem 1 (a), it can be seen that, for the various cases, the covariance terms are of order $O\big(\frac{1}{T^{\delta\_1} h^{\delta\_2}}\big)$ with $\delta\_1, \delta\_2 \in \{0, 1, 2, 3\}$ and $\delta\_1 > \delta\_2$; since $h \to 0$ and $Th \to \infty$, every such term vanishes, and hence $\mathrm{cov}(\hat{r}\_{X,a,b}, \hat{r}\_{X,\Pi\_{(a,b)}}) \to 0$. This, together with Theorem 1 (a), implies

$$\left(\hat{r}\_{X,a,b},\ \hat{r}\_{X,\Pi\_{(a,b)}}\right)^{\top} \stackrel{P}{\rightarrow} \left(r\_{X,a,b},\ r\_{X,\Pi\_{(a,b)}}\right)^{\top}.$$

Finally, an application of the continuous mapping theorem yields the result.
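The consistency conclusion can be illustrated numerically: for white noise the spectrum is flat, so the population FS-ratio over a band $(a, b) \subset (0, \pi)$ is simply $(b-a)/\pi$, and the estimate should approach this value as $T$ grows. A hedged sketch (univariate case with a Daniell-type smoother in place of the paper's kernel estimator; all settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def fs_ratio(x, a, b, m=20):
    # FS-ratio of a univariate series from a Daniell-smoothed periodogram
    T = len(x)
    J = np.fft.rfft(x)[1:T // 2]
    omega = 2 * np.pi * np.arange(1, T // 2) / T
    I = np.abs(J) ** 2 / (2 * np.pi * T)                 # periodogram
    f_hat = np.convolve(I, np.ones(2 * m + 1) / (2 * m + 1), mode='same')
    power = f_hat ** 2                                   # squared spectral power
    band = (omega >= a) & (omega <= b)
    return power[band].sum() / power.sum()

# White noise has a flat spectrum, so the true FS-ratio is (b - a) / pi.
a, b = 0.5, 1.5
est = [fs_ratio(rng.standard_normal(T), a, b) for T in (512, 4096, 32768)]
print([round(e, 3) for e in est], round((b - a) / np.pi, 3))
```

As $T$ increases the estimates concentrate around $(b-a)/\pi \approx 0.318$, consistent with the convergence in probability established above.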

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
