Article

Methods for Brain Connectivity Analysis with Applications to Rat Local Field Potential Recordings

1 Statistics Program, King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
2 Department of Mathematics, Ateneo de Manila University, Quezon City 1108, Philippines
3 Department of Neurobiology and Behavior, University of California, Irvine, CA 92697, USA
* Author to whom correspondence should be addressed.
Entropy 2025, 27(4), 328; https://doi.org/10.3390/e27040328
Submission received: 27 January 2025 / Revised: 10 March 2025 / Accepted: 17 March 2025 / Published: 21 March 2025

Abstract:
Modeling the brain dependence network is central to understanding underlying neural mechanisms such as perception, action, and memory. In this study, we present a broad range of statistical methods for analyzing dependence in a brain network. Leveraging a combination of classical and cutting-edge approaches, we analyze multivariate hippocampal local field potential (LFP) time series data concentrating on the encoding of nonspatial olfactory information in rats. We present the strengths and limitations of each method in capturing neural dynamics and connectivity. Our analysis begins with exploratory techniques, including correlation, partial correlation, spectral matrices, and coherence, to establish foundational connectivity insights. We then investigate advanced methods such as Granger causality (GC), robust canonical coherence analysis, spectral transfer entropy (STE), and wavelet coherence to capture dynamic and nonlinear interactions. Additionally, we investigate the utility of topological data analysis (TDA) to extract multi-scale topological features and explore deep learning-based canonical correlation frameworks for connectivity modeling. This comprehensive approach offers an introduction to the state-of-the-art techniques for the analysis of dependence networks, emphasizing the unique strengths of various methodologies, addressing computational challenges, and paving the way for future research.

1. Introduction

Knowledge of brain structure, function, and the mechanisms of underlying neural processes has advanced significantly in recent decades [1,2]. These breakthroughs have been driven by the rapid development of brain imaging techniques, including functional magnetic resonance imaging (fMRI), electroencephalography (EEG), electrocorticography (ECoG), local field potentials (LFPs), and calcium imaging, among others [3]. Equally important are advances in statistical and computational methodologies, enabling the efficient estimation and robust analysis of the complex datasets generated by these imaging techniques [4,5].
Neuroscientists have dedicated immense efforts to understanding both localized brain region function and integration across brain regions during resting state and while responding to external stimuli. It is indeed paramount to estimate and analyze the connectivity patterns between signals, recorded at different brain regions, to uncover the neural mechanisms that underlie perception, action, and cognition [6]. These connectivity patterns not only reveal the strength of connectivity between different brain regions but can also provide information on how disruptions can lead to neurological disorders [7,8,9].
The hippocampus plays a pivotal role in memory formation, spatial navigation, and information processing [10,11,12]. Its importance as a hub for neural connectivity makes it an essential target for studying brain networks. To investigate hippocampal function, researchers often rely on animal models (e.g., macaques and rats) whose brain structures bear a strong resemblance to those of humans [13]. These models allow for precise and invasive experimental designs that are considered unethical in human studies.
This paper explores some advanced methods commonly used to analyze brain networks. We apply them to hippocampal LFP data from the CA1 region to investigate the encoding of nonspatial olfactory information in rats. Our aim is to introduce these methods to readers while highlighting the insights they can offer into neural connectivity. We will present and discuss the differences between classical methods and state-of-the-art approaches in modeling brain connectivity. Beyond showcasing the strengths, we also discuss their limitations and identify areas for potential future improvement. By presenting a wide set of techniques, we provide readers with tools to analyze different aspects of brain connectivity, offering diverse perspectives and insights into complex neural systems.
The structure of this paper is organized as follows: Section 2 introduces the dataset and provides formal definitions of foundational concepts such as correlation, partial correlation, and coherence, which are essential for understanding basic connectivity patterns. Section 3 details robust canonical coherence, a method for assessing more complex interdependencies. Section 4 presents a hybrid approach combining Spectral Dynamic Principal Component Analysis (sDPCA) with Granger causality (GC) to analyze directional influences among specific channels and mitigate confounding effects from the broader network. Section 5 explores spectral transfer entropy (STE), an information-theoretic method that examines frequency-specific influence and information flow in the brain. Section 6 discusses wavelet coherence, which captures dynamic and nonlinear interactions between brain regions. Section 7 introduces Persistence Homology (PH), a topological method that avoids the need for thresholding in weighted networks and extracts multi-scale connectivity patterns, thus detecting higher-order interactions. Section 8 summarizes and discusses the strengths and limitations of the methods presented, emphasizing their potential for enhancing brain connectivity analysis and identifying promising directions for future research. Finally, Section 9 offers concluding remarks.

2. Exploratory Data Analysis

In this section, we turn our attention to the experimental data that underpin our analyses. The dataset consists of local field potential (LFP) recordings from the CA1 region of the hippocampus in rats performing a nonspatial sequence memory task, a paradigm chosen for its strong behavioral parallels between rats and humans. As detailed in Allen et al. [14], LFP signals were recorded from five male Long–Evans rats. The animals were individually housed, with water access controlled during weekdays (serving as a reward in odor memory tasks).
The experimental protocol involved a nonspatial sequence memory task, in which rats were required to memorize and recognize a fixed sequence of five odors: lemon (A), anise (B), rum (C), vanilla (D), and banana (E). Rats underwent an incremental training protocol over 6–8 weeks. Initially, naive rats were trained to nosepoke and maintain their nose in the odor port for a water reward. The required nosepoke duration was gradually increased from 50 ms in 15 ms steps until reaching 1.2 s, with a criterion of 80% correct responses over three consecutive sessions (100–200 nosepokes per session). Subsequently, the animals were habituated to odor presentations, first with a single odor (Odor A) and then with a two-odor sequence (Odors A and B), both requiring a 1.2 s nosepoke for a reward. Once performance was stabilized, the rats were trained to discriminate between in-sequence and out-of-sequence presentations, starting with a two-item sequence (e.g., “AB” for in-sequence versus “AA” for out-of-sequence) and progressing to sequences of three, four, and finally five odors. After achieving criterion performance on the five-item sequence, the rats underwent microdrive implantation surgery for subsequent electrophysiological recordings. The odors were delivered through a single odor port as described in Figure 1, with each session featuring odors presented either in the correct sequential order or with at least one item out-of-sequence.
In this study, the dataset is organized into four-second trials, with odor presentation occurring at the midpoint (two seconds) and initiated by a nosepoke. LFP signals were recorded at a sampling rate of 1000 Hz from five rats using a microdrive equipped with approximately 20–22 tetrodes per rat, each positioned in either the proximal or distal region of the CA1 layer. On average, each rat completed between 170 and 300 trials (approximately 170–260 in-sequence and 20–45 out-of-sequence trials). Notably, the ‘Barat’ rat demonstrated the highest accuracy in recognizing in-sequence odors, while ‘Superchris’ excelled in identifying out-of-sequence presentations. Detailed variations in the number of tetrodes and trials across subjects are provided in Table 1.

2.1. Classical Dependence Measures

For the remainder of this manuscript, we adopt the following unified notation to facilitate the presentation of analytical methods. Consider LFP signals measured over time from $P$ tetrodes. Let $X_{p,t}$ represent the LFP signal recorded from the $p$-th tetrode at time $t$, where $p = 1, \ldots, P$. For multiple trials of the same experiment, we use the superscript notation $X_{p,t}^{(r)}$ to denote the $r$-th trial. For clarity, we refer to tetrodes $1, 2, \ldots, P$ as $T_1, T_2, \ldots, T_P$, respectively.
Functional connectivity (FC) refers to the statistical association between neurophysiological events measured across various scales, microscale (individual neurons), mesoscale (neuronal populations), and macroscale (brain regions) [15]. In the context of brain connectivity analysis, FC is almost always measured using Pearson correlations [16]. In this study, we focus on LFP signals, which serve as mesoscale measurements that capture the collective activity of multiple neurons. Specifically, we investigate FC between tetrodes using correlation and partial correlation as defined below.
Consider the LFP signals recorded during the $r$-th trial from two tetrodes $p$ and $q$, denoted by $\{X_{p,t}^{(r)}\}$ and $\{X_{q,t}^{(r)}\}$. For this section, we assume the LFPs during trial $r$, $\{X_{p,t}^{(r)}\}$ and $\{X_{q,t}^{(r)}\}$, to be zero-mean second-order stationary time series (see [17] for a definition). Then, the cross-covariance between $X_{p,t}^{(r)}$ and $X_{q,t}^{(r)}$ at time delay $k$ is written as
$$\sigma_{pq}^{(r)}(k) = \mathrm{Cov}\big(X_{p,t}^{(r)},\, X_{q,t+k}^{(r)}\big) = \mathbb{E}\big[X_{p,t}^{(r)}\, X_{q,t+k}^{(r)}\big].$$
Given P tetrodes in the system, all pairwise covariances can be compactly written as a P × P cross-covariance matrix at lag k, denoted as Σ ( r ) ( k ) , i.e.,
$$\Sigma^{(r)}(k) = \begin{pmatrix}
\sigma_{11}^{(r)}(k) & \sigma_{12}^{(r)}(k) & \cdots & \sigma_{1P}^{(r)}(k) \\
\sigma_{21}^{(r)}(k) & \sigma_{22}^{(r)}(k) & \cdots & \sigma_{2P}^{(r)}(k) \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{P1}^{(r)}(k) & \sigma_{P2}^{(r)}(k) & \cdots & \sigma_{PP}^{(r)}(k)
\end{pmatrix}.$$
Although cross-covariance quantifies the dependence between two time series, it is often difficult to interpret because its magnitude depends on the scale (level of variability) of the data. Thus, it is more common to use its scaled version, called cross-correlation or simply correlation, which takes values in the interval [ 1 , 1 ] . This is more useful especially when comparing strengths of connectivity across different tetrode pairs. More precisely, the correlation between X p , t ( r ) and X q , t + k ( r ) is defined as
$$\rho_{pq}^{(r)}(k) = \frac{\mathrm{Cov}\big(X_{p,t}^{(r)},\, X_{q,t+k}^{(r)}\big)}{\sqrt{\mathrm{Var}\big(X_{p,t}^{(r)}\big)\,\mathrm{Var}\big(X_{q,t}^{(r)}\big)}} = \frac{\sigma_{pq}^{(r)}(k)}{\sqrt{\sigma_{pp}^{(r)}(0)\,\sigma_{qq}^{(r)}(0)}}.$$
The correlation index ρ p q ( r ) ( k ) measures the linear association between the signals X p , t ( r ) and X q , t + k ( r ) . Moreover, when the LFPs have a normal distribution, ρ p q ( r ) ( k ) = 0 implies unconditional independence between them. However, one limitation is that it may include confounding variables that influence the interaction between X p , t ( r ) and X q , t ( r ) , e.g., another signal from the same system, say X v , t ( r ) . Thus, an alternative approach is to quantify the direct dependence between a pair of signals after taking into account the contributions of other components in the brain network. This is offered by the partial correlation measure, which we define below.
Define $\boldsymbol{v}$ to be the set of $P-2$ tetrodes excluding the $p$-th and $q$-th tetrodes, and $X_{\boldsymbol{v},t}^{(r)}$ to be the multivariate time series recorded during the $r$-th trial from all tetrodes in $\boldsymbol{v}$. Note that $X_{p,t}^{(r)}$ and $X_{q,t}^{(r)}$ are excluded from $X_{\boldsymbol{v},t}^{(r)}$. Consider the variance–covariance matrix at lag 0, which can be derived from Equation (2), and denote it by $\Sigma^{(r)} = \Sigma^{(r)}(0)$. The precision matrix, denoted by $\Theta^{(r)}$, is the inverse of $\Sigma^{(r)}$, i.e.,
$$\Theta^{(r)} = \big(\Sigma^{(r)}\big)^{-1} = \begin{pmatrix}
\Theta_{11}^{(r)} & \Theta_{12}^{(r)} & \cdots & \Theta_{1P}^{(r)} \\
\Theta_{21}^{(r)} & \Theta_{22}^{(r)} & \cdots & \Theta_{2P}^{(r)} \\
\vdots & \vdots & \ddots & \vdots \\
\Theta_{P1}^{(r)} & \Theta_{P2}^{(r)} & \cdots & \Theta_{PP}^{(r)}
\end{pmatrix}.$$
Then, the partial correlation between two tetrodes X p , t and X q , t , after removing the linear contributions of the remaining tetrodes in the system X v , t , is defined to be
$$\rho_{pq \mid \boldsymbol{v}}^{(r)} = \frac{-\,\Theta_{pq}^{(r)}}{\sqrt{\Theta_{pp}^{(r)}\,\Theta_{qq}^{(r)}}}.$$
The quantity Θ p q ( r ) represents the element in the p-th row, q-th column of the precision matrix, which in practice, may be obtained as the inverse of an estimated variance–covariance matrix. A caveat, however, is that the covariance matrix should be positive definite (and hence non-singular) for the precision matrix to exist. In cases of perfect collinearity between at least one pair of signals, the covariance matrix is singular, preventing the signals from being de-confounded.
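As an illustration, partial correlation can be computed from the inverse of an estimated covariance matrix. The sketch below uses simulated data standing in for LFP channels (the chain-dependence structure and all parameters are hypothetical, not taken from the rat recordings); it shows how conditioning removes the indirect dependence that ordinary correlation retains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: T time points from P channels (hypothetical toy example).
T, P = 1000, 5
X = rng.standard_normal((T, P))
X[:, 1] += 0.8 * X[:, 0]   # channel 2 driven by channel 1
X[:, 2] += 0.8 * X[:, 1]   # channel 3 driven by channel 2, so 1 -> 3 is indirect

# Lag-0 covariance and its inverse (the precision matrix Theta).
Sigma = np.cov(X, rowvar=False)
Theta = np.linalg.inv(Sigma)

# Partial correlation: rho_{pq|v} = -Theta_pq / sqrt(Theta_pp * Theta_qq).
d = np.sqrt(np.diag(Theta))
partial_corr = -Theta / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Ordinary (marginal) correlation, for comparison.
corr = np.corrcoef(X, rowvar=False)
```

On such data, the marginal correlation between channels 1 and 3 is substantial, while the partial correlation is near zero once channel 2 is conditioned on, illustrating the de-confounding described above.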

2.1.1. Permutation Test

For a given odor, consider the correlation between the LFP signals recorded during the $r$-th trial from tetrodes $p$ and $q$, and denote it by $\rho_{pq}^{(r)}$. Here, we compare the two groups of trials (in-sequence vs. out-of-sequence). Denote by $\{\rho_{pq}^{(1)}, \rho_{pq}^{(2)}\}$ and $\{\nu_{pq}^{(1)}, \nu_{pq}^{(2)}\}$ the true means and variances, respectively, of the correlations across the entire distribution of all possible realizations of in-sequence and out-of-sequence trials. Our goal is to determine whether there is a difference in the mean correlations between in-sequence and out-of-sequence states, e.g., from $n_1$ in-sequence correct trials and $n_2$ out-of-sequence correct trials with zero time delay ($k = 0$). Hence, for a given $\{p,q\}$-tetrode pair, we wish to test the following hypothesis:
$$H_0: \rho_{pq}^{(1)} = \rho_{pq}^{(2)} \quad \text{vs.} \quad H_a: \rho_{pq}^{(1)} \neq \rho_{pq}^{(2)}.$$
We consider the test statistic
$$T_{\text{stat}} = \frac{\hat{\rho}_{pq}^{(1)} - \hat{\rho}_{pq}^{(2)}}{\sqrt{\dfrac{\hat{\nu}_{pq}^{(1)}}{n_1} + \dfrac{\hat{\nu}_{pq}^{(2)}}{n_2}}},$$
where $\{\hat{\rho}_{pq}^{(1)}, \hat{\rho}_{pq}^{(2)}\}$ and $\{\hat{\nu}_{pq}^{(1)}, \hat{\nu}_{pq}^{(2)}\}$ are the group sample averages and group sample variances, respectively, of the correlations. Given the observed data, let $t_{\text{obs}}$ denote the calculated value of the test statistic $T_{\text{stat}}$. As a decision rule, we reject the null hypothesis $H_0$ if the $p$-value, i.e., $p = \Pr(|T_{\text{stat}}| \ge |t_{\text{obs}}| \mid H_0 \text{ is true})$, is less than the significance level $\alpha$.
One approach is to empirically derive the unknown distribution of $T_{\text{stat}}$ under the null and thus estimate the $p$-value through a permutation testing scheme. Under the null hypothesis, the correlations $\rho_{pq}^{(r)}$ from all correct trials, whether in-sequence or out-of-sequence, come from the same distribution. This assumption allows the correlations to be relabeled as in-sequence or out-of-sequence; each relabeling corresponds to one permutation. In an iterative manner, many permutations of the labels are generated, and for each permutation, the value of $T_{\text{stat}}$ is recomputed. The collection of these permuted values forms the empirical null distribution of $T_{\text{stat}}$, from which we obtain the $p$-value for the two-sample test.
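The permutation scheme above can be sketched as follows; the per-trial correlation values and group sizes below are simulated placeholders, not the actual Superchris data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-trial correlations for one tetrode pair:
# n1 in-sequence trials and n2 out-of-sequence trials (simulated).
rho_in = rng.normal(0.55, 0.10, size=40)
rho_out = rng.normal(0.40, 0.10, size=25)

def t_stat(a, b):
    """The statistic T_stat defined above (difference of means over pooled SE)."""
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

t_obs = t_stat(rho_in, rho_out)

# Permutation null: repeatedly relabel trials as in/out-of-sequence.
pooled = np.concatenate([rho_in, rho_out])
n1 = len(rho_in)
perm_t = np.empty(5000)
for i in range(5000):
    perm = rng.permutation(pooled)
    perm_t[i] = t_stat(perm[:n1], perm[n1:])

# Two-sided p-value from the empirical null distribution.
p_value = np.mean(np.abs(perm_t) >= np.abs(t_obs))
```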

2.1.2. Correlation Analysis of the LFP Dataset

We now implement the permutation test (discussed above) on the correlations of LFP signals from the rat named “Superchris” for in-sequence and out-of-sequence trials across all combinations of odors and pairs of tetrodes (Figure 2). In particular, we examine trials where the odor presented is vanilla. For recordings from T11 and T21, there is a significant difference between the mean correlations of the in-sequence trials and the out-of-sequence trials at $\alpha = 0.05$, with $t_{\text{obs}} \approx 2.3148$ and $p \approx 0.0342$.
The same analysis can be performed for partial correlation (Figure 3); accounting for confounding variables results in sparser correlation matrices. For rum and the tetrode pair T5–T20, the mean partial correlations of in-sequence and out-of-sequence trials differ significantly ($t_{\text{obs}} \approx 6.8059$; $p \approx 0.0007$; $\alpha = 0.05$), revealing a change in interaction depending on the accuracy of the odor sequence.

2.2. Spectral Dependence Measures

Correlation and partial correlation are simple yet effective measures for capturing the linear dependence between signals in the time domain. In contrast, assessing synchronization in the frequency domain provides a more detailed understanding of the oscillatory dynamics that drive neural interactions. Coherence analysis, the frequency-domain counterpart of correlation, has proven highly effective in evaluating brain connectivity by yielding results that are directly interpretable in terms of frequency components [18].
Let $\mathbf{X}_t = [X_{1,t}^{(r)}, \ldots, X_{P,t}^{(r)}]^\top$ be a $P$-dimensional second-order stationary time series, meaning that its mean vector,
$$\mathbb{E}[\mathbf{X}_t] = \boldsymbol{\mu}_X,$$
remains constant over time, and its covariance matrix,
$$\mathrm{Cov}(\mathbf{X}_t, \mathbf{X}_{t+k}) = \Sigma(k),$$
depends only on the lag $k$ rather than the specific time index $t$. In addition, we assume that the elements of $\Sigma(k)$ are absolutely summable:
$$\sum_{k=-\infty}^{\infty} |\sigma_{pq}(k)| < \infty \quad \text{for all } p, q.$$
These conditions ensure the existence of a well-defined spectral matrix for X t .
The essence of the spectral analysis is to represent brain signals as a superposition of oscillatory components across various frequency bands. This is achieved by decomposing the signal into complex exponentials, where
$$e^{i 2\pi \omega t} = \cos(2\pi \omega t) + i \sin(2\pi \omega t)$$
serves as the fundamental building block. This idea is formalized in the Cramér representation:
$$X_t = \int_{-1/2}^{1/2} e^{i 2\pi \omega t}\, dZ(\omega),$$
with dZ ( ω ) denoting a zero-mean, orthogonal increment process. This representation holds under the aforementioned stationarity conditions.
The spectrum of an individual component X p , t is defined as the Fourier transform of its autocovariance function:
$$S_{pp}^{(r)}(\omega) = \sum_{k=-\infty}^{+\infty} \sigma_{pp}^{(r)}(k)\, e^{-i 2\pi \omega k},$$
and the cross-spectrum between components X p , t and X q , t is similarly given by the Fourier transform of their cross-covariance function:
$$S_{pq}^{(r)}(\omega) = \sum_{k=-\infty}^{+\infty} \sigma_{pq}^{(r)}(k)\, e^{-i 2\pi \omega k}.$$
Collecting all auto- and cross-spectral quantities yields the spectral matrix
$$S^{(r)}(\omega) = \begin{pmatrix}
S_{11}^{(r)}(\omega) & S_{12}^{(r)}(\omega) & \cdots & S_{1P}^{(r)}(\omega) \\
S_{21}^{(r)}(\omega) & S_{22}^{(r)}(\omega) & \cdots & S_{2P}^{(r)}(\omega) \\
\vdots & \vdots & \ddots & \vdots \\
S_{P1}^{(r)}(\omega) & S_{P2}^{(r)}(\omega) & \cdots & S_{PP}^{(r)}(\omega)
\end{pmatrix}.$$
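In practice, the spectral matrix can be estimated by assembling Welch cross-spectral density estimates for every channel pair. The sketch below uses simulated data; the channel count, dependence structure, and window length are illustrative assumptions, not properties of the LFP dataset.

```python
import numpy as np
from scipy.signal import csd

fs = 1000  # sampling rate (Hz), matching the LFP recordings
rng = np.random.default_rng(4)

# Simulated multichannel signal: channel 2 depends on channel 1 (hypothetical).
T, P = 4000, 3
X = rng.standard_normal((T, P))
X[:, 1] += 0.7 * X[:, 0]

# Estimate the P x P spectral matrix S(omega) on a frequency grid,
# entry by entry, via Welch cross-spectral density estimates.
nperseg = 512
freqs, _ = csd(X[:, 0], X[:, 0], fs=fs, nperseg=nperseg)
S = np.empty((freqs.size, P, P), dtype=complex)
for p in range(P):
    for q in range(P):
        _, S[:, p, q] = csd(X[:, p], X[:, q], fs=fs, nperseg=nperseg)
```

At each frequency, the estimated matrix is Hermitian with real, positive auto-spectra on the diagonal, as the definition requires.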
The spectral matrix can provide insight into the spectral power distribution of the signals. However, by examining dominant specific frequency bands that are present in the signals, one can develop a better understanding of the brain connectivity and mental state of a subject. Studies have demonstrated that the five traditional frequency bands are associated with cognitive states [19] and can also be used as potential biomarkers for neurological diseases (e.g., autism, and attention deficit-hyperactivity disorder (ADHD)) [20]. These frequency bands, which are defined below, can be adapted in the analysis of LFP signals.
The most common frequency bands of interest in EEG and LFP analysis are delta $\Omega_1 = (0.5, 4)$ Hz, theta $\Omega_2 = (4, 8)$ Hz, alpha $\Omega_3 = (8, 12)$ Hz, beta $\Omega_4 = (12, 30)$ Hz, and gamma $\Omega_5 = (30, 50)$ Hz. There is a one-to-one mapping between a frequency band in the generic interval $(0, 0.50)$ and a band in practical EEG/LFP units, defined as follows. Let $s$ be the sampling rate, and let $(\omega_L, \omega_H) \subset (0, 0.50)$ be the generic band of interest. This corresponds to $\Omega = (s\,\omega_L,\ s\,\omega_H)$. Using these bands of interest, we can decompose the observed LFP as
$$X_t = a_1 X_t^{\Omega_1} + a_2 X_t^{\Omega_2} + a_3 X_t^{\Omega_3} + a_4 X_t^{\Omega_4} + a_5 X_t^{\Omega_5},$$
where $a_j$, $j = 1, \ldots, 5$, are weights associated with the contribution of each frequency band to the signal. Additionally, $X_t^{\Omega_j}$ can be derived from the observed signal via linear filtering (e.g., with a Butterworth filter):
$$X_t^{\Omega_j} = \sum_{k=-\infty}^{\infty} h(k)\, X_{t-k},$$
where the filter $\{h(k)\}$ is selected so that the power of $X_t^{\Omega_j}$ is concentrated in the frequency band $\Omega_j$. The filtered decomposition for a sample LFP signal from a trial is shown in Figure 4, in which the left panel shows the spectral power and the right panel shows the decomposed signal for each band.
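A band decomposition of this kind can be sketched with zero-phase Butterworth filters via `scipy.signal`. The toy signal below, with planted theta and gamma oscillations, is hypothetical; the band edges follow the definitions given above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000                    # sampling rate of the LFP recordings (Hz)
t = np.arange(0, 4, 1 / fs)  # one 4-second trial

rng = np.random.default_rng(2)
# Toy signal: a theta (6 Hz) and a gamma (40 Hz) component plus noise (hypothetical).
x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t) \
    + 0.3 * rng.standard_normal(t.size)

def band_filter(x, band, fs, order=4):
    """Zero-phase Butterworth band-pass filter isolating X_t^(Omega_j)."""
    sos = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)],
                 btype="band", output="sos")
    return sosfiltfilt(sos, x)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 50)}
components = {name: band_filter(x, b, fs) for name, b in bands.items()}
```

Most of the planted 6 Hz power lands in the theta component and the 40 Hz power in the gamma component, while the remaining bands carry only broadband noise.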
Dependence between tetrodes can be characterized via coherence, which is a frequency-domain measure of the linear correlation between two signals within the same frequency band $\Omega_j$. For tetrodes $p$ and $q$, the coherence at frequency $\omega$ is defined as
$$C_{pq}^{\Omega_j}(\omega) = \frac{\big|S_{pq}^{(\Omega_j)}(\omega)\big|^2}{S_{pp}^{(\Omega_j)}(\omega)\, S_{qq}^{(\Omega_j)}(\omega)}, \quad j = 1, \ldots, 5,$$
where S p p ( Ω j ) and S p q ( Ω j ) are the auto-spectrum and cross-spectrum at band Ω j . The values of coherence lie between 0 and 1, with 0 indicating that there is no linear correlation at that frequency and 1 indicating a perfect linear relationship at that frequency.
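A hedged sketch of band-averaged coherence estimation follows, using two simulated signals that share a gamma-band oscillation; the signals and noise levels are illustrative assumptions, not the LFP data.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000
rng = np.random.default_rng(3)
t = np.arange(0, 4, 1 / fs)

# Two toy "tetrode" signals sharing a 40 Hz (gamma) oscillation (hypothetical).
shared = np.sin(2 * np.pi * 40 * t)
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence via Welch's averaged-periodogram method.
freqs, Cxy = coherence(x, y, fs=fs, nperseg=512)

# Average coherence within the gamma band (30-50 Hz) vs. the beta band (12-30 Hz).
gamma_coh = Cxy[(freqs >= 30) & (freqs <= 50)].mean()
beta_coh = Cxy[(freqs >= 12) & (freqs < 30)].mean()
```

Because the shared oscillation lies in the gamma band, the band-averaged coherence there exceeds that of the beta band, mirroring the band-specific contrasts discussed for Figure 5.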
It is of interest to see that the resulting coherence is also clustered within the same tetrodes as in Figure 2. Nonetheless, Figure 5 shows that there is a difference in the intensity of synchronization between tetrode signals at the alpha and gamma frequency bands. It appears that even though the alpha band has lower within-cluster coherence, it shows overall higher coherence in many tetrodes further from the diagonal. On the other hand, the gamma frequency band shows clearer coherence intensities; the clusters near the diagonal have high coherence, and clusters far from the diagonal have low coherence.
Coherence analysis can be a useful tool for examining brain connectivity. However, it assumes stationarity of the time series, which may hold over short time frames but often fails over longer ones, as many real-world signals are non-stationary. Additionally, frequency-domain coherence does not indicate when in time the signal pairs are coherent, and estimating the spectral and cross-spectral power can require averaging over time windows (trials). Consequently, coherence captures only a global frequency relationship and is sensitive to noise, which can inflate or deflate the estimated coherence between signals.
Nonetheless, in Section 6, wavelet analysis, which does not require stationarity for the time series, is discussed as another spectral domain analysis method. The wavelets are less sensitive to noise and can capture multi-scale relationships. Additionally, looking at pairwise analysis can result in redundancies in real-life applications. Therefore, in Section 3, coherence is used to obtain connectivity measures between a cluster of tetrodes from major brain regions.

3. Robust Canonical Coherence

A common approach in network analysis focuses on evaluating pairwise connectivity between channels. However, in many real-world scenarios, assessing dependencies between entire regions rather than individual channels often provides deeper insights. To illustrate this, we plot the LFP signals from 20 tetrodes of Buchanan in Figure 6. The left panel shows the location of these tetrodes that are grouped according to their spatial orientation. The middle panel demonstrates that signals within each region or section exhibit a higher degree of synchrony. We then combine the signals within each region (the right panel). A natural approach to summarizing the dependency between the regions is to consider the correlation between linear combinations of the signals. The method of obtaining the optimal linear combination that maximizes this correlation is referred to as the canonical coherence analysis. We begin with formally defining the canonical coherence.

3.1. Canonical Coherence

Let $\{Z_t\}_{t=1}^T$ be a $d$-dimensional weakly stationary time series, where $Z_t = [X_t^\top, Y_t^\top]^\top$ for $t = 1, \ldots, T$, with $d = P + Q$. Recall the definition of the spectral density matrix (SDM) in Equation (6) and note that the SDM of $Z_t$ can be expressed as
$$S_{ZZ}(\omega) = \begin{pmatrix} S_{XX}(\omega) & S_{XY}(\omega) \\ S_{YX}(\omega) & S_{YY}(\omega) \end{pmatrix},$$
where S X X ( ω ) and S Y Y ( ω ) are the auto-SDM, while S X Y ( ω ) and S Y X ( ω ) are the cross-SDM of X t and Y t such that S X Y ( ω ) = S Y X H ( ω ) , where H denotes the conjugate-transpose operator. Recall the Cramér representation in Equation (3). Canonical coherence analysis at ω -oscillation finds vectors a ω C P and b ω C Q that maximize the coherence (see Equation (8)) between a ω H X ( d ω ) and b ω H Y ( d ω ) . The canonical coherence ϕ ( ω ) as defined in [21] is given by
$$\phi(\omega) = \max_{a_\omega, b_\omega} \big|a_\omega^H S_{XY}(\omega)\, b_\omega\big|^2 \quad \text{such that} \quad a_\omega^H S_{XX}(\omega)\, a_\omega = b_\omega^H S_{YY}(\omega)\, b_\omega = 1.$$
This approach, however, is limited to capturing the linear dependence between X t and Y t at a singleton frequency ω . One approach to mitigate this limitation is to consider the canonical band-coherence (CBC) (see [22]) that is defined using band-specific filtered signals introduced in Section 2.2. In the following subsection, we first introduce CBC and then present a robust procedure for estimating CBC for a given frequency band Ω .

3.2. Canonical Band Coherence

First, we recall the definition of a filtered series as given in Equation (7). Let $Z_t^\Omega = [Z_{1,t}^\Omega, \ldots, Z_{d,t}^\Omega]^\top = [X_{1,t}^\Omega, \ldots, X_{P,t}^\Omega, Y_{1,t}^\Omega, \ldots, Y_{Q,t}^\Omega]^\top$ denote the filtered signals corresponding to the frequency band $\Omega$. Here, $Z_{j,t}^\Omega$ represents the $j$-th channel, with $j = 1, \ldots, d$ and $t = 1, \ldots, T$. Let $\Sigma_Z^\Omega(h)$ denote the covariance between the filtered signals $Z_{t-h}^\Omega$ and $Z_t^\Omega$, i.e., $\Sigma_Z^\Omega(h) = \mathrm{Cov}(Z_{t-h}^\Omega, Z_t^\Omega)$. Subsequently, we write $\Sigma_Z^\Omega(h)$ as
$$\Sigma_Z^\Omega(h) = \begin{pmatrix} \mathrm{Cov}(X_{t-h}^\Omega, X_t^\Omega) & \mathrm{Cov}(X_{t-h}^\Omega, Y_t^\Omega) \\ \mathrm{Cov}(Y_{t-h}^\Omega, X_t^\Omega) & \mathrm{Cov}(Y_{t-h}^\Omega, Y_t^\Omega) \end{pmatrix} = \begin{pmatrix} \Sigma_{XX}^\Omega(h) & \Sigma_{XY}^\Omega(h) \\ \Sigma_{YX}^\Omega(h) & \Sigma_{YY}^\Omega(h) \end{pmatrix}.$$
The authors in [22] defined the CBC between X t h and Y t for a frequency band Ω as the maximum squared correlation between their linear combinations:
$$\phi(\Omega) = \max_{u, v, h} \Big[\mathrm{Cor}\big(u^\top X_{t-h}^\Omega,\, v^\top Y_t^\Omega\big)\Big]^2 = \max_{u, v, h} \frac{\Big[\mathrm{Cov}\big(u^\top X_{t-h}^\Omega,\, v^\top Y_t^\Omega\big)\Big]^2}{\mathrm{Var}\big(u^\top X_{t-h}^\Omega\big)\, \mathrm{Var}\big(v^\top Y_t^\Omega\big)}.$$
The CBC denoted by ϕ ( Ω ) can be further expressed as
$$\phi(\Omega) = \max_{u, v, h} \big[u^\top \Sigma_{XY}^\Omega(h)\, v\big]^2 \quad \text{such that} \quad u^\top \Sigma_{XX}^\Omega(0)\, u = v^\top \Sigma_{YY}^\Omega(0)\, v = 1.$$
Let $u_\Omega$, $v_\Omega$ be the vectors and $h_\Omega$ be the lag that maximize Equation (10). The vectors $u_\Omega$ and $v_\Omega$ are called the canonical directions, and $h_\Omega$ is referred to as the canonical lag at frequency band $\Omega$. The magnitudes of the elements of the canonical directions measure each channel's contribution to the CBC between $X_{t-h_\Omega}^{\Omega}$ and $Y_t^{\Omega}$. Consider the following two matrices:
$$\Theta_1(\Sigma_Z^\Omega, h) = \{\Sigma_{XX}^\Omega(0)\}^{-\frac{1}{2}}\, \Sigma_{XY}^\Omega(h)\, \{\Sigma_{YY}^\Omega(0)\}^{-1}\, \Sigma_{YX}^\Omega(h)\, \{\Sigma_{XX}^\Omega(0)\}^{-\frac{1}{2}}, \quad \text{and} \quad \Theta_2(\Sigma_Z^\Omega, h) = \{\Sigma_{YY}^\Omega(0)\}^{-\frac{1}{2}}\, \Sigma_{YX}^\Omega(h)\, \{\Sigma_{XX}^\Omega(0)\}^{-1}\, \Sigma_{XY}^\Omega(h)\, \{\Sigma_{YY}^\Omega(0)\}^{-\frac{1}{2}}.$$
A simple calculation reveals that the solution to the maximization problem in (10) is given by the eigenvalue decomposition of the matrices Θ 1 ( Σ Z Ω , h ) and Θ 2 ( Σ Z Ω , h ) . In fact, the CBC ϕ ( Ω ) is the largest eigenvalue of Θ 1 ( Σ Z Ω , h ) and the canonical directions u Ω and v Ω are given by the leading eigenvectors of Θ 1 ( Σ Z Ω , h ) and Θ 2 ( Σ Z Ω , h ) , respectively (see [22]).
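The eigen-decomposition route to the CBC can be sketched as below for a fixed lag h. The data are simulated, and this plain-covariance version corresponds to the classical (non-robust) estimator rather than KenCoh.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy band-filtered signals: P "proximal" and Q "distal" channels sharing a
# common component (all hypothetical, standing in for filtered LFPs).
T, P, Q = 2000, 4, 3
common = rng.standard_normal(T)
X = 0.8 * common[:, None] + rng.standard_normal((T, P))
Y = 0.8 * common[:, None] + rng.standard_normal((T, Q))

def canonical_band_coherence(X, Y, h=0):
    """CBC at lag h: largest eigenvalue of Theta_1 built from sample covariances."""
    Xh = X[:-h] if h > 0 else X   # pairs X_{t-h} with Y_t
    Yh = Y[h:] if h > 0 else Y
    P_ = X.shape[1]
    C = np.cov(np.hstack([Xh, Yh]), rowvar=False)
    Sxx, Syy, Sxy = C[:P_, :P_], C[P_:, P_:], C[:P_, P_:]
    # Inverse square root of Sxx via its eigen-decomposition.
    w, V = np.linalg.eigh(Sxx)
    Sxx_inv_half = V @ np.diag(w ** -0.5) @ V.T
    Theta1 = Sxx_inv_half @ Sxy @ np.linalg.inv(Syy) @ Sxy.T @ Sxx_inv_half
    evals, evecs = np.linalg.eigh(Theta1)
    phi = evals[-1]                    # CBC: largest eigenvalue
    u = Sxx_inv_half @ evecs[:, -1]    # canonical direction for X (up to scaling)
    return phi, u

phi, u = canonical_band_coherence(X, Y)
```

In a full analysis, one would also maximize over the lag h and extract the leading eigenvector of the companion matrix Theta_2 for the Y-side direction, as stated above.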

3.3. KenCoh: A Robust Estimator of Canonical Band Coherence

Now, we present a robust estimation method for the CBC. First, note that an estimate $\hat{\phi}(\Omega)$ of the CBC is obtained by considering the eigenvalue decomposition of the matrices $\Theta_1(\hat{\Sigma}_Z^\Omega, h)$ and $\Theta_2(\hat{\Sigma}_Z^\Omega, h)$, where $\hat{\Sigma}_Z^\Omega(h)$ is estimated from the observed data. In a classical approach, the sample variance–covariance matrix is used to estimate the above matrices. However, this estimator is sensitive to the heavy-tailed properties of the stochastic process and breaks down in the presence of outliers. To circumvent this problem, robust estimators for the covariance matrix can be used, e.g., the minimum covariance determinant estimator [23] and the minimum volume ellipsoid estimator [24]. In this article, we present a robust estimator defined in [22] that is based on Kendall’s τ rank correlation coefficient for time series data (also see [25,26]).
We assume that the distribution of $Z_t^\Omega$ is elliptically symmetric with density generator $\psi: \mathbb{R}^+ \to \mathbb{R}^+$, location vector $\mu_Z^\Omega \in \mathbb{R}^d$, and $d \times d$ positive definite scale matrix $\Lambda_Z^\Omega$ for all $t \ge 1$ [27]. It follows from the properties of an elliptically symmetric distribution that $\Sigma_Z^\Omega(h) = -2\psi'(0)\, \Lambda_Z^\Omega(h)$, where $\Lambda_Z^\Omega(h)$ is the cross-scale matrix between $Z_{t-h}^\Omega$ and $Z_t^\Omega$, and $\psi'(0)$ is the first derivative of the density generator evaluated at zero. We consider $\Theta_i(\Lambda_Z^\Omega, h)$, $i = 1, 2$, defined analogously to $\Theta_i(\Sigma_Z^\Omega, h)$ in Equation (11). Observe that
$$\Theta_1(\Sigma_Z^\Omega, h) = \Theta_1(\Lambda_Z^\Omega, h), \quad \text{and} \quad \Theta_2(\Sigma_Z^\Omega, h) = \Theta_2(\Lambda_Z^\Omega, h).$$
We further assume that the diagonal elements of the matrix $\Lambda_Z^\Omega(0)$ are all 1, and that the off-diagonal elements of $\Lambda_Z^\Omega(h)$ all lie strictly between −1 and 1 for all $h \in \mathbb{Z}$. Motivated by Theorem 3.1 of [27], we estimate the $(j,k)$-th element of the $d \times d$ matrix $\Lambda_Z^\Omega(h)$ as follows:
$$\hat{\lambda}_{jk}^\Omega(h) = \sin\!\Big(\frac{\pi}{2}\, \hat{\tau}_{jk}^\Omega(h)\Big), \quad \text{where} \quad \hat{\tau}_{jk}^\Omega(h) = \binom{T}{2}^{-1} \sum_{1 \le t < s \le T} \mathrm{sign}\big\{\big(Z_{j,t-h}^\Omega - Z_{j,s-h}^\Omega\big)\big(Z_{k,t}^\Omega - Z_{k,s}^\Omega\big)\big\}.$$
Here, the statistic $\hat{\tau}_{jk}^\Omega(h)$ captures the monotone association between $Z_{j,t-h}^\Omega$ and $Z_{k,t}^\Omega$. Since it is based on the signs of the observed signals, it is robust to outliers present in the data. As a result, the estimator $\hat{\Lambda}_Z^\Omega(h) = \big(\hat{\lambda}_{jk}^\Omega(h)\big)$ and, subsequently, $\hat{\Theta}_1(\Lambda_Z^\Omega, h)$ and $\hat{\Theta}_2(\Lambda_Z^\Omega, h)$ are also free from the influence of outliers. Thus, the estimation method leads to a robust analysis of the canonical band coherence. Since the estimator is based on Kendall’s τ, we refer to this method as KenCoh. The performance of KenCoh is discussed in the following subsection.
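A minimal sketch of the Kendall-based entry-wise estimate follows, on simulated heavy-tailed signals with an injected gross outlier (all data hypothetical), contrasted with the outlier-sensitive Pearson estimate.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(6)

# Heavy-tailed toy signals: a common component plus Student-t noise, with a
# gross outlier injected into one channel (hypothetical data).
T = 500
common = rng.standard_normal(T)
z1 = common + rng.standard_t(df=3, size=T)
z2 = common + rng.standard_t(df=3, size=T)
z2[100] = 500.0  # injected outlier

def kendall_scale(a, b):
    """Robust scale-matrix entry: lambda_hat = sin(pi/2 * tau_hat)."""
    tau, _ = kendalltau(a, b)
    return np.sin(np.pi / 2 * tau)

lam_robust = kendall_scale(z1, z2)      # Kendall-based (KenCoh-style) estimate
r_pearson = np.corrcoef(z1, z2)[0, 1]   # classical estimate, outlier-sensitive
```

Since Kendall's τ depends only on the signs of pairwise differences, a single extreme value barely moves `lam_robust`, whereas the Pearson estimate is strongly distorted by the inflated sample variance.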
Several robust alternatives to the proposed estimator of CBC can be considered in practice. A straightforward approach is to use a robust estimator for the covariance matrix Σ Z Ω ( h ) , such as the minimum covariance determinant (MCD) estimator proposed by [24]. However, MCD has a high computational cost, growing polynomially as O ( T ν ) , where ν = d ( d + 3 ) / 2 [28]. Another approach to robustifying canonical coherence analysis is to replace Pearson’s correlation with a more robust measure of association, such as Spearman’s rank correlation between u X t h Ω and v Y t Ω . While finding the optimal canonical directions remains challenging in practice, a reasonable approximation can be achieved by restricting the search space to a finite set. However, estimating Spearman’s correlation has a computational complexity of O ( T 3 log T ) , which becomes impractical for large samples. In contrast, KenCoh has a significantly lower complexity of O ( T log T ) , making it a more scalable option for large datasets.

3.4. Application to LFP Data

We apply the robust methodology of Section 3.3 to the LFP data collected from Buchanan over 270 trials. Buchanan identified 232 of them correctly (203 in-sequence trials and 29 out-of-sequence trials). On average, this rat responded correctly more often when the odors were presented in-sequence (90% success rate) than out-of-sequence (66% success rate). Our goal is to study the spectral association between the proximal and distal regions (see Figure 6) and characterize the connectivity structure of the tetrodes in the respective regions for in-sequence and out-of-sequence trials. This involves estimating the canonical directions for five odors and two types of trials.

3.4.1. Test of Hypotheses

We consider the beta band ( Ω = 12–30 Hz) for the real data analysis. Recall that the beta band is known to dominate the signal during tasks requiring concentration [29]. Let u Ω ( g , s ) = ( u 1 , Ω ( g , s ) , … , u 11 , Ω ( g , s ) ) be the true canonical direction associated with the proximal region for odor g and trial type s, where g ∈ { A , B , C , D , E } and s ∈ { I , O } , with I standing for in-sequence and O for out-of-sequence trials. Similarly, let v Ω ( g , s ) = ( v 1 , Ω ( g , s ) , … , v 9 , Ω ( g , s ) ) denote the true canonical direction associated with the distal region. We are particularly interested in the configuration of channels that leads to maximum band coherence. To identify which channels’ contributions differ across odors and trial types, we test the hypotheses
$$H_{0,s}^{g,k}: \big\{u_\Omega^{(g,s)}, v_\Omega^{(g,s)}\big\} = \big\{u_\Omega^{(k,s)}, v_\Omega^{(k,s)}\big\} \quad \text{vs.} \quad H_{1,s}^{g,k}: \big\{u_\Omega^{(g,s)}, v_\Omega^{(g,s)}\big\} \neq \big\{u_\Omega^{(k,s)}, v_\Omega^{(k,s)}\big\}$$
for all g ≠ k ∈ { A , B , C , D , E } and s ∈ { I , O } .
Let { X t ( r ) } t = 1 T and { Y t ( r ) } t = 1 T denote the observed signals during the r-th trial corresponding to the proximal and distal tetrodes, respectively, with T = 1000 for all r ≥ 1. Let u ^ Ω ( g , s ) ( r ) and v ^ Ω ( g , s ) ( r ) be the estimated canonical directions associated with the proximal and distal regions, respectively, for trial r, odor g, and trial type s, where r = 1 , … , R g ( s ) . Here, we assume that the trials are independent and identically distributed. Therefore, u ^ Ω ( g , s ) ( r ) and v ^ Ω ( g , s ) ( r ) are independently and identically distributed random vectors for all r = 1 , … , R g ( s ) . We further consider the multivariate spatial medians [30] based on the random samples { u ^ Ω ( g , s ) ( r ) : r = 1 , … , R g ( s ) } and { v ^ Ω ( g , s ) ( r ) : r = 1 , … , R g ( s ) } as robust estimators of the canonical directions u Ω ( g , s ) and v Ω ( g , s ) . We define
$$\hat{U}_\Omega^{(g,s)} = \arg\min_{\mu \in \mathbb{R}_+^{P}} \sum_{r=1}^{R_g(s)} \left\| \hat{u}_\Omega^{(g,s)}(r) - \mu \right\|_{L_1} \quad \text{and} \quad \hat{V}_\Omega^{(g,s)} = \arg\min_{\mu \in \mathbb{R}_+^{Q}} \sum_{r=1}^{R_g(s)} \left\| \hat{v}_\Omega^{(g,s)}(r) - \mu \right\|_{L_1}.$$
Finally, the test-statistic T Ω ( g , k , s ) is defined as
$$T_\Omega(g,k,s) = \frac{\left\| \hat{U}_\Omega^{(g,s)} - \hat{U}_\Omega^{(k,s)} \right\|_{L_2}}{P} + \frac{\left\| \hat{V}_\Omega^{(g,s)} - \hat{V}_\Omega^{(k,s)} \right\|_{L_2}}{Q},$$
for g ≠ k ∈ { A , B , C , D , E } and s ∈ { I , O } . The null hypothesis H 0 , s g , k is rejected if the observed T Ω ( g , k , s ) is large. The cut-off is obtained by a permutation test in which the labels g and k are randomly permuted to derive the distribution of T Ω ( g , k , s ) under the null hypothesis. The p-values in this multiple-testing framework are adjusted using the Benjamini–Hochberg method [31]. We apply the Synthetic Minority Over-sampling Technique (SMOTE), introduced by Chawla et al. [32], to address data imbalance for pairs of odors g and k with highly imbalanced sample sizes.
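A minimal sketch of such a permutation test is given below. This is our own illustration on hypothetical trial-wise direction estimates, using only the proximal term of the statistic (the full statistic adds the analogous distal term and normalizes by the dimensions P and Q); the spatial median is computed by Weiszfeld iterations.

```python
import numpy as np

def spatial_median(X, n_iter=50, eps=1e-8):
    """Weiszfeld iterations for the multivariate spatial (L1) median."""
    mu = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - mu, axis=1), eps)
        mu_new = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(mu_new - mu) < eps:
            break
        mu = mu_new
    return mu

def stat(U_g, U_k):
    # one-region version of the test statistic: distance between spatial medians
    return np.linalg.norm(spatial_median(U_g) - spatial_median(U_k))

def perm_pvalue(U_g, U_k, n_perm=499, seed=1):
    rng = np.random.default_rng(seed)
    obs = stat(U_g, U_k)
    pooled = np.vstack([U_g, U_k])
    n_g = len(U_g)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))  # shuffle the odor labels
        hits += stat(pooled[idx[:n_g]], pooled[idx[n_g:]]) >= obs
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
U_A = rng.normal(0.0, 0.1, size=(30, 11))  # trial-wise proximal directions, odor A
U_B = rng.normal(0.3, 0.1, size=(30, 11))  # odor B: clearly shifted structure
p = perm_pvalue(U_A, U_B)                  # small p-value: structures differ
```

Across all odor pairs, the resulting p-values would then be passed to a Benjamini–Hochberg adjustment before declaring significance.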

3.4.2. Discussion

In Figure 7, we draw a heatmap of the estimates { | U ^ Ω ( g , s ) | , | V ^ Ω ( g , s ) | } for s ∈ { I , O } and g ∈ { A , B , C , D , E } . The top panel shows subtle changes in brain connectivity when odors are presented in sequence. In contrast, brain connectivity shows more pronounced changes across odors when they are presented out of sequence (bottom panel). For in-sequence trials, we also observe that common tetrodes (e.g., T22, T4, and T5) drive the connectivity between the proximal and distal regions for all odors. On the other hand, for out-of-sequence trials, the relative contribution of tetrodes varies across odors. For instance, the contributions of T12 and T17 to odors A and E, respectively, are substantially higher than their contributions to the other odors (see bottom panel, Figure 7). The significance of the changes in contribution is tested using the permutation test described in Section 3.4.1. The results of the tests are summarized in Figure 8.
We perform pairwise comparisons between odors separately for in-sequence and out-of-sequence trials. Figure 8 provides a graphical representation of our findings. The five odors are depicted as nodes, with edges connecting pairs of odors whose associated brain connectivity structures differ significantly. For instance, the brain connectivity structures associated with odors A and B are not significantly different during in-sequence trials, which explains the absence of an edge between them. However, these structures differ significantly when the odors are presented out of sequence. In contrast, we observe a significant difference in the connectivity structures for odors A and C in both in-sequence and out-of-sequence trials.
Figure 8 reveals that, for in-sequence trials, the odors presented first and last (A and E) play a significant role, as their associated brain connectivity structures show notable differences. In contrast, no significant changes in brain connectivity structures are observed for odors presented in the middle of the in-sequence trials. However, the pattern shifts in out-of-sequence trials, where all odors exhibit distinct brain connectivity structures.

4. Granger Causality Across Node/Region Subsets

Recent brain connectivity analyses frequently involve high-dimensional signals, such as large sets of LFPs or EEG recordings. Dissecting the directional influence between specific nodes (i.e., channels, electrodes, tetrodes) or sub-regions (e.g., pre-defined cluster of nodes) within such high-dimensional and possibly complex networks can serve as a methodological basis for explaining the neural mechanisms at a detailed level. However, analyzing a pair of nodes or sub-regions under the possible confounding effects of other channels in the network leads to inferential complications. Various methods have addressed similar problems in high-dimensional networks [33,34,35,36,37,38], yet the difficulty of isolating a subset of nodes from the rest of the network remains a critical problem [39,40].
This section introduces an approach designed to overcome these challenges using spectral domain dynamic principal component analysis (sDPCA) [21,41,42]. The methodology aims to isolate two nodes (or sub-regions, depending on the context) of interest within a high-dimensional network by removing the aggregate influence of all other nodes, facilitating the subsequent application of conventional examinations for Granger causality (GC). By collapsing the complexity of the entire network into a low-dimensional representation and partialling out the effects of other nodes/subregions, the resulting node-specific signals are isolated from the confounding effects. This provides a practical medium for inferring directional interactions.

4.1. Inference in High-Dimensional Setting

Consider a high-dimensional network, G , of P nodes (i.e., channels, electrodes, or tetrodes), each corresponding to a signal. Let the network, G , be { X p , t , X q , t , ζ 1 , t , … , ζ P − 2 , t } , where X p = { X p , t } and X q = { X q , t } are the two nodes of interest (NOIs) and ζ 1 , t , … , ζ P − 2 , t represent a large set of other nodes whose influence we wish to control.
We propose combining GC with sDPCA specifically to focus on pairwise interactions among NOIs in LFP data. In general, LFP or brain imaging data often involve many channels/nodes measuring neural activity across multiple frequency bands, making direct pairwise GC analysis difficult because of the curse of dimensionality and the risk of overfitting or spurious connections. Applying sDPCA provides a deliberate dimensionality-reduction step that respects the spectral structure of the data: sDPCA operates in the frequency domain and accounts for the dominant oscillatory and frequency-specific patterns in the signals. This means that important neural dynamics (such as rhythmic oscillations or cross-frequency interactions) are preserved in a few components rather than averaged out. After sDPCA is used to extract and regress the network’s interfering background from each node of interest, the exploration of GC between the residual signals becomes feasible. While alternative approaches exist (for example, one could apply GC with sparse regularization to the complete set of channels), the sDPCA+GC methodology is proposed for its ability to maintain frequency-specific information and improve reliability in detecting neural interactions. This balanced and easy-to-implement strategy addresses the complexity of LFP data by focusing on physiologically meaningful components, thus providing a clearer and more interpretable pairwise causal connectivity analysis in a high-dimensional neural recording context.
To isolate the NOI and uncover their causal relationships, one can follow a strategy that involves transforming the signals as follows:
$$X_p^* = X_p - \mathbb{E}\big[X_p \mid \mathrm{encapsulate}_\zeta(\zeta_{1,t}, \ldots, \zeta_{P-2,t})\big], \qquad X_q^* = X_q - \mathbb{E}\big[X_q \mid \mathrm{encapsulate}_\zeta(\zeta_{1,t}, \ldots, \zeta_{P-2,t})\big],$$
where encapsulate ζ ( · ) is a function summarizing the collective influence of the remaining nodes. After this step, X p * and X q * become isolated versions of the original signals, where the network’s background variability has been partially removed. The key question is how to construct the function encapsulate ζ ( · ) so that it captures the large network’s dynamics without overfitting or losing crucial frequency-dependent structures.

4.2. Using Spectral Dynamic PCA to Represent the Background Network

Conventional principal component analysis (PCA) focuses on reducing dimensionality by finding linear combinations of variables that explain the most significant variance. However, PCA operates on covariance structures that do not directly incorporate temporal dependency or frequency-specific patterns. Neural signals often have rich spectral content—specific frequencies may carry more meaningful interactions than others. Spectral-domain dynamic PCA (sDPCA) [21,42,43] addresses this need by operating in the frequency domain and extracting components that are informative about temporal associations.
To apply sDPCA, we first estimate the cross-spectral density matrix of the background signals ζ t = ( ζ 1 , t , , ζ P 2 , t ) . Let S ζ ( ω ) be the cross-spectral density matrix at frequency ω . This matrix encodes frequency-specific variances and covariances. It can be estimated by
$$\hat{S}_\zeta(\omega) = \sum_{|h| \le M} w\!\left(\frac{|h|}{M}\right) \hat{\Sigma}_\zeta(h) \exp(-2\pi i h \omega),$$
where w ( · ) is a window function, M is the window size, and Σ ^ ζ ( h ) is the empirical lag-h covariance matrix given by
$$\hat{\Sigma}_\zeta(h) = \frac{1}{T} \sum_{t=1}^{T-|h|} \left(\zeta_{t+|h|} - \bar{\zeta}\right)\left(\zeta_t - \bar{\zeta}\right)^\top,$$
for h 0 , and
$$\hat{\Sigma}_\zeta(h) = \hat{\Sigma}_\zeta(-h)^\top = \frac{1}{T} \sum_{t=1}^{T+h} \left(\zeta_t - \bar{\zeta}\right)\left(\zeta_{t-h} - \bar{\zeta}\right)^\top$$
for h < 0 .
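A compact sketch of these two estimators is given below (our own illustration; a Bartlett triangular lag window is assumed for w(·), and frequencies are expressed in cycles per sample).

```python
import numpy as np

def lag_cov(zeta, h):
    """Empirical lag-h covariance matrix of a (T x d) array zeta."""
    T = zeta.shape[0]
    zc = zeta - zeta.mean(axis=0)
    if h >= 0:
        return zc[h:].T @ zc[: T - h] / T   # (1/T) sum (z_{t+h} - mean)(z_t - mean)^T
    return lag_cov(zeta, -h).T              # Sigma(h) = Sigma(-h)^T for h < 0

def cross_spectrum(zeta, omega, M=20):
    """Lag-window estimate of S_zeta(omega); omega in cycles per sample."""
    d = zeta.shape[1]
    S = np.zeros((d, d), dtype=complex)
    for h in range(-M, M + 1):
        w = 1.0 - abs(h) / (M + 1)          # Bartlett (triangular) weights
        S += w * lag_cov(zeta, h) * np.exp(-2j * np.pi * h * omega)
    return S

rng = np.random.default_rng(0)
zeta = rng.standard_normal((512, 3))        # stand-in for the background nodes
S = cross_spectrum(zeta, 0.1)
# S is Hermitian with (approximately) nonnegative real diagonal
```

The Bartlett window is chosen here because it guarantees a positive semidefinite spectral estimate; other windows trade this property for lower bias.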
At each frequency ω , we solve the eigenvalue problem:
$$S_\zeta(\omega)\, \varphi_j(\omega) = \lambda_j(\omega)\, \varphi_j(\omega).$$
The eigenvectors φ j ( ω ) represent frequency-domain principal directions, and λ j ( ω ) are the corresponding eigenvalues. These frequency-specific eigenvectors reflect how variability is arranged throughout the spectral domain.
To return to the time domain, we compute filters from the eigenvectors via inverse Fourier transform:
$$\varrho_m^{(j)} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \varphi_j(\omega)\, e^{i m \omega}\, d\omega,$$
for integer shifts m. Each set of filters,
$$\left\{ \varrho_m^{(j)} : m = -L, \ldots, L \right\},$$
defines a dynamic principal component in the time domain.
Applying these filters to ζ t yields a reduced set of dynamic principal component scores:
$$\mathrm{dpc}_{j,t} = \sum_{m=-L}^{L} \left\{ \varrho_m^{(j)} \right\}^\top \zeta_{t-m}.$$
Only a few dynamic components are usually necessary to capture a significant fraction of the total variance. These scores represent a low-dimensional snapshot of the entire background network’s activity, integrated over time and frequency.
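The full pipeline, from spectral matrices to the first dynamic principal component score series, can be sketched as follows. This is our own simplified illustration with several shortcuts that a production sDPCA implementation would handle more carefully: eigenvector phase is not aligned across frequencies, only the real part of each filter is kept, the smoother is a plain moving average over Fourier frequencies, and edges are handled circularly.

```python
import numpy as np

def dpc1_scores(zeta, L=10, bw=5):
    """Sketch of the first dynamic principal component score series."""
    T, d = zeta.shape
    Z = np.fft.fft(zeta - zeta.mean(axis=0), axis=0)
    I = np.einsum('kj,kl->kjl', Z, Z.conj()) / T          # periodogram matrices
    # smooth over 2*bw+1 neighbouring Fourier frequencies (circularly)
    S = sum(np.roll(I, s, axis=0) for s in range(-bw, bw + 1)) / (2 * bw + 1)
    phi = np.empty((T, d), dtype=complex)
    for k in range(T):
        phi[k] = np.linalg.eigh(S[k])[1][:, -1]           # leading eigenvector
    omegas = np.arange(T) / T
    scores = np.zeros(T)
    for m in range(-L, L + 1):
        # filter rho_m: inverse DFT of the eigenvector field, kept real
        rho = (phi.T @ np.exp(2j * np.pi * m * omegas)).real / T
        scores += np.roll(zeta, m, axis=0) @ rho          # sum_m rho_m^T zeta_{t-m}
    return scores

rng = np.random.default_rng(0)
zeta = rng.standard_normal((256, 4))                      # stand-in background nodes
dpc1 = dpc1_scores(zeta)
```

Additional components are obtained the same way from the second, third, etc. eigenvectors, and the retained scores then serve as the background covariates of the next subsection.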

4.3. Partialling Out Background Influence and Applying Granger Causality

Once we have obtained the dynamic principal component scores { dpc 1 , t , … , dpc s c , t } , where s c ≪ P − 2 , treated as covariates summarizing the background nodes, we model
$$\mathbb{E}\big[X_p \mid \mathrm{dpc}_{1,t}, \ldots, \mathrm{dpc}_{s_c,t}\big] \quad \text{and} \quad \mathbb{E}\big[X_q \mid \mathrm{dpc}_{1,t}, \ldots, \mathrm{dpc}_{s_c,t}\big].$$
Subtracting these fitted values from X p and X q yields X p * and X q * , which are now approximately isolated from the rest of the network’s influence. In practice, these conditional expectations are approximated by regressing each node of interest (e.g., X p ) on the set of dynamic principal component scores { dpc 1 , t , … , dpc s c , t } and their interactions. Using a linear model (or a nonlinear model if preferred), the fitted values X ^ p and X ^ q approximate E ( X p ∣ dpc 1 , t , … , dpc s c , t ) and E ( X q ∣ dpc 1 , t , … , dpc s c , t ) , respectively.
With X p * and X q * in hand, we return to a more conventional Granger causality framework [44]. Testing for GC typically consists of comparing a restricted model that predicts X p * using only its own past against an unrestricted model that also includes X q * ’s past values:
$$\text{Restricted:} \quad X_{p,t}^{*} = \gamma_0 + \sum_{i=1}^{d_{X_p^*}} \gamma_i X_{p,(t-i)}^{*} + \eta_t,$$
$$\text{Unrestricted:} \quad X_{p,t}^{*} = \alpha_0 + \sum_{i=1}^{d_{X_p^*}} \alpha_i X_{p,(t-i)}^{*} + \sum_{j=1}^{d_{X_q^*}} \beta_j X_{q,(t-j)}^{*} + \epsilon_t.$$
If adding past values of X q * significantly reduces variability in the predictive errors, we conclude that X q * Granger causes X p * . Similarly, we can test whether X p * Granger causes X q * .
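The restricted/unrestricted comparison can be sketched with ordinary least squares as below (our own illustration on hypothetical data; in practice the test is applied to the isolated residual series X p * and X q * ).

```python
import numpy as np

def granger_f(x, y, p=2):
    """F statistic for 'past of y improves prediction of x' (restricted
    vs. unrestricted OLS fits, each with p lags)."""
    T = len(x)
    lags = lambda s: [s[p - i : T - i] for i in range(1, p + 1)]
    X_r = np.column_stack([np.ones(T - p)] + lags(x))       # own past only
    X_u = np.column_stack([X_r] + lags(y))                  # plus past of y
    target = x[p:]
    rss = lambda A: np.sum((target - A @ np.linalg.lstsq(A, target, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(X_r), rss(X_u)
    return ((rss_r - rss_u) / p) / (rss_u / (T - p - X_u.shape[1]))

# toy network: y drives x with one lag, not vice versa (hypothetical data)
rng = np.random.default_rng(0)
T = 500
e = rng.standard_normal((2, T))
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + e[0, t]
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + e[1, t]
f_yx, f_xy = granger_f(x, y), granger_f(y, x)   # expect f_yx >> f_xy
```

The statistic is compared against an F distribution with (p, T − p − 2p − 1) degrees of freedom to obtain a p-value for each direction.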
Working with the isolated channels, X p * and X q * , makes the GC results less likely to be distorted by unmodeled interactions from other network nodes. Thus, this approach transforms a high-dimensional problem into a more tractable one, employing the sDPCA spectrum-aware dimension reduction. Figure 9 exemplifies the proposed approach for GC in high-dimensional networks.

4.4. Practical Advantages and Limitations

The proposed methodology avoids the complexity of simultaneously fitting a high-dimensional vector autoregressive (VAR) model to the entire network. Instead, it establishes a controlled setting where the causal interactions between selected nodes can be tested directly rather than being potentially affected by the influence of the entire network. It does so by bridging the original high-dimensional network and a low-dimensional summary captured by the dynamic principal components. By partialling out these low-dimensional summaries, the nodes of interest are recovered with reduced confounding.
Thus, the sDPCA-based procedure is well suited for scenarios where interest focuses on the directional interaction between a small subset of channels embedded in an extensive network. It can also be adapted when the nodes of interest represent not just single channels but clusters of channels (i.e., regions), each represented by their sDPCA-derived summary scores.
However, several limitations should be noted. First, selecting an appropriate number of principal components (here denoted s c ≪ P − 2 ) is critical; poor choices can lead to the omission of essential dynamics or to the exclusion of informative components and inclusion of uninformative ones. Second, spectral estimation relies on smoothing parameters and windowing techniques, which may discard sharp spectral features or hide localized frequency-specific phenomena. Third, conventional tests for GC assume linearity and stationarity, yet neural signals often violate these assumptions, thereby potentially invalidating the inferred causal connections. Finally, while sDPCA reduces the dimensionality in a frequency-aware manner, interpreting the resulting dynamic principal components and linking them to specific bio-physiological processes can remain a significant challenge.

4.5. Granular Level GC in Olfactory LFP Network

We considered trials where the rats correctly identified a given odor’s sequence status (In-sequence—Correct, Table 1). For each trial of interest, segments of LFP signals are extracted starting at the onset of the odor. These segments are then differenced to ensure stationarity and combined across odor types and trials, ultimately yielding a separate collection of LFP samples for each odor. Because each subject’s recording involved multiple tetrodes placed along the proximal-to-distal axis of CA1, the goal is to examine how specific tetrode pairs might exhibit directional influences/connectivities (i.e., Granger causality) unique to a particular odor or common across all odors. In practice, we conduct GC analysis for every distal–proximal and proximal–distal pair and count how frequently one tetrode “drove” another across the trials for that odor. If a directed association is significant on at least 99% of trials, we label that connection as consistently present. This procedure “votes” on each possible directed relation in the network and flags only those consistently emerging across trials. Finally, we compile these odor-specific GC connections into plots. The results are visualized in Figure 10: each row corresponds to one subject and each column to an odor, with the first column showing the connectivity pattern recurring under all odor conditions.
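The voting rule itself is simple; a minimal sketch (our own, on a hypothetical array of per-trial p-values) is:

```python
import numpy as np

def consistent_edges(pvals, alpha=0.05, threshold=0.99):
    """pvals: (n_trials, n_edges) per-trial GC p-values for each directed
    distal-proximal pair. An edge is flagged 'consistently present' when
    it is significant on at least `threshold` of the trials."""
    return (pvals < alpha).mean(axis=0) >= threshold

rng = np.random.default_rng(0)
n_trials, n_edges = 40, 6
pvals = rng.uniform(0.1, 1.0, size=(n_trials, n_edges))  # null edges: never significant
pvals[:, 0] = 0.001                                      # one edge significant on every trial
mask = consistent_edges(pvals)
# mask[0] is True; the null edges are never flagged
```

Edges flagged for every odor form the “common to all odors” column of the plots, while the remaining flags give the odor-specific connections.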
Figure 10 reveals that each subject has certain distal (blue-colored region) to proximal (orange-colored region) channels linking up under every odor. Mitt’s distal tetrodes T1 and T2 consistently influence proximal channels T12, …, T22. These connections emerge throughout odors A–E, whereas T1 → T13 or T2 → T23 occur exclusively in certain odors (e.g., odors A, C, and E), implying that these unique routes are distinct circuit elements engaged by particular odors. Stella features T2 → T20, T2 → T21, and T2 → T23 across all odors, with distinct connectivities such as T17 → T10 or T2 → T22 emerging in certain odors, suggesting that while Stella’s distal-to-proximal drive persists, some odor-specific flows may support specific olfactory demands. Buchanan displays a smaller cluster of common edges than Mitt or Stella, but we still see consistent patterns such as T1 → T16, T2 → T21, and T21 → T6. Odor-specific differences appear in connections such as T12 → T4 for odors A/C/D, T1 → T13 for odors B/E, and T4 → T23 for odors D/E. While these odor-level connections vary, the overall structure still centers on distal nodes (T1, T2, T4) influencing several proximal tetrodes (T16, T21, T22, and T6). Barat has a relatively large cluster of distal-to-proximal associations, with T1 consistently targeting T12, …, T19, and T2 targeting T20, T21, T22, and T23. These connections remain steady across odors. A few bidirectional connections (e.g., T19 ⇄ T1 and T22 ⇄ T2) also appear. Some extra pathways show up in certain odors (e.g., T20 → T1 in odor C, T21 → T4 in odor C, and T19 → T1 in odors A/E), but the broader pattern remains T1, T2 to T12, …, T22. The subject Superchris stands apart by exhibiting a large cluster of bidirectional connectivity between its distal tetrodes (i.e., T1, T2, …, T10) and proximal tetrodes (i.e., T12, …, T23).
Unlike Mitt or Stella, who rely more on unidirectional distal → proximal pathways (for example, T1 → T12 in Mitt or T2 → T20 in Stella), Superchris shows multiple two-way interactions, such as T1 ⇄ T14 and T2 ⇄ T21, that persist under all odors. Superchris also exhibits several odor-specific connections involving T6, T7, T8, or T9, which emerge only in certain odors, whereas the other subjects generally engage fewer new links when shifting from one odor to another. Therefore, while the common pattern of distal → proximal influences remains visible, Superchris integrates richer reciprocal activity, suggesting a denser, more interconnected CA1 network compared with the predominantly one-directional flows seen in Mitt, Stella, Buchanan, and Barat.
A crucial question here is whether odor-specific connections show a clear shift in CA1 circuitry. Considering Figure 10, most subjects display considerable similarities across “common to all odors” patterns (i.e., distal to proximal). This overlap suggests that hippocampal olfaction processing relies on shared associations of distal to proximal transmissions, with modest variations in direction or presence depending on odor identity. The new connections that do appear exclusively in a single odor could serve a more specialized role.
Consequently, a basic understanding is that each subject’s dorsal CA1 circuit favors a core route of information flow, with distinctive odor-related changes superimposed. In all five subjects, sub-regions along distal CA1 often serve as a “source” area projecting into multiple proximal tetrodes, while back-influences from proximal to distal also emerge in the data. This observation aligns with existing knowledge that CA1 exhibits prominent directionality along its anatomical axis [45,46].

5. Spectral Transfer Entropy

Unlike GC, which relies on model-based assumptions, transfer entropy (TE) is an information-theoretic measure that captures directional and potentially nonlinear dependencies between signals. This makes it particularly valuable for analyzing complex interactions in neural systems.
Consider two signals, denoted by { X q , t } and { X p , t } , observed from distinct nodes, voxels, channels, or tetrodes in a brain network. Let
$$X_{q,t}^{(k)} = (X_{q,t-1}, \ldots, X_{q,t-k}) \quad \text{and} \quad X_{p,t}^{(\ell)} = (X_{p,t-1}, \ldots, X_{p,t-\ell}),$$
for some time lags k and ℓ. Developed by Schreiber [47], TE quantifies the information transfer from { X q , t } to { X p , t } by measuring the conditional mutual information (CMI) between X p , t and X q , t ( k ) given X p , t ( ℓ ) . This is expressed as
$$TE(X_q \to X_p; k, \ell) = I\left(X_{p,t};\, X_{q,t}^{(k)} \,\middle|\, X_{p,t}^{(\ell)}\right),$$
where I ( · ; · ∣ · ) represents the conditional mutual information. This metric reflects the directed influence of X q on X p while accounting for the past states of X p . More precisely,
$$I\left(X_{p,t};\, X_{q,t}^{(k)} \,\middle|\, X_{p,t}^{(\ell)}\right) = \iiint f(x_p, x_p^-, x_q^-) \log \frac{f(x_p^-)\, f(x_p, x_p^-, x_q^-)}{f(x_p, x_p^-)\, f(x_p^-, x_q^-)}\, dx_p\, dx_q^-\, dx_p^-,$$
with x p = x p , t , x p − = ( x p , t − 1 , … , x p , t − ℓ ) , and x q − = ( x q , t − 1 , … , x q , t − k ) . For a comprehensive discussion of other information-theoretic measures, including entropy and mutual information, refer to Cover and Thomas [48]. In contrast with GC, which looks at the improvement in prediction variance due to the additional information provided by another series’ history, TE measures the causal impact of one series on another directly from their joint and conditional distributions. Explicitly, TE quantifies the statistical conditional dependence of X p , t on the past X q , t ( k ) given its own history X p , t ( ℓ ) . This formulation requires no assumption on the distribution (e.g., Gaussianity) or on the type of relationship (e.g., linearity) between the two series, making the TE framework more general and applicable to complex data such as LFP signals.
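A crude but assumption-free way to evaluate this CMI in practice is a plug-in estimate on discretized data. The sketch below (our own illustration; kernel or k-nearest-neighbor estimators are preferred for real LFP data) uses equiprobable binning and the identity I(A;B|C) = H(A,C) + H(B,C) − H(A,B,C) − H(C).

```python
import numpy as np

def plug_in_te(src, tgt, k=1, l=1, bins=4):
    """Plug-in TE(src -> tgt) = I(tgt_t ; src-past | tgt-past) on
    equiprobably binned data."""
    def disc(a):
        q = np.quantile(a, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(a, q)                 # symbols 0..bins-1
    xs, xt = disc(src), disc(tgt)
    m = max(k, l)
    now = xt[m:]
    # encode lag vectors as single integers (base-`bins` expansion)
    tpast = sum(xt[m - i : len(xt) - i] * bins ** (i - 1) for i in range(1, l + 1))
    spast = sum(xs[m - i : len(xs) - i] * bins ** (i - 1) for i in range(1, k + 1))
    def H(*cols):                                # joint entropy in nats
        counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)[1]
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))
    # I(A;B|C) = H(A,C) + H(B,C) - H(A,B,C) - H(C)
    return H(now, tpast) + H(spast, tpast) - H(now, spast, tpast) - H(tpast)

rng = np.random.default_rng(0)
T = 2000
s = rng.standard_normal(T)
t = np.zeros(T)
for i in range(1, T):
    t[i] = 0.8 * s[i - 1] + 0.3 * rng.standard_normal()  # s drives t with lag 1
te_fwd, te_bwd = plug_in_te(s, t), plug_in_te(t, s)      # expect te_fwd >> te_bwd
```

Because only distributions enter the calculation, the same code detects nonlinear couplings (e.g., replacing the driving term with 0.8·s²) that a linear GC regression would miss.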
When the interest lies in relating effective connectivity to various frequency bands with well-explored cognitive interpretations, one strategy is to apply a bandpass filter on the observed signals to extract the band-specific oscillations of interest and conduct investigations via the GC or TE framework. For example, one may consider the filtered series X q , t Ω and X p , t Ω , where their respective spectral densities concentrate only on frequency band Ω , and calculate TE from X q , t Ω to X p , t Ω . However, the smooth oscillatory behavior of these band-specific oscillations often leads to erroneous results for approaches like GC and TE as linear filtering induces potential temporal dependence distortion and the false extraction of spectral influence [49,50,51]. The problem does not stem from these causal frameworks (i.e., GC and TE) but rather from the direct use of smoothly oscillating filtered signals.
To address this issue, Redondo et al. [52] formulated the spectral transfer entropy (STE) measure. Instead of capturing the direction and magnitude of information flow directly between two band-specific series, STE defines the information transfer between two nodes of a brain network based on a series of maximum amplitudes over non-overlapping time blocks. Let Y q , b Ω = max ( | X q , t Ω | : t ∈ { t b + 1 , … , t b + m } ) and Y p , b Ω = max ( | X p , t Ω | : t ∈ { t b + 1 , … , t b + m } ) , where t b is the time point preceding the b-th time block of length m. Concisely, { Y q , b Ω } and { Y p , b Ω } are the block maxima series of amplitudes of the oscillations X q , t Ω and X p , t Ω , respectively. Specifically, the STE from X q , t Ω to X p , t Ω , denoted by S T E Ω ( X q → X p ; k , ℓ ) , is defined as
$$STE_\Omega(X_q \to X_p; k, \ell) = TE\left(Y_q^\Omega \to Y_p^\Omega; k, \ell\right),$$
and is shown empirically to be robust to the inherent issues associated with linear filtering; that is, it adequately captures spectral causal influences with controlled false positive rates, which provides evidence of the practical advantages of this formulation.
Aggregating band-specific signals into a series of maximum amplitudes over time blocks takes inspiration from communication theory, where the information transfer between two devices occurs through signal modulation. That is, an Ω -band oscillation X t Ω can be expressed as the product of a carrier signal φ t Ω , whose spectral density is concentrated in the frequency band Ω and which serves as the pathway for the flow of information, and a modulating signal A t Ω that carries the information being transferred from one node to another (see Figure 11 for an illustration). However, there is a shift in the temporal resolution of causality defined by the STE measure. For instance, if the signals are observed at a sampling rate of 1000 Hz (i.e., at 1000 time points per second) and the specified block size is m = 100 , STE quantifies the causal interactions that occur in about every one-tenth of a second (which is slower than the original temporal scale of the observed data). For more details on the interpretation, choice of tuning parameters, and vine copula-based inference for STE, which we employ in the subsequent analysis, refer to [52].
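The construction of the block maxima series can be sketched as follows. This is our own minimal illustration on synthetic data: the FFT mask is a stand-in for whatever bandpass filter is used in practice, and the resulting series would then be fed to a TE estimator (here, the vine copula-based inference of [52] is not reproduced).

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Crude FFT-mask bandpass filter keeping frequencies in [lo, hi] Hz."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def block_maxima(x, m):
    """Maximum absolute amplitude over non-overlapping length-m blocks,
    the series on which STE is computed."""
    nb = len(x) // m
    return np.abs(x[: nb * m]).reshape(nb, m).max(axis=1)

fs = 1000                              # 1 kHz sampling, as for the LFP data
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)          # stand-in for one LFP trace
x_beta = bandpass_fft(x, fs, 12, 30)   # beta-band oscillation
y_beta = block_maxima(x_beta, m=100)   # one amplitude per 0.1 s block
```

With m = 100 at a 1 kHz sampling rate, a 4 s trace yields 40 block maxima, matching the one-tenth-of-a-second causal resolution described above.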
Let X q , t and X p , t be the LFP signals from tetrode q in the distal region and tetrode p in the proximal region, respectively. In this section, our primary objective is to identify differences in effective connectivity between tetrodes placed in the distal and proximal regions during correct and incorrect trials of rats performing the olfactory task. Correct trials refer to trials where a rat received a reward for recognizing an in-sequence odor, while incorrect trials resulted in no reward. Here, we consider in-sequence trials from two subjects (namely, Superchris and Mitt) given the odors rum and lemon, respectively. Since the number of incorrect trials is much smaller than that of correct trials, we randomly sample an equal number of correct trials to obtain balanced cases. To be exact, we include 9 trials each from Superchris and 13 trials each from Mitt in the analysis. Each trial contains roughly 1.2 s of LFP recordings after the odor presentation. Since the STE framework requires aggregating data over non-overlapping time blocks of length m = 100 (which we specify to define a practical temporal resolution of causality), we use all block maxima series obtained from all correct trials and from all incorrect trials to calculate the STE measure. Finally, we focus on two frequency bands, the alpha and beta bands, as several spectral analyses of individual LFP signals reveal changes in the latter related to olfactory functions [53,54,55], while differences in the former are yet to be discovered. Our goal is to provide insights into the causal influence of these band-specific oscillations on one another via the distributions of S T E α ( X q → X p ) and S T E β ( X q → X p ) across relevant { q , p } tetrode pairs, complementing the existing results based on univariate spectral methods.
In Figure 12, we observe a high magnitude of information transfer, as quantified by the STE measure, between the distal and proximal regions during correct in-sequence trials, and lower STE values during incorrect in-sequence trials. In addition, the differences in the magnitude of captured causal influence between correct and incorrect trials are highly prominent for subject Superchris and less pronounced, though arguably still notable, for subject Mitt. This suggests that the STE approach is able to reveal prominent differences in connectivity patterns among the multivariate LFP signals in the alpha band, even though there is limited work on univariate spectral density methods that detect differences in this band. By contrast, the flow of information in the beta band from the distal to the proximal region of Superchris has higher magnitudes during incorrect trials than during correct trials, while the STE values from the proximal to the distal region show larger variability among correct trials than among incorrect trials (see Figure 13). Further, there are only minimal differences in the beta band for subject Mitt. Such inconsistency in the differences may be related to the odors presented to the respective subjects, as one odor may have a stronger or weaker impact than the other. Nonetheless, the STE framework is a promising new tool for investigating effective brain connectivity, and it proves successful here in providing insights into how node interactions in the frequency domain may vary among different brain networks.

6. Wavelet Coherence Analysis

A key challenge in analyzing brain signals, such as LFPs, is their inherent non-stationarity; that is, statistical properties like the spectrum (or covariance) evolve over time. Wavelet analysis has proven exceptionally useful in capturing the transient features of non-stationary signals due to the compact support and flexibility of wavelet functions [56,57]. The compact support of wavelets allows for dynamic scaling—through compression or stretching as illustrated in Figure 14—which enables them to adapt to changing signal characteristics. In contrast, traditional Fourier methods, which lack time localization and the ability to adapt to a signal’s dynamic behavior, often struggle to capture these transient properties.
To address these limitations, Nason et al. [58] introduced a scale-specific stochastic representation of time series that leverages the multi-resolution property of wavelets to estimate evolving wavelet coherence. Building on this foundation, Park et al. [59] extended the framework to multivariate locally stationary wavelet (LSW) processes, enabling precise characterization of single-scale coherence among different channels. More recently, Wu et al. [60] proposed an innovative modeling framework that effectively captures the cross-scale dependence structure between channels in multivariate non-stationary time series. This advancement further enhances the capability of wavelet-based methods to uncover complex dependencies and evolving connectivity patterns in neural signals.
In this section, we introduce the framework of LSW and demonstrate its application to analyzing LFP data from different brain regions. We implement both single- and cross-scale coherence to capture the time-varying dependence structure across regions. This approach also allows us to examine how fluctuations in longer-term dynamics influence the amplitude of shorter-term dynamics, providing deeper insights into the multi-scale interactions within the brain.

6.1. LSW Model and Wavelet Coherence

Wavelets are powerful mathematical tools that enable the decomposition of signals into components containing both time and frequency (or scale) information. Unlike the Fourier transform, which represents signals as combinations of infinite sinusoids and provides only global frequency information, wavelets are uniquely suited for analyzing localized variations in signals. This makes wavelets particularly valuable for studying non-stationary data where signal characteristics may vary over time.
Wavelet analysis is built on two foundational functions: the father wavelet  ϕ and the mother wavelet  ψ . The father wavelet ϕ is designed to capture the smooth, low-frequency components of a signal and integrates to one, ensuring a focus on the overall trend. In contrast, the mother wavelet ψ integrates to zero and is responsible for extracting detailed, high-frequency components, thereby highlighting localized variations in the signal. To analyze signals at different resolutions, the mother wavelet is dilated and shifted to generate a family of child wavelets. These wavelets, indexed by a scale parameter j and a shift parameter k, are defined as
\[ \psi_{j,k}(t) = 2^{-j/2}\, \psi\!\left( \frac{t - 2^{j}k}{2^{j}} \right), \qquad j = 1, \ldots, J, \]
where J denotes the maximum number of scales. The scale parameter j determines the resolution or level of detail captured by the wavelet, with smaller j values corresponding to finer scales and larger j values corresponding to coarser scales. The shift parameter k determines the translation of the wavelet in time, allowing for localized analysis across the signal.
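To make the dilation and shift concrete, the following sketch (our illustration, not part of the original analysis; the Haar wavelet and the sampling grid are assumptions) builds a child wavelet from a mother wavelet and checks two defining properties numerically:

```python
import numpy as np

def haar_mother(t):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def child_wavelet(t, j, k):
    """Compressed and shifted child wavelet psi_{j,k}(t) = 2^{-j/2} psi((t - 2^j k) / 2^j)."""
    return 2.0 ** (-j / 2) * haar_mother((t - 2 ** j * k) / 2 ** j)

dt = 0.001
t = (np.arange(16000) - 8000 + 0.5) * dt   # grid on [-8, 8), offset to avoid breakpoints
psi = child_wavelet(t, j=2, k=1)           # supported on [4, 8)

area = psi.sum() * dt                      # mother wavelets integrate to zero
energy = (psi ** 2).sum() * dt             # the 2^{-j/2} factor preserves the unit L2 norm
print(round(area, 3), round(energy, 3))    # 0.0 1.0
```

Larger j stretches the support (here 2^2 = 4 time units), so the child wavelet responds to coarser oscillations.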
Nason et al. [58] introduced the LSW framework, a novel representation for stochastic processes exhibiting complex, time-varying dynamics. Unlike traditional wavelet decomposition, which typically relies on decimated and orthogonal wavelet bases, the LSW framework employs non-decimated wavelet bases. This means that the wavelet bases in LSW are non-orthogonal across different scales and shifts, allowing for a more flexible representation of signals with intricate temporal structures. This unique feature makes LSW particularly well suited for analyzing non-stationary processes where traditional methods may fall short. Park et al. [59] extended the LSW framework to a multivariate setting to capture time-varying, scale-specific cross-dependence between signal components across different channels. Here, we introduce this multivariate LSW (MvLSW) framework directly.
The P-variate locally stationary wavelet process \(\{X(t)\}_{t=1}^{T}\), with \(T = 2^{J}\) for some \(J \in \mathbb{N}\), as defined in [58], can be represented by
\[ X(t) = \sum_{j=1}^{J} \sum_{k} V_{j}(k/T)\, \psi_{j,t-k}\, z_{j,k}, \]
where \(\{\psi_{j,t-k}\}_{j,k}\) is a set of discrete non-decimated wavelets, \(V_{j}(k/T)\) is the time-dependent transfer function matrix, and the \(z_{j,k}\) are uncorrelated random vectors with mean vector \(\mathbf{0}\) and variance–covariance matrix equal to the \(P \times P\) identity matrix. Furthermore, the scale-\(j\) subprocess of \(X(t)\) is defined as
\[ X_{j}(t) = \sum_{k} V_{j}(k/T)\, \psi_{j,t-k}\, z_{j,k}. \]
The evolutionary spectrum matrix is defined from the transfer function and quantifies the time-scale power of \(X(t)\). This local wavelet spectral (LWS) matrix is given by
\[ S_{j}(k/T) = V_{j}(k/T)\, V_{j}(k/T)^{\top}, \]
where \(V_{j}(k/T)^{\top}\) denotes the transpose of \(V_{j}(k/T)\). The random innovation term \(z_{j,k}\) is assumed to be uncorrelated across different scales \(j\) and shifts \(k\):
\[ \operatorname{Cov}\!\left( z_{j,k}^{(i)},\, z_{j',k'}^{(i')} \right) = \delta_{i,i'}\, \delta_{j,j'}\, \delta_{k,k'}, \]
where \(\delta\) denotes the Kronecker delta. However, one of the main limitations of this framework is its inability to capture cross-dependence between subprocesses at different scales, which can be an important measure of dependence.
To address this limitation, ref. [60] relaxed this assumption by defining the covariance matrix of \(z_{j,k}\) through a general matrix \(Q_{jj'}(k/T)\). The dual-scale LWS matrix is then formulated as
\[ S_{jj'}(k/T) = V_{j}(k/T)\, Q_{jj'}(k/T)\, V_{j'}(k/T)^{\top}. \]
Based on the single- and cross-scale LWS matrices, the time-varying wavelet coherence is defined as
\[ \rho_{j}(k/T) = D_{j}(k/T)\, S_{j}(k/T)\, D_{j}(k/T) \quad \text{(single-scale)}, \]
\[ \rho_{jj'}(k/T) = D_{j}(k/T)\, S_{jj'}(k/T)\, D_{j'}(k/T) \quad \text{(cross-scale)}, \]
where \(D_{j}(k/T)\) and \(D_{j'}(k/T)\) are diagonal matrices with elements \(S_{j}^{(p,p)}(k/T)^{-1/2}\) and \(S_{j'}^{(q,q)}(k/T)^{-1/2}\), respectively (see details in [60]). The entries of the wavelet coherence matrix range from \(-1\) to \(1\), measuring the local single- or cross-scale dependence structure between channel \(p\) and channel \(q\) in a multivariate time series. This framework offers a powerful way to study the time-varying connectivity between different brain regions, enabling insights into dynamic neural interactions. An empirical way to calculate localized measures of cross-scale dependence in the time domain is
\[ \rho_{jj'}^{(p,q)}(t/T) = \left| \frac{\operatorname{Cov}\!\left( X_{j}^{(p)}(t),\, X_{j'}^{(q)}(t) \right)}{\sqrt{ \operatorname{Var}\!\left( X_{j}^{(p)}(t) \right)\, \operatorname{Var}\!\left( X_{j'}^{(q)}(t) \right) }} \right|^{2}. \]
It is easy to see that if \(j = j'\), then \(\rho_{jj'} = \rho_{j}\); that is, the cross-scale coherence reduces to the single-scale coherence.
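The empirical, time-localized coherence above can be roughly illustrated with simple tools. The sketch below is our own toy version (it uses crude Haar difference filters as scale subprocesses and a sliding-window squared correlation, not the MvLSW estimator of [59,60]; all signals and parameters are simulated):

```python
import numpy as np

def haar_detail(x, j):
    """Scale-j subprocess via a non-decimated Haar-style detail filter:
    difference of two adjacent 2^j-sample block means."""
    L = 2 ** j
    h = np.concatenate([np.ones(L), -np.ones(L)]) / (2.0 ** j)
    return np.convolve(x, h, mode="same")

def local_coherence(a, b, win=64):
    """Sliding-window squared correlation, an empirical analogue of rho_{jj'}."""
    out = np.full(len(a), np.nan)
    for t in range(win, len(a) - win):
        r = np.corrcoef(a[t - win:t + win], b[t - win:t + win])[0, 1]
        out[t] = r ** 2
    return out

rng = np.random.default_rng(0)
T = 1024
common = rng.standard_normal(T).cumsum()        # shared slow component
x = common + 0.5 * rng.standard_normal(T)       # channel p
y = common + 0.5 * rng.standard_normal(T)       # channel q
rho = local_coherence(haar_detail(x, 3), haar_detail(y, 3), win=64)
mean_rho = np.nanmean(rho)
print(mean_rho)  # high within-scale coherence driven by the shared component
```

Because the two channels share the same slow component, their scale-3 subprocesses are strongly correlated in every window, and the localized coherence stays close to one.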

6.2. Wavelet Coherence Analysis with LFP Data

In this part, we implement both single-scale and cross-scale wavelet coherence across different channels of the LFP data recorded from Superchris. The primary objective is to determine whether wavelet coherence can effectively reveal alterations in brain connectivity when the rat makes mistakes in odor sequence discrimination.
The initial step involves decomposing the LFP time series into subprocesses at each scale. Here, we set the decomposition level to J, which can be adjusted based on the desired resolution of the analysis. Figure 15 presents the wavelet coherence between subprocesses at the same scale across eight channels recorded from Superchris, averaged over 24 trials of correct and incorrect responses, respectively. The heatmaps for the two groups of trials largely exhibit similar values across most blocks. Additionally, Figure 16 illustrates several pairs of wavelet coherence between subprocesses at different scales during trials where the rat responded correctly to the stimulus. Unlike in the single-scale case, these heatmap matrices are not symmetric because their rows and columns index subprocesses at different scales.
To investigate whether there are changes in brain connectivity when the rat makes mistakes compared to correct responses, we conducted a permutation test with 1000 replicates. This test aimed to identify significant differences in the average wavelet coherence between trials with correct and incorrect responses. Figure 17 presents the p-value results across channels and specific pairs of subprocesses, corresponding to the cross-scale coherence shown in Figure 16.
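The permutation procedure can be sketched schematically as follows (an illustrative toy version only: the per-trial coherence values here are simulated, not the LFP estimates, and the group means are our assumptions):

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=1000, seed=0):
    """Two-sample permutation test on the absolute difference of mean coherence.

    group_a, group_b: arrays of per-trial coherence values (e.g., one matrix
    entry per trial). Returns a two-sided p-value with the +1 correction."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    observed = abs(group_a.mean() - group_b.mean())
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # relabel trials at random
        diff = abs(pooled[:n_a].mean() - pooled[n_a:].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)

# Toy check: clearly separated groups yield the smallest attainable p-value
rng = np.random.default_rng(1)
correct = rng.normal(0.3, 0.05, 24)      # e.g., coherence over 24 correct trials
incorrect = rng.normal(0.6, 0.05, 24)    # substantially higher in incorrect trials
p = permutation_test(correct, incorrect)
print(p)  # 1/1001, i.e., approximately 0.001
```

In practice, this test would be repeated for each channel pair and each pair of scales, producing a p-value matrix like the one in Figure 17.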
Based on the results of the permutation test, we selected several pairs of scale-specific subprocesses from different channels that correspond to significant p-values in Figure 17. This selection aimed to verify whether the differences between correct and incorrect trials are clearly observable. Figure 18 demonstrates that the cross-scale coherence between these selected subprocesses shows significant differences across the two types of trials. Our framework effectively captures the time-evolving coherence, revealing that, in most cases, the coherence during incorrect trials is substantially higher than that observed in correct trials.
The analysis reveals numerous alterations in brain connectivity during incorrect responses, even in cross-scale interactions between different regions. These findings highlight the effectiveness of wavelet coherence as a powerful tool for capturing critical dynamics in brain activity.

7. Topological Data Analysis

As described in the previous sections, numerous methods have been proposed to estimate brain connectivity, spanning from correlation- and coherence-based measures to Granger causality, transfer entropy, and wavelet-based approaches for non-stationary data. The subsequent step typically involves performing a brain network analysis under different scenarios.
Brain network analysis has emerged as a vital area of research for understanding neural connectivity and its role in cognitive and physiological processes [5]. Over the past few decades, this field has been shaped by foundational studies in network science, such as the concepts of small-world networks [61] and scale-free networks [62], which highlighted the key organizational principles of brain networks. These studies have motivated the application of graph-theoretic approaches to analyze brain connectivity, providing valuable insights into the structural and functional organization of the brain [63,64].
Graph-theoretic measures, such as the clustering coefficient and modularity, have been extensively used in brain network analysis. The clustering coefficient quantifies the extent to which the neighbors of a node are themselves interconnected, providing insights into how different brain regions collaborate to process information. Modularity, in contrast, measures the extent to which a network can be divided into distinct communities or modules with dense intra-community connections and sparse inter-community links, offering a deeper understanding of functional segregation in the brain. Physiologically, these measures have been linked to cognitive processes such as information integration and functional segregation [65,66].
Despite their utility, graph-theoretic approaches have notable limitations. One major challenge is the thresholding problem, where the process of binarizing or sparsifying (creating an adjacency matrix from edge weights) connectivity matrices can significantly affect results, introducing subjectivity and potential bias [67,68]. Additionally, measures like clustering coefficient and modularity are summaries of the graph and may overlook more intricate, multi-scale interactions beyond pairwise relationships within the brain network. This limitation has prompted the exploration of alternative methods that can capture the richer and more detailed features of connectivity.
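The thresholding sensitivity noted above can be seen in a small toy example (our illustration: random weights, arbitrary cutoffs, and a minimal clustering coefficient implementation):

```python
import numpy as np

def clustering_coefficient(adj):
    """Average clustering coefficient of a binary, undirected graph
    given as a numpy adjacency matrix with zero diagonal."""
    n = adj.shape[0]
    coeffs = []
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2   # edges among the neighbors
        coeffs.append(2 * links / (k * (k - 1)))
    return float(np.mean(coeffs))

# Binarizing the same weighted connectivity matrix at different cutoffs
rng = np.random.default_rng(2)
w = rng.uniform(0, 1, (12, 12))
w = (w + w.T) / 2                 # symmetric "connectivity" weights
np.fill_diagonal(w, 0)
for thr in (0.3, 0.5, 0.7):
    adj = (w > thr).astype(int)
    print(thr, clustering_coefficient(adj))   # the summary shifts with the cutoff
```

The same underlying weights yield different clustering coefficients depending solely on the chosen threshold, which is exactly the subjectivity that motivates threshold-free approaches such as persistent homology.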
Topological data analysis (TDA) methods have gained significant momentum in recent years, especially in the analysis of brain signals, due to their ability to characterize the shape and structure of multivariate time series data across multiple scales [69,70]. One of the key tools in TDA, persistent homology (PH), has proven particularly powerful for understanding the topological structure of data. For example, Lee et al. [71] were among the first to introduce PH to brain network analysis, comparing functional networks across groups of children with ADHD, ASD, and typical development. Similarly, Wang et al. [72] applied PH to event-related potentials, successfully detecting differences between post-stroke aphasic individuals and healthy controls under conditions of altered auditory feedback. Additionally, Saggar et al. [73] demonstrated the utility of the Mapper algorithm for reducing the dimensionality of connectivity graphs, thereby facilitating the analysis of dynamic brain networks and task-related effects.
PH analyzes the evolution of topological features, such as connected components (clusters), loops (cycles), and higher-dimensional voids, across a scale parameter. As illustrated in Figure 19, PH constructs a filtration, that is, a nested sequence of simplicial complexes that extend the notion of networks beyond pairwise interactions. By tracking the “birth” and “death” of these features, PH reveals the scales at which significant topological structures emerge, offering a nuanced view of neural connectivity. The figure demonstrates this process with two examples: the top row represents a dataset with two distinct clusters, while the bottom row illustrates a dataset with a single prominent cycle. As the parameter ϵ increases (illustrated by growing balls around the data points in the four rightmost columns), the filtration encodes the topology at different scales. For the clusters, the features persist until they merge, while for the cycle, the loop appears at a certain scale and disappears at another. PH typically summarizes this information using visual tools such as barcodes [74], persistence landscapes [75], persistence images [76], or persistence diagrams as illustrated in the left column of Figure 19, where, for each dimension, the birth–death pairs (b, d) of topological features are plotted as points in the (x, y)-coordinate system, with different dimensions represented by distinct colors. For instance, connected components (H0) are shown as points on the y-axis (b = 0), while cycles (H1) are points in the upper triangle (b < d), with points farther from the diagonal indicating longer persistence.
By tracking these features and representing them in diagrams, PH offers a deeper understanding of the underlying data structure. In the context of brain networks, it reveals connectivity patterns that extend beyond traditional graph-theoretic measures, providing a robust framework for studying neural dynamics and organization. For time series data, the Vietoris–Rips filtration can be constructed in various ways [70].
To analyze the connectivity patterns in the rat’s LFP data, we focus on in-sequence trials (A, B, C, D, E) where the rat made correct decisions recognizing the odor, resulting in a total of 190 trials. For each trial, we estimate coherence matrices across different frequency bands and construct the corresponding persistence diagrams (PDs).
Figure 20 illustrates the results for two selected trials (Trial 1 for odor A and Trial 100 for odor B) at two frequency bands (0–12 Hz and 12–30 Hz). The first row displays the coherence matrices for each trial and frequency band, offering insights into the pairwise interactions between regions. The second row showcases the associated persistence diagrams, which summarize the birth and death times of the 0D and 1D topological features, providing a compact representation of the topological structure inherent in the coherence matrices.
All four coherence matrices appear to display similar information, with three main clusters visible across the trials. Within the rat hippocampus, the distal (first half of the tetrodes) and proximal (second half of the tetrodes) regions exhibit distinct patterns: two clusters are evident in the distal region, while only a single cluster is apparent in the proximal region. However, visually distinguishing the differences between the coherence matrices across trials and frequency bands remains challenging. In contrast, the PDs provide more precise information on the birth and death times of topological features, clearly highlighting differences between the trials that are not easily observable in the coherence matrices.
A persistence diagram \(D_{k}\) for dimension \(k\) is a multiset of birth–death pairs:
\[ D_{k} = \left\{ (b_{i}, d_{i}) \;\middle|\; b_{i}, d_{i} \in \overline{\mathbb{R}},\ b_{i} < d_{i} \right\}, \]
where each \((b_{i}, d_{i})\) encodes the appearance (\(b_{i}\)) and disappearance (\(d_{i}\)) scales of a topological feature of dimension \(k\).
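For intuition, dimension-0 persistence can be computed directly from a distance matrix with a single union-find pass. The sketch below is a minimal illustration of ours (it covers connected components only; cycles and higher dimensions require a full PH library such as Ripser or GUDHI, and the toy point cloud is an assumption):

```python
import numpy as np

def h0_persistence(dist):
    """0-dimensional persistence pairs (b, d) from a distance matrix via a
    Kruskal-style union-find: every component is born at scale 0 and dies
    when it merges; one component never dies (death = inf)."""
    n = dist.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    pairs = []
    for eps, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            pairs.append((0.0, float(eps)))  # a component dies at merge scale eps
    pairs.append((0.0, float("inf")))        # the surviving component
    return pairs

# Two well-separated clusters: one long-lived finite H0 feature plus the immortal one
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
pairs = h0_persistence(d)
print(pairs)  # one pair dies near 4.9, matching the gap between the two clusters
```

The two short-lived pairs (death 0.1) correspond to points merging within each cluster, while the long-lived pair reflects the two-cluster structure itself.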
This approach enables a rigorous comparison of the topological structures across trials, offering insights that surpass traditional graph-theoretic measures. By estimating dependence for each trial and applying PH, we assess the topological features in connectivity graphs. Using the Wasserstein distance (see Equation (17)), we quantify changes in connected components (dimension 0) and cycles (dimension 1) across trials, uncovering subtle differences in neural dynamics:
\[ d_{W}(D_{1}, D_{2}) = \min_{\Gamma} \left( \sum_{(x, y) \in \Gamma} \| x - y \|^{2} \right)^{1/2}, \]
where Γ ranges over all bijective matchings between points in D 1 and points in D 2 (possibly adding diagonal points if needed).
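A brute-force version of this matching can be sketched for tiny diagrams (our illustration only: it assumes the Euclidean norm in the definition above, and practical analyses would use an efficient implementation such as those in persim or GUDHI):

```python
from itertools import permutations

def wasserstein_2(d1, d2):
    """2-Wasserstein distance between two small persistence diagrams, given as
    lists of (birth, death) pairs. Each diagram is augmented with the diagonal
    projections of the other's points, and the optimal bijection is found by
    brute force (feasible only for tiny diagrams)."""
    diag = lambda p: ((p[0] + p[1]) / 2.0,) * 2      # nearest point on the diagonal
    a = [(p, False) for p in d1] + [(diag(q), True) for q in d2]
    b = [(q, False) for q in d2] + [(diag(p), True) for p in d1]

    def cost(u, v):
        (pu, u_diag), (pv, v_diag) = u, v
        if u_diag and v_diag:
            return 0.0                               # diagonal-to-diagonal is free
        return (pu[0] - pv[0]) ** 2 + (pu[1] - pv[1]) ** 2

    best = min(
        sum(cost(a[i], b[j]) for i, j in enumerate(perm))
        for perm in permutations(range(len(b)))
    )
    return best ** 0.5

# A single feature whose death time shifts from 1.0 to 1.2
print(wasserstein_2([(0.0, 1.0)], [(0.0, 1.2)]))  # ~0.2
```

Matching a point to the diagonal models the appearance or disappearance of a feature, so diagrams with different numbers of points remain comparable.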
To analyze the partial correlation dependence, we compute the Wasserstein distances between persistence diagrams across trials. The results are presented in Figure 21, where the top row shows the full distance matrices for dimensions 0 (left) and 1 (right), and the bottom row summarizes averages grouped by odors (lemon, anise, rum, vanilla, and banana). For dimension 0, the variability in connected components is notably smaller for trials corresponding to the first odor (lemon) compared to the others. Conversely, for dimension 1, the second odor (anise) shows the lowest variability, suggesting distinct patterns in topological features depending on the odor and dimension.
The coherence-based analysis highlights frequency-specific topological patterns across trials and odors. Figure 22 focuses on dimension 0, showing that the lemon odor exhibits lower variability in connected components within the delta and theta bands but not in the beta band. This indicates that distinct frequency bands capture different aspects of connectivity. Meanwhile, Figure 23 explores cycles (dimension 1) across the same frequency bands, showing relatively consistent patterns with no strong odor-specific differences.
By combining these results, we highlight the value of PH in identifying nuanced patterns in brain connectivity networks. The coherence-based analysis underscores the ability to pinpoint frequency bands where significant topological changes occur, while the partial correlation analysis reveals odor-specific differences in neural connectivity. Together, these findings demonstrate the potential of persistent homology to provide a deeper understanding of the organization and dynamics of complex neural systems.
In summary, one of the key strengths of TDA lies in its ability to move beyond pairwise interactions and capture higher-order structures in complex networks. While many traditional graph-based tools focus on edges between pairs of nodes, TDA leverages simplicial complexes to incorporate multi-node (higher-order) dependencies. In parallel, new frameworks have emerged to integrate higher-order interactions into measures of dependence such as transfer entropy [77], and recent investigations of brain connectivity underscore the importance of these high-order relationships [78,79]. Advanced TDA approaches like Hodge decomposition have extended standard methodologies (e.g., persistent homology) from handling only symmetric connectivity to accommodating non-symmetric dependency measures [80]. This expansion enables TDA to capture global topological patterns (gradient, local, and global loops) when the underlying networks arise from effective connectivity. The ability to incorporate such higher-order interactions in non-symmetric, directed settings highlights a promising avenue for future research.

8. Discussion

As fundamental measures of linear association, correlation and coherence have been extensively utilized to assess functional connectivity in neuroscience research. Coherence, in particular, offers a more nuanced analysis when dependence is driven by specific oscillations, forming the foundational elements of most brain connectivity analyses.
Building upon these principles, KenCoh has been developed to address some of the limitations inherent in coherence. Specifically, it enhances the ability to discern more complex patterns of connectivity that are not readily apparent with traditional coherence measures. Moreover, it provides region-to-region analyses that align with the spatial orientation of most brain imaging data. In Section 3, we apply KenCoh to LFP data to investigate Buchanan’s brain connectivity during in-sequence and out-of-sequence trials. The results indicate that the same key tetrodes contribute more to global coherence in the beta band during in-sequence trials, contrasting with the findings from out-of-sequence trials. This suggests a potential role for these tetrodes in the pattern-recognition abilities of rats.
Granger causality analysis is designed to capture directional interactions between brain regions, offering deeper insights into effective connectivity. Unlike functional connectivity which reflects statistical associations without implying causation, GC indicates the direction of information flow between regions. This directional information is especially valuable for studying specific pathways, such as those involved in sensory integration or memory formation, without needing to map the entire connectivity structure. Moreover, when combined with sDPCA, GC preserves the dominant oscillatory activity in the broader network before focusing on pairwise interactions, thereby enhancing the reliability of inferred direct influences.
Pairwise GC analysis of the LFP recordings indicates a dominant flow from distal to proximal CA1 across odors, although certain subjects show additional reverse or two-way interactions. One subject exhibits extensive reciprocal connectivity, contrasting the more unilateral patterns observed in others. These findings suggest that hippocampal olfactory processing depends on a shared distal-to-proximal route, with odor-specific link changes superimposed, reflecting subtle variations in how each subject’s CA1 circuitry responds to different odor conditions.
One major advantage of the STE method in analyzing LFP data is that it enables the capturing of nonlinear (possibly cross-frequency) information transfer between nodes in a brain network, with minimal assumptions on the distribution or type of relationship between the signals. That is, it allows for quantifying effective brain connectivity that concentrates on specific frequency bands, which makes it straightforward to link results to well-established findings in cognitive neuroscience. Also, its application to understanding effective brain connectivity is not limited only to LFP data but to other brain imaging modalities such as EEG and functional near infrared spectroscopy (fNIRS). Moreover, its estimation is simple and computationally efficient, as it employs a vine copula approach as illustrated in [52]. Since STE is defined over maximum amplitudes of non-overlapping time blocks, it is fairly robust to spontaneous noise artifacts which may primarily take effect at high-frequency oscillations. These advantages enable us to identify major differences in the magnitude of information flow between the distal and proximal regions of the subjects during in-sequence and out-of-sequence trials in the alpha and beta frequency bands.
A caveat, however, is that the STE approach assumes the stationarity of signals because it requires the extraction of band-specific oscillations through bandpass filtering (e.g., Butterworth filter). The stationarity assumption ensures the extracted signals appropriately capture the oscillations of interest. In our LFP analysis, this is not an issue since the 1.2 s segments we analyze exhibit quasi-stationary behavior. In addition, the temporal resolution of causality captured by STE is relatively slower than the actual sampling rate of the signal due to the aggregation over time blocks. Depending on the choice of block size m, the causal interpretations for the connections measured by STE change. Thus, practical considerations, aligning with the goals of the study, should be made before implementation to achieve its best performance.
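The block-maxima aggregation and its effect on temporal resolution can be sketched in a few lines (an illustrative toy only: the sampling rate, block size, and sinusoidal signal are our assumptions, not the STE pipeline of [52]):

```python
import numpy as np

def block_maxima(x, m):
    """Maximum amplitude over non-overlapping blocks of m samples."""
    T = (len(x) // m) * m                 # drop any incomplete final block
    return np.abs(x[:T]).reshape(-1, m).max(axis=1)

fs = 1000                                  # assumed 1 kHz sampling rate
x = np.sin(2 * np.pi * 10 * np.arange(1200) / fs)   # 10 Hz oscillation over 1.2 s
bm = block_maxima(x, m=50)                 # one causal time point per 50 ms block
print(len(x), len(bm))                     # 1200 24
```

With m = 50, the 1200-sample segment collapses to 24 block maxima, so causal interpretations refer to 50 ms steps rather than individual samples; choosing m therefore trades temporal resolution against robustness to sample-level noise.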
Often brain signals are assumed to be stationary. However, this is not the case in many practical scenarios. Wavelet coherence addresses these challenges effectively by analyzing non-stationary time series and capturing time-varying statistical properties within these signals. The application of wavelet coherence analysis to LFP data helps us identify intriguing interactions between components at different scales across various channels. Furthermore, we observe distinct differences among channels when the rat makes mistakes compared to correct responses, providing valuable insights into the neural dynamics.
Persistent homology provides a robust framework for analyzing brain connectivity in LFP recordings by quantitatively assessing the shape and structure of high-dimensional brain networks. This capability is particularly valuable for investigating how different tasks or conditions affect brain organization and for comparing individuals with varying neurological disorders. Our analysis demonstrates that TDA can reveal subtle, yet meaningful, variations in neural connectivity that conventional methods often overlook. Specifically, persistence diagrams derived from both coherence and partial correlation matrices highlight variations that are both odor-specific and frequency-specific. For instance, in the partial correlation analysis, the persistence diagrams show that trials associated with the lemon odor exhibit reduced variability in connected components (dimension 0), suggesting a more stable clustering of neural activity. In contrast, trials linked to the anise odor display lower variability in cyclic features (dimension 1), indicating more consistent loop structures. Similarly, the coherence-based analysis reveals that low-frequency bands (delta and theta) capture more stable connectivity patterns for the lemon odor compared to the beta band.

8.1. Advantages and Limitations

The methods presented here offer distinct strengths and limitations in capturing the various aspects of neural connectivity. To summarize these findings and aid in method selection, Table 2 presents a comprehensive comparison of the advantages and limitations of each approach, addressing the specific challenges inherent in neural data analysis.

8.2. Future Directions

The theoretical properties of KenCoh remain an open question and warrant further investigation. In particular, it would be interesting to examine the performance of KenCoh when the group sizes of variables become large, i.e., as P and Q tend to infinity. A natural approach to addressing the high dimensionality of the problem is to regularize the canonical directions \(\mathbf{u}\) and \(\mathbf{v}\). For instance, one could impose additional constraints, such as \(\|\mathbf{u}\|_{L_1} = \|\mathbf{v}\|_{L_1} = 1\), to obtain a sparse solution while solving the maximization problem in (10).
In the relatively novel field of neural network-based Granger causality (NN-GC), we see an extension of traditional causality concepts. Conventional approaches assume a linear dependence structure in the data or rely on hand-selected basis functions or kernel transformations, which require domain knowledge and expertise. NN-GC approaches leverage the function approximation power of neural networks to model complex, nonlinear interactions that are not easily captured by standard statistical methods. These methods relax the linearity assumption and learn data-driven features in an end-to-end manner through error backpropagation. Several NN-based methods built on sparse regression have recently been proposed; we refer the interested reader to [81,82,83,84] for more details.
Deep learning-based approaches to GC discovery from observational time series data have considerable potential due to the neural network models’ ability to learn task-specific, data-driven representations. Although sparse regression-based techniques have been proposed in the literature [81,82,83,84], these methods do not provide uncertainty quantification in their estimates. In addition, developing efficient Auto-ML and sensitivity analysis techniques to optimize hyperparameters (e.g., regularization, sparsity, and optimizer settings) and to reduce computational cost, as well as exploring time-varying and multi-scale GC analyses (where neuronal states can switch between connectivity patterns over different frequency bands or behavioral conditions), represent further directions to enhance these methods.
In its current formulation, STE addresses effective connectivity between nodes in a brain network in a pairwise manner. That is, it does not account for how other parts of the network, say signals from a third node, affect the strength of information transfer between the pair of nodes being investigated. Thus, one interesting extension is to develop a new metric based on causation entropy, another information-theoretic measure, which captures the magnitude and direction of information flow between two variables after taking into account the contributions of other variables in the system.
The definition of wavelet coherence can be extended to address more complex scenarios, such as locally stationary partial coherence in the presence of high-dimensional confounders. This extension would further capitalize on the time–frequency localization capabilities of wavelets, offering improved sensitivity in detecting dynamic connectivity patterns.
While most TDA techniques focus on functional connectivity, Hodge decomposition, a method rooted in algebraic topology, offers a complementary perspective on effective connectivity. By decomposing brain connectivity into gradient, curl, and harmonic components, this approach can reveal subtle dynamics in the flow of information that are often disrupted in neurological disorders [80]. Furthermore, integrating this decomposition with machine learning techniques holds promise for detecting abnormal connectivity patterns associated with specific conditions, potentially paving the way for improved diagnostics and targeted therapies (e.g., in epilepsy).

9. Conclusions

In this paper, we analyzed brain connectivity data from the hippocampal region of rats using a diverse set of methods. Our approach spanned traditional techniques such as correlation, partial correlation, and coherence, as well as advanced methods including Granger causality, robust canonical coherence, spectral transfer entropy, wavelet coherence, and persistent homology. By comparing these techniques, we provided a detailed examination of their strengths, limitations, and their applicability to uncovering the complex interactions within neural systems.
Our findings demonstrate that classical methods serve as a reliable foundation for capturing linear and stationary relationships, while advanced techniques are better suited to capture nonlinear, dynamic, multi-scale, and higher-order interactions.
The application of these methods to hippocampal LFP data revealed nuanced, odor-specific, and frequency-specific patterns in connectivity, which underscore the complex organization of neural circuits underlying nonspatial olfactory processing. Despite these promising results, several challenges remain, including the need for careful parameter tuning, computational efficiency, and improved interpretability of some of these advanced techniques.
Integrating these diverse methods into unified frameworks that leverage their complementary strengths could offer even deeper insights into brain connectivity. Moreover, the development of scalable algorithms and user-friendly software tools is essential for translating these advanced techniques into practical applications for neuroscience research.
By presenting a comprehensive suite of methods and applying them to hippocampal LFP data, this study aims to pave the way for further exploration and innovation in brain connectivity analysis.

Author Contributions

Conceptualization: A.B.E.-Y. and H.O.; Formal analysis: A.B.E.-Y., S.A., F.G., P.V.R., S.R., M.S.T., F.T.T. and H.W.; Funding acquisition: H.O.; Investigation: A.B.E.-Y., S.A., F.G., P.V.R., S.R., M.S.S., M.S.T., F.T.T. and H.W.; Methodology: A.B.E.-Y., S.A., F.G., P.V.R., S.R., M.S.S., M.S.T., F.T.T. and H.W.; Project administration: A.B.E.-Y. and H.O.; Resources: K.W.C., N.J.F. and H.O.; Software: A.B.E.-Y., S.A., F.G., P.V.R., S.R., M.S.S., M.S.T., F.T.T. and H.W.; Supervision: A.B.E.-Y. and H.O.; Validation: A.B.E.-Y., S.A., F.G., P.V.R., S.R., M.S.S., M.S.T., F.T.T. and H.W.; Visualization: A.B.E.-Y., S.A., F.G., P.V.R., S.R., M.S.S., M.S.T., F.T.T. and H.W.; Writing—original draft: A.B.E.-Y., S.A., F.G., P.V.R., S.R., M.S.S., M.S.T., F.T.T. and H.W.; Writing—review and editing: A.B.E.-Y., S.A., F.G., P.V.R., S.R., M.S.S., M.S.T., F.T.T. and H.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the King Abdullah University of Science and Technology (KAUST).

Institutional Review Board Statement

This research involves the use of data previously collected in another study. Our study did not require new Institutional Review Board (IRB) approval, as it used existing data, which complies with the ethical standards for secondary data analysis. Researchers interested in further details about the data collection ethics and approvals should refer to the original study documentation or contact the original study team.

Data Availability Statement

The data are not publicly available. Researchers interested in accessing the data can contact Norbert J. Fortin at norbert.fortin@uci.edu.

Acknowledgments

The authors gratefully acknowledge the support and resources provided by the King Abdullah University of Science and Technology (KAUST).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
EEG	Electroencephalogram
ECoG	Electrocorticogram
LFP	Local Field Potentials
fMRI	functional Magnetic Resonance Imaging
CBC	Canonical Band-Coherence
MCD	Minimum Covariance Determinant
SMOTE	Synthetic Minority Over-sampling Technique
PCA	Principal Component Analysis
sDPCA	Spectral Dynamic Principal Component Analysis
VAR	Vector Autoregressive
GC	Granger Causality
NN	Neural Networks
NOI	Node of Interest
TE	Transfer Entropy
STE	Spectral Transfer Entropy
LSW	Locally Stationary Wavelet
MvLSW	Multivariate Locally Stationary Wavelet
TDA	Topological Data Analysis
PH	Persistence Homology
PD	Persistence Diagram
ADHD	Attention Deficit Hyperactivity Disorder
ASD	Autism Spectrum Disorder

References

  1. Sultana, O.F.; Bandaru, M.; Islam, M.A.; Reddy, P.H. Unraveling the complexity of human brain: Structure, function in healthy and disease states. Ageing Res. Rev. 2024, 100, 102414. [Google Scholar] [CrossRef]
  2. van Bree, S. A Critical Perspective on Neural Mechanisms in Cognitive Neuroscience: Towards Unification. Perspect. Psychol. Sci. 2024, 19, 993–1010. [Google Scholar] [CrossRef]
  3. Yen, C.; Lin, C.L.; Chiang, M.C. Exploring the Frontiers of Neuroimaging: A Review of Recent Advances in Understanding Brain Functioning and Disorders. Life 2023, 13, 1472. [Google Scholar] [CrossRef]
  4. Sakkalis, V. Review of advanced techniques for the estimation of brain connectivity measured with EEG/MEG. Comput. Biol. Med. 2011, 41, 1110–1117. [Google Scholar] [CrossRef]
  5. Simpson, S.L.; Shappell, H.M.; Bahrami, M. Statistical Brain Network Analysis. Annu. Rev. Stat. Its Appl. 2024, 11, 505–531. [Google Scholar] [CrossRef]
  6. Friston, K.J. Functional and effective connectivity: A review. Brain Connect. 2011, 1, 13–36. [Google Scholar] [CrossRef]
  7. Rowe, J.B. Connectivity Analysis is Essential to Understand Neurological Disorders. Front. Syst. Neurosci. 2010, 4, 144. [Google Scholar] [CrossRef]
  8. Stam, C.J. Modern Network Science of Neurological Disorders. Nat. Rev. Neurosci. 2014, 15, 683–695. [Google Scholar] [CrossRef] [PubMed]
  9. Fornito, A.; Zalesky, A.; Breakspear, M. The Connectomics of Brain Disorders. Nat. Rev. Neurosci. 2015, 16, 159–172. [Google Scholar] [CrossRef]
  10. Teyler, T.J.; DiScenna, P. The Hippocampal Memory Indexing Theory. Behav. Neurosci. 1986, 100, 147–154. [Google Scholar] [CrossRef]
  11. Fortin, N.J.; Agster, K.L.; Eichenbaum, H.B. Critical Role of the Hippocampus in Memory for Sequences of Events. Nat. Neurosci. 2002, 5, 458–462. [Google Scholar] [CrossRef]
  12. Eichenbaum, H. The role of the hippocampus in navigation is memory. J. Neurophysiol. 2017, 117, 1785–1796. [Google Scholar] [CrossRef]
  13. Clark, R.E.; Squire, L.R. Similarity in form and function of the hippocampus in rodents, monkeys, and humans. Proc. Natl. Acad. Sci. USA 2013, 110, 10365–10370. [Google Scholar] [CrossRef]
  14. Allen, T.A.; Salz, D.M.; McKenzie, S.; Fortin, N.J. Nonspatial sequence coding in CA1 neurons. J. Neurosci. 2016, 36, 1547–1563. [Google Scholar] [CrossRef]
  15. Marchant, J.K.; Ferris, N.G.; Grass, D.; Allen, M.S.; Gopalakrishnan, V.; Olchanyi, M.; Sehgal, D.; Sheft, M.; Strom, A.; Bilgic, B.; et al. Mesoscale Brain Mapping: Bridging Scales and Modalities in Neuroimaging—A Symposium Review. Neuroinformatics 2024, 22, 697–706. [Google Scholar] [CrossRef]
  16. Lang, E.W.; Tomé, A.M.; Keck, I.R.; Górriz, J.M.; Puntonet, C.G. Brain Connectivity Analysis: A Short Survey. Comput. Intell. Neurosci. 2012, 2012, 412512. [Google Scholar] [CrossRef]
  17. Shumway, R.H.; Stoffer, D.S. Time Series Analysis and Its Applications; Springer: Cham, Switzerland, 2000; Volume 3. [Google Scholar]
  18. Bowyer, S.M. Coherence a measure of the brain networks: Past and present. Neuropsychiatr. Electrophysiol. 2016, 2, 1–12. [Google Scholar] [CrossRef]
  19. You, S.D. Classification of Relaxation and Concentration Mental States with EEG. Information 2021, 12, 187. [Google Scholar] [CrossRef]
  20. Newson, J.J.; Thiagarajan, T.C. EEG Frequency Bands in Psychiatric Disorders: A Review of Resting State Studies. Front. Hum. Neurosci. 2019, 12, 521. [Google Scholar] [CrossRef]
  21. Brillinger, D.R. Time Series: Data Analysis and Theory; SIAM: Philadelphia, PA, USA, 2001. [Google Scholar]
  22. Talento, M.S.D.; Roy, S.; Ombao, H.C. KenCoh: A Ranked-Based Canonical Coherence. arXiv 2024, arXiv:2412.10521. [Google Scholar]
  23. Rousseeuw, P.J.; Driessen, K.V. A fast algorithm for the minimum covariance determinant estimator. Technometrics 1999, 41, 212–223. [Google Scholar]
  24. Rousseeuw, P.J. Least median of squares regression. J. Am. Stat. Assoc. 1984, 79, 871–880. [Google Scholar]
  25. Kendall, M.G. A new measure of rank correlation. Biometrika 1938, 30, 81–93. [Google Scholar]
  26. Ferguson, T.S.; Genest, C.; Hallin, M. Kendall’s tau for serial dependence. Can. J. Stat. 2000, 28, 587–604. [Google Scholar]
  27. Fang, H.B.; Fang, K.T.; Kotz, S. The meta-elliptical distributions with given marginals. J. Multivar. Anal. 2002, 82, 1–16. [Google Scholar]
  28. Bernholt, T.; Fischer, P. The complexity of computing the MCD-estimator. Theor. Comput. Sci. 2004, 326, 383–398. [Google Scholar]
  29. Kay, L.M.; Beshel, J. A beta oscillation network in the rat olfactory system during a 2-alternative choice odor discrimination task. J. Neurophysiol. 2010, 104, 829–839. [Google Scholar]
  30. Vardi, Y.; Zhang, C.H. The multivariate L1-median and associated data depth. Proc. Natl. Acad. Sci. USA 2000, 97, 1423–1426. [Google Scholar]
  31. Benjamini, Y.; Hochberg, Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B Methodol. 1995, 57, 289–300. [Google Scholar]
  32. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar]
  33. Kalisch, M.; Bühlmann, P. Causal structure learning and inference: A selective review. Qual. Technol. Quant. Manag. 2014, 11, 3–21. [Google Scholar]
  34. Wang, Y.; Ting, C.M.; Ombao, H. Modeling effective connectivity in high-dimensional cortical source signals. IEEE J. Sel. Top. Signal Process. 2016, 10, 1315–1325. [Google Scholar]
  35. Ting, C.M.; Seghouane, A.K.; Salleh, S.H. Estimation of high-dimensional connectivity in fMRI data via subspace autoregressive models. In Proceedings of the 2016 IEEE Statistical Signal Processing Workshop (SSP), Palma de Mallorca, Spain, 26–29 June 2016; pp. 1–5. [Google Scholar]
  36. Zarghami, T.S.; Friston, K.J. Dynamic effective connectivity. Neuroimage 2020, 207, 116453. [Google Scholar]
  37. Siggiridou, E.; Kugiumtzis, D. Dimension reduction of polynomial regression models for the estimation of Granger causality in high-dimensional time series. IEEE Trans. Signal Process. 2021, 69, 5638–5650. [Google Scholar]
  38. Shojaie, A.; Fox, E.B. Granger causality: A review and recent advances. Annu. Rev. Stat. Its Appl. 2022, 9, 289–319. [Google Scholar] [CrossRef]
  39. Wang, Y.S.; Drton, M. High-dimensional causal discovery under non-Gaussianity. Biometrika 2020, 107, 41–59. [Google Scholar]
  40. Basu, S.; Das, S.; Michailidis, G.; Purnanandam, A. A high-dimensional approach to measure connectivity in the financial sector. Ann. Appl. Stat. 2024, 18, 922–945. [Google Scholar]
  41. Brillinger, D.R. The canonical analysis of stationary time series. Multivar. Anal. 1969, 2, 331–350. [Google Scholar]
  42. Hörmann, S.; Kidziński, Ł.; Hallin, M. Dynamic functional principal components. J. R. Stat. Soc. Ser. B Stat. Methodol. 2015, 77, 319–348. [Google Scholar]
  43. Stoffer, D.S. Detecting common signals in multiple time series using the spectral envelope. J. Am. Stat. Assoc. 1999, 94, 1341–1356. [Google Scholar]
  44. Granger, C.W. Investigating causal relations by econometric models and cross-spectral methods. Econom. J. Econom. Soc. 1969, 37, 424–438. [Google Scholar] [CrossRef]
  45. Agster, K.L.; Burwell, R.D. Cortical efferents of the perirhinal, postrhinal, and entorhinal cortices of the rat. Hippocampus 2009, 19, 1159–1186. [Google Scholar] [CrossRef]
  46. Zhou, W.; Qu, A.; Cooper, K.W.; Fortin, N.; Shahbaba, B. A model-agnostic graph neural network for integrating local and global information. J. Am. Stat. Assoc. 2024, 1–14. [Google Scholar] [CrossRef]
  47. Schreiber, T. Measuring information transfer. Phys. Rev. Lett. 2000, 85, 461. [Google Scholar]
  48. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  49. Florin, E.; Gross, J.; Pfeifer, J.; Fink, G.R.; Timmermann, L. The effect of filtering on Granger causality based multivariate causality measures. Neuroimage 2010, 50, 577–588. [Google Scholar] [CrossRef]
  50. Barnett, L.; Seth, A.K. Behaviour of Granger causality under filtering: Theoretical invariance and practical application. J. Neurosci. Methods 2011, 201, 404–419. [Google Scholar] [CrossRef]
  51. Seth, A.K.; Barrett, A.B.; Barnett, L. Granger causality analysis in neuroscience and neuroimaging. J. Neurosci. 2015, 35, 3293–3297. [Google Scholar] [CrossRef]
  52. Redondo, P.V.; Huser, R.; Ombao, H. Measuring information transfer between nodes in a brain network through spectral transfer entropy. arXiv 2023, arXiv:2303.06384. [Google Scholar]
  53. Aylwin, M.; Aguilar, G.; Flores, F.; Maldonado, P. Odorant modulation of neuronal activity and local field potential in sensory-deprived olfactory bulb. Neuroscience 2009, 162, 1265–1278. [Google Scholar] [CrossRef]
  54. Carlson, K.S.; Dillione, M.R.; Wesson, D.W. Odor-and state-dependent olfactory tubercle local field potential dynamics in awake rats. J. Neurophysiol. 2014, 111, 2109–2123. [Google Scholar] [CrossRef]
  55. Chery, R.; Gurden, H.; Martin, C. Anesthetic regimes modulate the temporal dynamics of local field potential in the mouse olfactory bulb. J. Neurophysiol. 2014, 111, 908–917. [Google Scholar] [PubMed]
  56. Daubechies, I. Ten Lectures on Wavelets; SIAM: Philadelphia, PA, USA, 1992. [Google Scholar]
  57. Morlet, J.; Arens, G.; Fourgeau, E.; Giard, D. Wave propagation and sampling theory; Part I, Complex signal and scattering in multilayered media. Geophysics 1982, 47, 203–221. [Google Scholar]
  58. Nason, G.P.; Sachs, R.; Kroisandt, G. Wavelet processes and adaptive estimation of the evolutionary wavelet spectrum. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 2000, 62, 271–292. [Google Scholar]
  59. Park, T.; Eckley, I.; Ombao, H. Estimating Time-Evolving Partial Coherence Between Signals via Multivariate Locally Stationary Wavelet Processes. IEEE Trans. Signal Process. 2014, 62, 5240–5250. [Google Scholar]
  60. Wu, H.; Knight, M.; Ombao, H. Multi-scale wavelet coherence with its applications. arXiv 2023, arXiv:2305.10878. [Google Scholar]
  61. Watts, D.J.; Strogatz, S.H. Collective dynamics of ‘small-world’ networks. Nature 1998, 393, 440–442. [Google Scholar] [CrossRef]
  62. Barabasi, A.L.; Albert, R. Emergence of Scaling in Random Networks. Science 1999, 286, 509–512. [Google Scholar] [CrossRef]
  63. Sporns, O. Graph theory methods: Applications in brain networks. Dialogues Clin. Neurosci. 2018, 20, 111–121. [Google Scholar] [CrossRef]
  64. Miraglia, F.; Vecchio, F.; Pappalettera, C.; Nucci, L.; Cotelli, M.; Judica, E.; Ferreri, F.; Rossini, P.M. Brain Connectivity and Graph Theory Analysis in Alzheimer’s and Parkinson’s Disease: The Contribution of Electrophysiological Techniques. Brain Sci. 2022, 12, 402. [Google Scholar] [CrossRef]
  65. Han, L.; Chan, M.Y.; Agres, P.F.; Winter-Nelson, E.; Zhang, Z.; Wig, G.S. Measures of resting-state brain network segregation and integration vary in relation to data quantity: Implications for within and between subject comparisons of functional brain network organization. Cereb. Cortex 2024, 34, bhad506. [Google Scholar] [CrossRef]
  66. Jang, H.; Mashour, G.A.; Hudetz, A.G.; Huang, Z. Measuring the dynamic balance of integration and segregation underlying consciousness, anesthesia, and sleep in humans. Nat. Commun. 2024, 15, 9164. [Google Scholar] [CrossRef] [PubMed]
  67. Langer, N.; Pedroni, A.; Jäncke, L. The Problem of Thresholding in Small-World Network Analysis. PLoS ONE 2013, 8, e53199. [Google Scholar] [CrossRef]
  68. Bordier, C.; Nicolini, C.; Bifone, A. Graph Analysis and Modularity of Brain Functional Connectivity Networks: Searching for the Optimal Threshold. Front. Neurosci. 2017, 11, 441. [Google Scholar] [CrossRef]
  69. Centeno, E.G.Z.; Moreni, G.; Vriend, C.; Douw, L.; Santos, F.A.N. A hands-on tutorial on network and topological neuroscience. Brain Struct. Funct. 2022, 227, 741–762. [Google Scholar] [CrossRef]
  70. El-Yaagoubi, A.B.; Chung, M.K.; Ombao, H. Topological Data Analysis for Multivariate Time Series Data. Entropy 2023, 25, 1509. [Google Scholar] [CrossRef]
  71. Lee, H.; Kang, H.; Chung, M.K.; Kim, B.N.; Lee, D.S. Persistent Brain Network Homology From the Perspective of Dendrogram. IEEE Trans. Med. Imaging 2012, 31, 2267–2277. [Google Scholar] [CrossRef]
  72. Wang, Y.; Behroozmand, R.; Johnson, L.P.; Bonilha, L.; Fridriksson, J. Topological signal processing and inference of event-related potential response. J. Neurosci. Methods 2021, 363, 109324. [Google Scholar] [CrossRef]
  73. Saggar, M.; Sporns, O.; Gonzalez-Castillo, J.; Bandettini, P.A.; Carlsson, G.; Glover, G.; Reiss, A.L. Towards a new approach to reveal dynamical organization of the brain using topological data analysis. Nat. Commun. 2018, 9, 1399. [Google Scholar] [CrossRef]
  74. Ghrist, R. Barcodes: The persistent topology of data. Bull. Am. Math. Soc. 2008, 45, 61–75. [Google Scholar] [CrossRef]
  75. Bubenik, P. Statistical Topological Data Analysis Using Persistence Landscapes. J. Mach. Learn. Res. 2015, 16, 77–102. [Google Scholar]
  76. Adams, H.; Emerson, T.; Kirby, M.; Neville, R.; Peterson, C.; Shipman, P. Persistence images: A stable vector representation of persistent homology. J. Mach. Learn. Res. 2017, 18, 1–35. [Google Scholar]
  77. Stramaglia, S.; Faes, L.; Cortes, J.M.; Marinazzo, D. Disentangling high-order effects in the transfer entropy. Phys. Rev. Res. 2024, 6, L032007. [Google Scholar] [CrossRef]
  78. Herzog, R.; Rosas, F.E.; Whelan, R.; Fittipaldi, S.; Santamaria-Garcia, H.; Cruzat, J.; Birba, A.; Moguilner, S.; Tagliazucchi, E.; Prado, P.; et al. Genuine high-order interactions in brain networks and neurodegeneration. Neurobiol. Dis. 2022, 175, 105918. [Google Scholar] [CrossRef]
  79. Santoro, A.; Battiston, F.; Lucas, M.; Petri, G.; Amico, E. Higher-order connectomics of human brain function reveals local topological signatures of task decoding, individual identification, and behavior. Nat. Commun. 2024, 15, 10244. [Google Scholar] [CrossRef]
  80. El-Yaagoubi, A.B.; Chung, M.K.; Ombao, H. Topological Analysis of Seizure-Induced Changes in Brain Hierarchy Through Effective Connectivity. In Proceedings of the Topology- and Graph-Informed Imaging Informatics: First International Workshop, TGI3 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, 10 October 2024; Springer: Cham, Switzerland, 2024; pp. 134–145. [Google Scholar] [CrossRef]
  81. Tank, A.; Covert, I.; Foti, N.; Shojaie, A.; Fox, E.B. Neural granger causality. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4267–4279. [Google Scholar]
  82. Marcinkevičs, R.; Vogt, J.E. Interpretable models for granger causality using self-explaining neural networks. arXiv 2021, arXiv:2101.07600. [Google Scholar]
  83. Cheng, Y.; Yang, R.; Xiao, T.; Li, Z.; Suo, J.; He, K.; Dai, Q. Cuts: Neural causal discovery from irregular time-series data. arXiv 2023, arXiv:2302.07458. [Google Scholar]
  84. Cheng, Y.; Li, L.; Xiao, T.; Li, Z.; Suo, J.; He, K.; Dai, Q. CUTS+: High-dimensional causal discovery from irregular time-series. Proc. AAAI Conf. Artif. Intell. 2024, 38, 11525–11533. [Google Scholar]
Figure 1. Experimental setup, including the trial scheme for in-sequence and out-of-sequence odor presentations and the positioning of tetrodes in the hippocampal CA1 region.
Figure 2. The mean correlation matrices of LFP recordings from (a) in-sequence and (b) out-of-sequence trials performed with vanilla. The boxplot in (c) visualizes the distributions of the correlations of LFP recordings between T11 and T21.
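A minimal sketch of the trial-averaging behind correlation matrices such as those in Figure 2; the trial count, channel count, and (time × tetrode) array layout below are illustrative assumptions, not the study's actual data or pipeline.

```python
import numpy as np

def mean_correlation(trials):
    """Average the Pearson correlation matrix over a list of trials.

    Each trial is an (n_times, n_tetrodes) array of LFP samples;
    columns are tetrodes, so rowvar=False correlates channels.
    """
    return np.mean([np.corrcoef(tr, rowvar=False) for tr in trials], axis=0)

# Illustrative data: 5 trials of 1000 samples from 4 hypothetical tetrodes
rng = np.random.default_rng(0)
trials = [rng.standard_normal((1000, 4)) for _ in range(5)]
R = mean_correlation(trials)  # 4 x 4 symmetric matrix with unit diagonal
```

Averaging the per-trial matrices (rather than pooling samples) keeps each trial's contribution equal regardless of trial length.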
Figure 3. The mean partial correlation matrices of LFP recordings from (a) in-sequence and (b) out-of-sequence trials performed with rum. The boxplot in (c) visualizes the distributions of the correlations of LFP recordings between T5 and T20.
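The partial correlations summarized in Figure 3 can be read off the inverse (precision) of the correlation matrix; a hedged sketch on synthetic data, where the channel count and sample size are illustrative.

```python
import numpy as np

def partial_correlation(x):
    """Partial correlation matrix of an (n_times, n_channels) array,
    obtained by standardizing the off-diagonal of the precision matrix."""
    prec = np.linalg.inv(np.corrcoef(x, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)   # conditional association of each pair
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Illustrative single-trial computation on 6 hypothetical channels
rng = np.random.default_rng(1)
x = rng.standard_normal((2000, 6))
P = partial_correlation(x)
```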
Figure 4. A sample LFP signal decomposition and spectral power.
Figure 5. The average pairwise coherence for the (a) alpha and (b) gamma frequency bands.
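Band-averaged pairwise coherence, as summarized in Figure 5, can be sketched with Welch-based magnitude-squared coherence; the sampling rate, segment length, and the 8–12 Hz alpha band limits below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, band):
    """Magnitude-squared coherence between two signals, averaged over a
    frequency band (lo, hi) in Hz."""
    f, cxy = coherence(x, y, fs=fs, nperseg=256)
    lo, hi = band
    mask = (f >= lo) & (f <= hi)
    return float(cxy[mask].mean())

fs = 1000.0  # assumed sampling rate
rng = np.random.default_rng(1)
t = np.arange(2000) / fs
shared = np.sin(2 * np.pi * 10 * t)            # common 10 Hz component
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)
alpha = band_coherence(x, y, fs, (8, 12))      # high: shared oscillation
gamma = band_coherence(x, y, fs, (30, 50))     # low: independent noise
```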
Figure 6. Spatial arrangement of tetrodes, filtered signals, and their weighted combinations. The spatial arrangement of 20 tetrodes from Buchanan is shown in x-y coordinates, divided into proximal (orange) and distal (brown) sections (left). Filtered signals within the frequency range Ω = ( 12 30 ) Hz from the 20 tetrodes are displayed (middle). Weighted linear combinations of the signals are computed separately for each section (right).
Figure 7. Multivariate spatial median of absolute canonical directions in the 12–30 Hz (beta band) frequency range for in-sequence (top) and out-of-sequence (bottom) odor presentation, for odors A–E.
Figure 8. Significant adjusted p-values obtained using the permutation test for the five odors presented in-sequence (left panel) and out-of-sequence (right panel).
Figure 9. Granger causality in a high-dimensional network using the sDPCA approach.
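Figure 9 applies Granger causality after sDPCA reduction; the reduction step is beyond a short sketch, but the GC score itself reduces to comparing residual variances of restricted (own lags) and unrestricted (own plus driver lags) autoregressions. A simplified bivariate version, with order, coefficients, and data chosen purely for illustration:

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Granger causality from y to x as the log ratio of residual
    variances of a restricted vs. unrestricted order-p least-squares
    autoregression of x."""
    n = len(x)
    own = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    full = np.column_stack(
        [x[p - k - 1:n - k - 1] for k in range(p)]
        + [y[p - k - 1:n - k - 1] for k in range(p)]
    )
    target = x[p:]

    def resid_var(design):
        d = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(d, target, rcond=None)
        return np.mean((target - d @ beta) ** 2)

    return float(np.log(resid_var(own) / resid_var(full)))

# Illustrative system where y drives x but not conversely
rng = np.random.default_rng(2)
n = 2000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.2 * x[t - 1] + 0.6 * y[t - 1] + 0.1 * rng.standard_normal()
gc_y_to_x = granger_causality(x, y)  # large: past of y predicts x
gc_x_to_y = granger_causality(y, x)  # near zero: y is white noise
```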
Figure 10. GC connectivities across odors A, B, C, D, and E for each subject.
Figure 11. Illustration of carrier signals and modulating signals producing the band-specific oscillations and the aggregation to series of maximum amplitude over non-overlapping time blocks for the STE measure.
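The aggregation step illustrated in Figure 11 — band-limited amplitude reduced to maxima over non-overlapping time blocks — can be sketched as follows; the filter order, band limits, block length, and sampling rate are illustrative assumptions, not the STE paper's exact settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def block_max_amplitude(x, fs, band, block_len):
    """Band-pass filter a signal, take its instantaneous amplitude via
    the Hilbert transform, and keep the maximum over non-overlapping
    blocks of block_len samples."""
    lo, hi = band
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))
    n_blocks = len(env) // block_len
    return env[: n_blocks * block_len].reshape(n_blocks, block_len).max(axis=1)

fs = 1000.0  # assumed sampling rate
rng = np.random.default_rng(3)
x = rng.standard_normal(4000)
m = block_max_amplitude(x, fs, (8.0, 12.0), block_len=200)  # 20 block maxima
```

Zero-phase `sosfiltfilt` avoids shifting the envelope in time, which matters when block maxima feed a directional measure such as STE.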
Figure 12. Distribution of S T E α ( X q X p ) (alpha band) across all relevant “distal–proximal”-tetrode pairs for correct and incorrect in-sequence trials of subjects (a) Superchris presented with the rum odor, and (b) Mitt presented with the lemon odor.
Figure 13. Distribution of S T E β ( X q X p ) (beta band) across all relevant “distal–proximal”-tetrode pairs for correct and incorrect in-sequence trials of subjects (a) Superchris presented with the rum odor, and (b) Mitt presented with the lemon odor.
Figure 14. Non-stationary signal and its wavelet decomposition. The top panel displays a non-stationary signal, while the bottom panel shows multiple wavelet functions obtained by scaling and shifting a base wavelet. These wavelets act as time-localized filters that capture different features of the signal at various scales.
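The scaled-and-shifted wavelet filters in Figure 14 correspond to a continuous wavelet transform; below is a self-contained complex-Morlet sketch, where the center frequency w0, the analysis frequencies, and the normalization are conventional choices rather than values taken from the paper.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet;
    one row of coefficients per analysis frequency in Hz."""
    coeffs = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        scale = w0 * fs / (2 * np.pi * f)      # samples per wavelet time unit
        tt = np.arange(-int(4 * scale), int(4 * scale) + 1) / scale
        psi = np.pi ** -0.25 * np.exp(1j * w0 * tt - tt ** 2 / 2) / np.sqrt(scale)
        coeffs[i] = np.convolve(x, psi, mode="same")
    return coeffs

# A pure 10 Hz tone should dominate the 10 Hz row of the scalogram
fs = 200.0
t = np.arange(1000) / fs
x = np.sin(2 * np.pi * 10 * t)
W = morlet_cwt(x, fs, [5.0, 10.0, 20.0])
power = (np.abs(W) ** 2).mean(axis=1)
```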
Figure 15. Single-scale coherence among the LFP channels of Superchris; the first row shows results from trials with a correct response in the behavioral test, and the second row from trials with an incorrect response.
Figure 16. Cross-scale coherence for the given LFP channel pair of Superchris, for trials with a correct response and with an incorrect response, respectively.
Figure 17. The p-values from the permutation test for cross-scale coherence for the given LFP channel pair across different trials of Superchris.
Figure 18. The average time-varying cross-scale wavelet coherence between subprocesses at specific channels and scales, averaged across the corresponding trials.
Figure 19. Two examples of Vietoris–Rips filtrations on point-cloud data, along with their corresponding persistence diagrams. The top example shows two distinct clusters, and the bottom example features a single cycle. The four columns on the right illustrate ball coverings (and their nerves) at increasing radii. Persistence diagrams plot birth–death pairs ( b , d ) for topological features, with different dimensions represented by distinct colors.
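The dimension-0 part of the Vietoris–Rips filtration described in the caption of Figure 19 (components born at scale 0, dying when they merge) can be computed with a union-find sweep over sorted edges; real analyses, including the 1-dimensional cycle shown, would typically use a library such as GUDHI or Ripser. A hedged, self-contained sketch:

```python
import numpy as np

def h0_persistence(points):
    """Death scales of dimension-0 features in a Vietoris-Rips filtration
    of a point cloud: each component is born at scale 0 and dies when it
    merges into another (Kruskal-style union-find over sorted edges).
    "Scale" here is the edge length; some conventions use length / 2.
    The one component that never dies is omitted."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    edges = sorted((dist[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)  # a component dies at this scale
    return deaths

# Two tight clusters: two short-lived bars, then one merge across the gap
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
bars = h0_persistence(pts)
```

The long-lived bar at the inter-cluster gap is exactly the kind of feature the persistence diagrams in Figure 19 record as a point far from the diagonal.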
Figure 20. Persistence homology based on coherence matrices. The first row shows coherence matrices for Trial 1 (0–12 Hz, 12–30 Hz) and Trial 100 (0–12 Hz, 12–30 Hz), from left to right. The second row displays the corresponding persistence diagrams.
Figure 21. Topological analysis of partial correlation dependence. The top row presents the full Wasserstein distance matrices for dimensions 0 (left) and 1 (right). The bottom row displays averages over trials for each odor, illustrating variability in connected components and cycles across odors. For dimension 0, the first odor (lemon) shows reduced variability, while dimension 1 highlights minimal changes for the second odor (anise).
Figure 22. Low-frequency range (0–12 Hz). Wasserstein distance matrices for coherence-based analysis in dimension 0 across three frequency bands: delta, theta, and beta (left to right). The top row shows full matrices, while the bottom row presents averages over trials for each odor. The lemon odor exhibits lower variability in the delta and theta bands but not in the beta band.
Figure 23. Medium-frequency range (12–30 Hz). Wasserstein distance matrices for coherence-based analysis in dimension 1 across three frequency bands: delta, theta, and beta (left to right). Similar to dimension 0, the top row shows full matrices, and the bottom row provides averages over trials for each odor. Changes in cyclic patterns appear relatively consistent across odors and frequency bands.
Table 1. Details of the trials conducted on the five subjects.
Subject      No. of Tetrodes   In-Sequence Correct   In-Sequence Incorrect   Out-of-Sequence Correct   Out-of-Sequence Incorrect
Barat        22                154                   11                      11                        0
Buchanan     20                203                   23                      29                        15
Mitt         22                230                   32                      16                        14
Stella       21                176                   18                      23                        5
Superchris   21                190                   20                      26                        4
Table 2. Comparative summary of methods for brain connectivity analysis.
KenCoh
  Advantages:
  • Looks beyond pairwise association.
  • Robust to outliers.
  • The estimator has a closed-form expression, unlike its robust alternatives.
  • Computationally efficient.
  Limitations:
  • Assumes stationarity of the time series.
  • The vector of random amplitudes is assumed to have an elliptic density.
  • The components of the vector of random amplitudes are assumed to have equal variances.
sDPCA-GC
  Advantages:
  • Preserves key oscillatory patterns via frequency-aware reduction.
  • Permits standard GC for interactions between nodes-of-interest.
  • Straightforward to implement and interpret once components are derived.
  • Robust to moderate noise.
  Limitations:
  • Selecting the number of principal components is non-trivial.
  • Assumes linearity and stationarity.
  • Physiological interpretation of dynamic principal scores is not straightforward.
STE
  Advantages:
  • Captures nonlinear information transfer with minimal assumptions.
  • Straightforward to link results to cognitive neuroscience.
  • Simple and computationally efficient estimation.
  • Robust to spontaneous noise artifacts.
  Limitations:
  • Assumes stationarity of signals.
  • Temporal resolution of causality is limited by the aggregation over time blocks.
WaveletCoh
  Advantages:
  • Maintains time and scale information.
  • Effective for capturing time-varying statistical properties within non-stationary time series.
  Limitations:
  • Scale does not perfectly correspond to specific frequency bands, making interpretations challenging.
TDA-PH
  Advantages:
  • Avoids thresholding weighted networks.
  • Ability to identify complex topological patterns.
  • Considers higher-order interactions.
  • Robust to moderate noise.
  Limitations:
  • Computationally intensive, especially for large networks with thousands of nodes.
  • Global-level results that can be hard to interpret.
  • Sensitive to outliers.
  • Cannot handle directed networks.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

El-Yaagoubi, A.B.; Aslan, S.; Gomawi, F.; Redondo, P.V.; Roy, S.; Sultan, M.S.; Talento, M.S.; Tarrazona, F.T.; Wu, H.; Cooper, K.W.; et al. Methods for Brain Connectivity Analysis with Applications to Rat Local Field Potential Recordings. Entropy 2025, 27, 328. https://doi.org/10.3390/e27040328

