Review

Multiscale Brain Network Models and Their Applications in Neuropsychiatric Diseases

School of Information Technology Engineering, Tianjin University of Technology and Education, Tianjin 300222, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(21), 3468; https://doi.org/10.3390/electronics11213468
Submission received: 28 September 2022 / Revised: 16 October 2022 / Accepted: 21 October 2022 / Published: 26 October 2022

Abstract

With the rapid development of advanced neuroimaging techniques, understanding the brain in terms of structural and functional connectomes has become one of the frontier topics in neuroscience. Unlike traditional descriptive brain network models, which focus on a single neuroimaging modality and temporal scale, multiscale brain network models that couple mesoscopic neuronal activity with macroscopic functional dynamics can provide a mechanistic understanding of brain disorders. Here, we review the foundations of multiscale brain network models and their applications in neuropsychiatric diseases. We first describe the basic elements of a multiscale brain network model, including network connections, dynamics of regional neuronal populations, and model fitting to different metrics of fMRI. Secondly, we draw comparisons between multiscale brain network models and other large-scale brain models. We then survey applications of multiscale brain network models in understanding the underlying mechanisms of brain disorders such as Parkinson's disease, Alzheimer's disease, and Schizophrenia. Finally, we discuss the limitations of current multiscale brain network models and potential directions for future model development. We argue that multiscale brain network models are more comprehensive than traditional single-modality brain networks and constitute a powerful tool for exploring the neuronal mechanisms underlying brain disorders measured by neuroimaging.

Graphical Abstract

1. Introduction

The causes of many brain disorders are extremely complicated and still under debate. It is very difficult to identify the mechanisms underlying the symptoms of brain disorders through experimental studies alone. Consequently, the demand for mathematical and computational models is increasing. Computational models can precisely replicate experimental results and test clinical hypotheses, and they have become indispensable in neuroscience [1].
The human brain is an extraordinarily complex network consisting of billions of interconnected neurons; brain functions (e.g., cognition, memory, and perception) are achieved through interactions of neurons across multiple temporal and spatial scales. Accordingly, brain modeling ranges from the microscale (single neurons) through the mesoscale (groups of neurons) to the macroscale (large-scale brain networks). For example, models of single neurons include the classical Hodgkin-Huxley model and simplified variants such as the FitzHugh-Nagumo and Morris-Lecar models [2,3,4,5]. Models of neuronal populations differ in their complexity. One approach rests on detailed biophysical properties; for example, Terman and Rubin developed a conductance-based computational network model of the basal ganglia based on its physical anatomy [6]. Due to its physiological plausibility, this basal ganglia network has been used in many studies to explore the emergence of correlated oscillatory activity in the subthalamic circuit after the destruction of dopaminergic neurons, or to optimize deep brain stimulation in Parkinson's disease [7,8,9,10]. Such models provide rational analyses of complex behaviors but are constrained by their high dimensionality, which renders network dynamics intractable [11]. To overcome this problem, another modeling approach uses averaged values of neuronal activity, such as the mean membrane potential or the population firing rate, to describe collective neural behavior. These simplified models include the Fokker-Planck equation, the neural mass model, the Wilson-Cowan model, the Jansen-Rit model, and the Kuramoto oscillator [12,13,14,15]. At the macroscale, early research suggested that certain brain functions were associated with specific brain regions, whereas a growing body of evidence indicates that brain functions emerge from interactions between spatially distributed areas of large-scale networks. Whole-brain network models may therefore lead to a better understanding of brain functions and dysfunctions [16,17]. Previous whole-brain models focused on topological properties of networks extracted from a single modality and were promoted by network neuroscience and graph theory [18,19]. These descriptive models are quite useful in identifying differences between normal and pathological brain networks and have served as a powerful tool for identifying biomarkers to classify and predict brain disorders. However, they fail to explain the origins of these differences from a dynamical point of view. Motivated by the hierarchical structure of the brain, multiscale brain network models (BNMs), which combine nonlinear dynamical theory with network neuroscience, have attracted increasing attention.
Multiscale BNMs typically use the structural connectivity (SC) derived from diffusion-weighted MRI or T1-weighted MRI as network edges and mathematical equations simulating the collective behavior of neuronal populations as network nodes. The population dynamics of the nodes, coupled through the SC, are then fitted to functional dynamics extracted from EEG, MEG, or fMRI to obtain plausible model parameters. Within this framework, multiscale BNMs can not only estimate the relationship between structural and functional connectivity over time or across disease states, but can also infer the microscopic neuronal underpinnings supporting macroscopic brain dynamics. These models have tremendous potential to provide mechanistic explanations and to identify feasible biomarkers for understanding and predicting brain disorders. There is currently no gold standard for constructing an optimal multiscale BNM; the specific model should be task dependent and built prudently. A deeper understanding of the methodological considerations for implementing a multiscale BNM may be helpful when confronting specific tasks. Thus, in this paper, we aim to provide a deeper understanding of multiscale BNMs by introducing methodological considerations in the modeling pipeline and comparing them with other large-scale brain models. Specifically, during the modeling process, fitting the model to empirical data is a critical step toward understanding the mechanisms underlying empirical observations; we therefore introduce machine learning methods for model fitting in detail. We also summarize applications of multiscale BNMs in brain disorders, including Parkinson's disease, Alzheimer's disease, and Schizophrenia, to provide a reference to the literature. Finally, we highlight current challenges in building multiscale BNMs and discuss future directions for revealing the neuronal dynamic mechanisms underlying brain functional dynamics.

2. Methods for Implementing Multiscale BNMs

Multiscale BNMs are based on the theory of network neuroscience, which offers a mathematical framework to describe brain structure and function. Within this framework, the dynamics of each brain region are modeled by biologically inspired mathematical equations, and connections between regions are often estimated from diffusion-weighted MRI or T1-weighted MRI. With the rapid development of advanced neuroimaging techniques, validation of BNMs is in most cases achieved by comparing the model output with empirical resting-state fMRI in terms of functional connectivity (FC). In recent years, an increasing number of studies have suggested that dynamical FC may be more suitable for capturing higher-order dynamics in neuroimaging data [20,21]. The target of model fitting has accordingly moved from averaged FC to dynamical FC, and even to the direct fMRI time-series. In the following, the modeling pipeline for implementing a multiscale BNM is summarized in detail, including estimating structural network connections, choosing models for simulating node dynamics, and fitting models to different metrics of fMRI.

2.1. Estimation of Structural Network Connections

The anatomical (structural) connectivity of the brain is the physical foundation supporting complex brain functional dynamics, and it plays an important role in determining how networks transfer information between distinct brain regions [22]. The SC is quantified through different methods in nonhuman primates and the human cortex [23,24,25,26,27], but all of them involve the choice of a brain parcellation and the inference of connections linking the nodes.

2.1.1. Brain Parcellation

A brain parcellation scheme is used to define brain regions-of-interest (ROIs, i.e., nodes). Current ROIs (nodes) are typically derived either through a sophisticated processing pipeline or through a pre-defined atlas, such as the AAL (Automated Anatomical Labeling) atlas, the Hagmann atlas, or the Desikan-Killiany (DK) atlas, which divide the image data into distinct regions. Typically, the AAL atlas divides the brain into 116 regions, while the Desikan-Killiany and Hagmann atlases divide the brain into 66 regions. Although these atlases are widely used, their appropriateness is seldom considered carefully [28,29]. Brain parcellation is a methodologically challenging and active research field, and there is currently no consensus on which parcellation schemes are optimal for defining the ROIs [30,31,32]. More detail about atlas classification and evolution can be found in [33]. As a parcellation scheme may strongly impact the topological properties of brain networks, it should be chosen carefully.

2.1.2. Inferring of Connections

Connections, or edges, can be estimated from diffusion-weighted MRI (DWI), diffusion tensor imaging (DTI), or T1-weighted MRI using professional software packages. For DWI/DTI, the edges are estimated from the number, density, or diffusion properties of white matter tracts, while for T1-weighted MRI, the edges represent cortical thickness, gray matter volume, or surface area. Regardless of the method, the SC is usually represented by an undirected 2D adjacency matrix [34]. The directionality of edges can alternatively be addressed by estimating the effective connectivity [35,36,37].
In models, the elements of the SC matrix can be binary or weighted. Typically, a binary connection is obtained by selecting a threshold such that any edge smaller than the threshold is set to zero and any edge at or above it is set to one. Weighted connections may represent the size, density, or coherence of anatomical connections. Weighted connections are now increasingly adopted because they carry plausible biological information, but extracting topological measures from them poses many challenges. One can refer to [38] for more information about the challenges of and opportunities in mapping SC with diffusion-weighted MRI.
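As a concrete illustration of the binary versus weighted choice described above, the sketch below thresholds a weighted SC matrix with NumPy. The threshold value, the symmetrization, and the normalization step are illustrative assumptions rather than a prescribed pipeline.

```python
import numpy as np

def build_sc(weights: np.ndarray, threshold: float = 0.1, binarize: bool = False) -> np.ndarray:
    """Turn a raw (non-negative) connectome into a model-ready SC matrix.

    weights   -- N x N matrix of streamline counts / densities from tractography
    threshold -- edges below this value (after normalization) are removed
    binarize  -- if True, return a 0/1 adjacency matrix; otherwise keep weights
    """
    sc = weights.astype(float).copy()
    np.fill_diagonal(sc, 0.0)            # no self-connections
    sc = (sc + sc.T) / 2.0               # enforce symmetry (undirected SC)
    if sc.max() > 0:
        sc /= sc.max()                   # normalize weights to [0, 1]
    sc[sc < threshold] = 0.0             # remove weak, possibly spurious edges
    if binarize:
        sc = (sc > 0).astype(float)      # binary adjacency
    return sc

# Example: a random surrogate connectome with 66 regions (Desikan-Killiany-like size)
rng = np.random.default_rng(0)
raw = rng.random((66, 66))
sc_weighted = build_sc(raw, threshold=0.1, binarize=False)
sc_binary = build_sc(raw, threshold=0.1, binarize=True)
```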

2.2. Selection of Node Dynamics

Each node in a multiscale BNM represents a parceled brain region consisting of thousands of neurons, and its dynamics are modeled using differential equations that specify the temporal evolution of the population activity [32]. Forrester et al. showed that local node dynamics can have important effects on large-scale brain states [39]. For instance, some studies suggest that simulated FC strongly resembles SC when regional activity is close to a phase transition [40] and that FC agrees closely with SC when regional activity is near a Hopf bifurcation [41]. Node models differ in their level of detail and in how their parameters correspond to neurophysiology, and they can broadly be divided into biophysical neural mass models and simplified abstract models. Some commonly used models of node dynamics are given in Figure 1.

2.2.1. Computational Models of Node Dynamics

  • Biophysical neural mass models
Many disease-related states result from neurobiological alterations. To reveal the underlying mechanisms, it is advantageous for node models to capture biophysical fidelity, interpretable parameters, and dynamical regimes that correspond to in vivo neural activity [37]. Simulating each node as a large group of spiking neurons is computationally expensive, so several early BNM studies used reduced excitable neural systems, such as conductance-based neuronal models [42,43] or the FitzHugh-Nagumo neuronal model [44]. Another way to model node dynamics is to adopt neural mass models (NMMs), which describe the response of a neuronal system to its inputs. NMMs track aggregate synaptic activity and firing rates in populations of spiking neurons, which makes them computationally tractable and matched to the coarse spatiotemporal resolution of neuroimaging modalities [16,37,45]. NMMs incorporate biophysically constrained dynamics and recurrent connection strengths between excitatory pyramidal neurons and inhibitory interneurons, which makes them well suited for linking synaptic-level disruptions to emergent system-level neural activity [46,47]. Empirically informed NMMs include the Wilson-Cowan and Jansen-Rit models [48,49]. For instance, these models can be used to infer the effects of excitatory-inhibitory balance or neuronal oscillations on large-scale brain dynamics. Although biophysical models offer interpretable physiological meaning, they are hampered by their complex mathematical equations. Taking the typical Jansen-Rit model as an example, each neuronal population is described by two state variables: the average membrane potential y(t) and the average firing rate x(t). The average membrane potential is transformed into the average firing rate by a nonlinear pre-synaptic (sigmoid) function σ(y) [17]:
\sigma(y(t)) = \frac{2 e_0}{1 + e^{r (v_0 - y(t))}}
where e0 denotes half of the maximum firing rate, v0 is the membrane potential at which half of the maximum firing rate is reached, and r is the slope of the pre-synaptic function at v0. The membrane potentials are modeled by a convolution with an alpha kernel, which can be expressed as a second-order linear differential equation [17]:
\ddot{y}(t) = A a\, x(t) - 2 a\, \dot{y}(t) - a^2 y(t)
where x(t) is the output of Equation (1), i.e., the average firing rate, A represents the average excitatory synaptic gain, and a is the time constant of the excitatory post-synaptic potential (PSP). For convenience, this second-order differential equation can be rewritten equivalently as [17]:
\dot{y}(t) = z(t), \qquad \dot{z}(t) = A a\, x(t) - 2 a\, z(t) - a^2 y(t)
As the Jansen-Rit model characterizes the dynamics of three populations, y0 denotes the average membrane potential of the pyramidal cells, while y1 and y2 denote those of the excitatory and inhibitory interneurons, respectively. The dynamic interactions between these three populations can then be expressed using six coupled ordinary differential equations composed of the two operators defined in Equation (3) [17]:
\dot{y}_0(t) = y_3(t)
\dot{y}_3(t) = A a\, \sigma(y_1(t) - y_2(t)) - 2 a\, y_3(t) - a^2 y_0(t)
\dot{y}_1(t) = y_4(t)
\dot{y}_4(t) = A a\, [\, p(t) + C_2 \sigma(C_1 y_0(t)) \,] - 2 a\, y_4(t) - a^2 y_1(t)
\dot{y}_2(t) = y_5(t)
\dot{y}_5(t) = B b\, C_4 \sigma(C_3 y_0(t)) - 2 b\, y_5(t) - b^2 y_2(t)
where B is the average inhibitory synaptic gain, b is the time constant of the inhibitory PSP, Cn denotes the average number of synaptic contacts of the n-th connection, and p(t) is the excitatory input noise (Gaussian white noise).
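To make the six coupled equations above concrete, the following sketch integrates a single uncoupled Jansen-Rit node with a simple Euler scheme. Parameter values follow commonly cited Jansen-Rit defaults; the integration step, noise range, and simulation length are illustrative assumptions, not settings used in any particular study reviewed here.

```python
import numpy as np

# Commonly cited Jansen-Rit parameters (assumed defaults; units: mV and 1/s)
A, a = 3.25, 100.0          # average excitatory synaptic gain and PSP rate constant
B, b = 22.0, 50.0           # average inhibitory synaptic gain and PSP rate constant
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C   # average numbers of synaptic contacts
e0, v0, r = 2.5, 6.0, 0.56  # sigmoid: half max rate, threshold potential, slope

def sigma(v):
    """Pre-synaptic sigmoid, Equation (1): membrane potential -> firing rate."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def jansen_rit_deriv(y, p):
    """Right-hand side of the six coupled ODEs, Equation (4).

    State y = [y0, y1, y2, y3, y4, y5]; p is the excitatory input noise."""
    y0, y1, y2, y3, y4, y5 = y
    return np.array([
        y3,
        y4,
        y5,
        A * a * sigma(y1 - y2) - 2 * a * y3 - a**2 * y0,
        A * a * (p + C2 * sigma(C1 * y0)) - 2 * a * y4 - a**2 * y1,
        B * b * C4 * sigma(C3 * y0) - 2 * b * y5 - b**2 * y2,
    ])

# Euler integration of a single uncoupled node (illustrative settings)
dt, T = 1e-4, 2.0                      # 0.1 ms step, 2 s of activity
rng = np.random.default_rng(0)
y = np.zeros(6)
trace = []
for _ in range(int(T / dt)):
    p = rng.uniform(120.0, 320.0)      # pulse-density input noise (assumed range)
    y = y + dt * jansen_rit_deriv(y, p)
    trace.append(y[1] - y[2])          # EEG-like output: net PSP onto pyramidal cells
trace = np.array(trace)
```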
  • Simplified abstract models
The parameters of abstract models, and even their dimensionality, do not have a direct biophysical meaning. Commonly used abstract models include phase oscillators, limit-cycle oscillators [50,51], and normal form Hopf bifurcation (Andronov-Hopf) models [52,53,54,55]. The choice of a simplified model depends on the research interest. Oscillator models are usually used to simulate synchronous brain interactions on the basis of SC and to reveal the effects of anatomical connections on functional dynamics [56]. A typical oscillator model is the Kuramoto model, which uses a single phase variable to characterize node dynamics that evolve according to a natural frequency and a sinusoidal interaction term [57]. Bifurcation models, in contrast, are often adopted to describe the transition from asynchronous to synchronous behavior (or vice versa) and to investigate varied brain states as a function of a bifurcation parameter [58]. A commonly used bifurcation model is the Stuart-Landau equation; without coupling, the dynamics of node j can be modeled as [58]:
\frac{dz_j}{dt} = (a + i \omega_j)\, z_j - z_j\, |z_j|^2
where zj is a complex-valued variable, a is the bifurcation parameter, and ωj is the intrinsic oscillation frequency of node j.
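A minimal sketch of how such node models are coupled through SC in a whole-brain simulation is shown below: each region follows the Stuart-Landau equation above plus a diffusive coupling term scaled by a global coupling strength. The coupling form, parameter values, and Euler-Maruyama integration are assumptions for illustration; published models differ in these details.

```python
import numpy as np

def simulate_hopf(sc, omega, a=-0.02, G=0.5, dt=0.01, n_steps=20000, noise=0.02, seed=0):
    """Coupled Stuart-Landau (Hopf normal form) nodes on a structural connectome.

    sc    -- N x N structural connectivity (weights)
    omega -- length-N array of intrinsic angular frequencies (rad/s)
    a     -- bifurcation parameter (a < 0: damped, a > 0: sustained oscillations)
    G     -- global coupling strength
    Returns the real part of the node activity, shape (n_steps, N).
    """
    rng = np.random.default_rng(seed)
    n = sc.shape[0]
    z = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    out = np.empty((n_steps, n))
    for t in range(n_steps):
        local = (a + 1j * omega) * z - z * np.abs(z) ** 2      # Equation (5)
        coupling = G * (sc @ z - sc.sum(axis=1) * z)           # diffusive coupling (assumed form)
        z = z + dt * (local + coupling) + noise * np.sqrt(dt) * (
            rng.standard_normal(n) + 1j * rng.standard_normal(n))
        out[t] = z.real
    return out

# Example with a random surrogate SC and intrinsic frequencies in the 0.01-0.1 Hz band
rng = np.random.default_rng(1)
sc = rng.random((66, 66)); sc = (sc + sc.T) / 2; np.fill_diagonal(sc, 0)
omega = 2 * np.pi * rng.uniform(0.01, 0.1, 66)
activity = simulate_hopf(sc, omega)
```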

2.2.2. Heterogeneity between Node Dynamics

At present, most BNMs treat nodes as homogeneous and simulate all node dynamics with identical parameters. However, areal parcellations were historically based on regional variations in cytoarchitecture and myeloarchitecture [59], which suggests that nodes should be heterogeneous. Studies have shown that incorporating heterogeneity of local node dynamics can improve the similarity between simulations and empirical data. For example, Deco et al. assigned multiple natural frequencies to nodes to make a BNM better conform to empirical resting-state MEG networks [60]. Demirtas et al. developed a large-scale cortical circuit model incorporating hierarchical heterogeneity of local microcircuit properties and fitted it to multimodal human neuroimaging data [61].

2.2.3. Dynamical System Theory of Node Dynamics

Complex large-scale brain dynamics are rooted in the intrinsic dynamical properties of neuronal populations. Here we enumerate several concepts of node dynamics that support large-scale spatiotemporal activity. Nodes are usually characterized by an averaged firing rate or membrane potential, which constitute the states of the nodes in dynamical-systems terms. The states of all nodes span a high-dimensional phase space in which each combination of the system's states is a point, and the points over an entire time interval constitute the trajectories of the system. Depending on the properties of the trajectories in steady state, the system can be placed in different dynamical regimes. Some important regimes and nonlinear concepts are stated in the following:
  • Multistability: In some systems, two or more stable states coexist for the same parameter. Such a system is called a multistable system. The multistability of node dynamics can be used to explain the slow fluctuations underlying dynamical FC. Specifically, if there are only two stable states (a fixed-point and a limit cycle), the system can be assumed to be bistable.
  • Metastability: In a metastable regime, no stable states exist; instead, the system moves along a sequence of only transiently stable states.
  • Bifurcation: The system changes qualitatively from one stable state to another as a parameter of the system is varied. In terms of the stability after the bifurcation, a bifurcation is classified as supercritical when stable objects appear and as subcritical when unstable objects emerge.
  • Fixed-point: It refers to a steady state in a system or a stable equilibrium.
  • Limit-cycle: It is a simple closed orbit, which means the system is periodically oscillating.

2.3. Model Fitting

The choice of node dynamics and estimation of connections constitute the skeleton of BNMs and further steps are required to validate the specific BNMs to find optimal working parameters. Validation of multiscale BNMs can relate the observed brain activity to the dynamical repertoire of the computational model, which leads to a better mechanistic interpretation of the observations. For example, Schirner et al. inferred multiscale neurophysiological mechanisms underlying neuroimaging signals by using the hybrid brain network model to simulate a resting-state fMRI [62]. Won et al. studied the emergence of metastable dynamics in functional brain organization via spontaneous fMRI signals and BNMs [63].
Typically, if a BNM is used to simulate EEG, biophysically informed forward models are required to map currents in the folded cortex to fluctuating extradural EEG potentials. Likewise, if the BNM is used to model fMRI, hemodynamic response functions are used to filter the simulated time series into BOLD signals. Studies have indicated that resting-state fMRI can yield new perspectives when investigating large cohorts and patient populations, as well as when establishing data-sharing in the field of neuroimaging. Thus, the application of BNMs to resting-state fMRI data has become a very active area. Fitting BNMs to empirical fMRI data is a process of identifying optimal working points or model parameters. Studies show that, with properly chosen parameters, maximal correspondence between simulated and empirical data emerges when the model is close to a critical point at which the system may undergo a bifurcation [30]. A typical parameter of BNMs is the global coupling strength, which plays a critical role in shaping network dynamics [64]. A global decrease in the coupling strength may shift the dynamical working point of the model; as presented in [48], the best fit to empirical resting FC matrices is obtained when the BNM is at a subcritical working point.
In the context of BNMs, the neural activity of each node generates time-series that are fitted to the empirical fMRI BOLD signals, and the goodness of fit is assessed by minimizing the dissimilarity between model-derived metrics and metrics extracted from the empirical BOLD signals. Commonly used measures include time-averaged FC, dynamic FC, and the direct fMRI BOLD time-series, which are described in detail in the following. Other measures have also proven effective, such as whole-brain modularity, instantaneous phase, and macroscopic coherence [65].

2.3.1. Time-Averaged Functional Connectivity

Time-averaged FC (static FC) is the simplest metric for characterizing the correspondence between simulated and empirical fMRI data. FC is the statistical correlation between different brain regions over the entire time interval and may be symmetrical (e.g., Pearson's correlation coefficient) or asymmetrical (e.g., partial correlation coefficient) [21]. The degree of agreement between simulated and empirical FC is often measured by the Euclidean distance [53,66]. A pipeline for fitting time-averaged FC is given in Figure 2. In the model-fitting process, grid search is the most common method for choosing optimal model parameters. Recently, Wischnewski et al. assessed the performance of four optimization schemes: the Nelder-Mead algorithm, Particle Swarm Optimization, the Covariance Matrix Adaptation Evolution Strategy, and Bayesian Optimization, and proposed the latter two as efficient alternatives to a high-dimensional grid search [67].
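The sketch below illustrates the static-FC fitting loop just described: for each candidate global coupling value, a simulation is run, static FC is computed as Pearson correlations, and the agreement with empirical FC is scored. The reuse of the Hopf simulator sketched in Section 2.2.1, the coupling grid, and the correlation-based fit score are illustrative assumptions, not a prescribed procedure.

```python
import numpy as np

def static_fc(ts: np.ndarray) -> np.ndarray:
    """Time-averaged FC: Pearson correlation between regional time-series (time x regions)."""
    return np.corrcoef(ts.T)

def fc_fit(fc_sim: np.ndarray, fc_emp: np.ndarray) -> float:
    """Fit score: correlation between the upper triangles of simulated and empirical FC."""
    iu = np.triu_indices_from(fc_emp, k=1)
    return np.corrcoef(fc_sim[iu], fc_emp[iu])[0, 1]

def grid_search_coupling(sc, omega, fc_emp, couplings=np.linspace(0.0, 1.0, 21)):
    """Exhaustive grid search over the global coupling strength G."""
    scores = []
    for G in couplings:
        ts = simulate_hopf(sc, omega, G=G)   # node-dynamics simulator sketched in Section 2.2.1
        scores.append(fc_fit(static_fc(ts), fc_emp))
    best = int(np.argmax(scores))
    return couplings[best], np.array(scores)
```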
Recent results show that model fitting is dependent on types and states of models. For oscillatory models, the best fit between simulated and empirical time-averaged FC is observed at the dynamic working point where metastability is maximized [44,68,69]. For asynchronous models, the simulated FC best matches the empirically observed FC when the system is at the operating point where multistability is around a spontaneous state [19,42,70].

2.3.2. Dynamic Functional Connectivity

Many works have highlighted the nonstationarities that depict time-dependent FC dynamics. Analysis of FC data has moved beyond time-averaged to dynamic methods that assume the coordination of brain activity changes over time [20,71,72]. Dynamic FC reflects time variations of correlations between pairs of brain regions and is characterized by the ongoing switching of network patterns [73,74]. The Kolmogorov–Smirnov distance is often used to quantify the difference between simulated dynamic FC and empirical dynamical FC [65,75].
The sliding-window method is the most widely used technique for computing time-varying FC [73,76], as illustrated in Figure 3. The method has some limitations, as the observed neural processes are highly dependent on the pre-defined temporal width of the window and the length of the step: a long window limits the visibility of fast timescales, while a short window provides insufficient data for a reliable network estimate. Thus, some studies use the hidden Markov model (HMM) to describe brain activity as a dynamic sequence of discrete brain states [77]; the BNM fitting can then be achieved by comparing the brain states identified by the HMM.
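A minimal sketch of the sliding-window dynamic FC computation and the Kolmogorov-Smirnov comparison mentioned above is given below, using SciPy's two-sample KS test on the pooled dynamic FC values. The window length, step size, and the choice to compare pooled FC distributions are illustrative assumptions; other studies compare FC-dynamics matrices instead.

```python
import numpy as np
from scipy.stats import ks_2samp

def sliding_window_fc(ts: np.ndarray, win: int = 60, step: int = 10) -> np.ndarray:
    """Dynamic FC: stack of windowed correlation matrices from a (time x regions) series."""
    n_t, n_r = ts.shape
    mats = []
    for start in range(0, n_t - win + 1, step):
        mats.append(np.corrcoef(ts[start:start + win].T))
    return np.stack(mats)                      # shape: (n_windows, regions, regions)

def dfc_ks_distance(ts_sim: np.ndarray, ts_emp: np.ndarray) -> float:
    """Kolmogorov-Smirnov distance between simulated and empirical dynamic FC values."""
    iu = np.triu_indices(ts_sim.shape[1], k=1)
    dfc_sim = sliding_window_fc(ts_sim)[:, iu[0], iu[1]].ravel()
    dfc_emp = sliding_window_fc(ts_emp)[:, iu[0], iu[1]].ravel()
    return ks_2samp(dfc_sim, dfc_emp).statistic   # smaller = closer distributions
```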

2.3.3. Direct fMRI BOLD Time-Series

Fitting simulated signals directly to the empirical fMRI time-series is useful for developing more realistic simulations of whole-brain activity. The neural activity of each node of a BNM can be converted into simulated BOLD signals using the Balloon-Windkessel hemodynamic model, enabling direct comparison with the fMRI time-series [78,79]. Schirner et al. used exhaustive searches to tune three global parameters to produce the highest fit between the empirical region-averaged fMRI time-series and the corresponding simulated time-series [62]. In high-dimensional models, grid search is time-consuming, so intelligent algorithms have been introduced to improve model fitting.
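To illustrate the hemodynamic forward step, a compact Balloon-Windkessel sketch is shown below: a neural input drives the vasodilatory signal, blood inflow, blood volume, and deoxyhemoglobin content, from which a BOLD signal is read out. Parameter values follow commonly cited defaults from the hemodynamic-modeling literature and should be treated as assumptions rather than the settings of any specific study.

```python
import numpy as np

def balloon_windkessel(z: np.ndarray, dt: float = 0.01,
                       kappa=0.65, gamma=0.41, tau=0.98, alpha=0.32, rho=0.34, V0=0.02):
    """Map a neural activity time-series z(t) to a BOLD signal (single region).

    States: s (vasodilatory signal), f (blood inflow), v (blood volume), q (deoxyhemoglobin).
    Parameter defaults are commonly cited values (assumed here, not study-specific)."""
    s, f, v, q = 0.0, 1.0, 1.0, 1.0
    k1, k2, k3 = 7.0 * rho, 2.0, 2.0 * rho - 0.2
    bold = np.empty(len(z), dtype=float)
    for t, zt in enumerate(z):
        ds = zt - kappa * s - gamma * (f - 1.0)
        df = s
        dv = (f - v ** (1.0 / alpha)) / tau
        dq = (f * (1.0 - (1.0 - rho) ** (1.0 / f)) / rho - v ** (1.0 / alpha) * q / v) / tau
        s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
        bold[t] = V0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v))
    return bold
```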
BNMs have performed well in fitting time-averaged FC, but they are less consistent at reproducing transient dynamic properties occurring at shorter timescales, because BNMs are not synchronized with the actual measurements. Kashyap et al. therefore presented a brain network autoencoder framework in which the BNM is combined with a long short-term memory neural network (LSTM). As shown in Figure 4, the measurements are passed through the LSTM and constrained to a smaller space defined by the BNM, and the next time point is reconstructed through the BNM forward equation. The mismatch between the reconstructed next time point and the empirical signal is then fed back into the machine learning system to fit the model to the fMRI time-series [80].
This machine learning architecture allowed the BNM to generate more realistic brain data than the traditional BNM, but it also introduced unknowns into the dynamics and made it impossible to evaluate the BNM independently. Later, Kashyap et al. presented an alternative deep learning method based on Neural Ordinary Differential Equations (Neural ODEs), which is similar to the method given in Figure 3 [81,82]. The Neural ODE method uses a sequence of observations and a given dynamical system to estimate the initial conditions of the BNM, so that the BNM's simulated dynamics stay closely synchronized with the trajectory underlying the observed data. These deep learning methods unquestionably provide an alternative way of fitting BNMs to empirical data, yet they also have limitations, such as high computational complexity and a lack of robustness. Thus, more time-efficient and robust algorithms are preferred.

3. Comparison between Multiscale BNMs and Other BNMs

3.1. Comparison between Multiscale BNMs and Descriptive BNMs

3.1.1. Descriptive BNMs

The human brain is an intricate nonlinear system containing many different kinds of connections. From a microscopic view, connections indicate synaptic couplings between neurons; from a macroscopic view, they indicate anatomical connectivity or interactions between distinct brain regions. Connections across these different scales and modalities support the complex cognitive functions of the brain. Early studies focused on finding the key interconnected features of the brain and identifying how these features differ between people with different cognitive functions. SC and FC were the primary descriptive tools for representing brain networks: SC networks extracted from structural MRI data depict the physical connections between brain regions, and FC networks computed as correlations between the time-series of paired brain regions characterize the temporal coincidence of spatially distributed regions. As a variant of FC, effective connectivity (EC) can also be used to describe the brain network. For the analysis of descriptive brain networks, a mathematical tool named graph theory was introduced, which consequently boosted the rapid development of network neuroscience. In the graph-theory regime, metrics such as degree distribution, clustering coefficient, path length, hubs, and community structure are typically used to distinguish people of different ages, disease states, or drug conditions. Detailed definitions of these metrics can be found in [83,84,85].

3.1.2. Comparison

Graph theory-based descriptive networks have been invaluable in identifying people with cognitive differences, but they have limitations: they are constructed from single-modality data, cannot link SC with FC directly, lack a temporal description, and cannot capture the dynamical properties of the brain. Multiscale BNMs, in contrast, can integrate multimodal data from structural and functional neuroimaging and exploit the relationship between SC and FC directly. Furthermore, multiscale BNMs use the dynamics of neuronal populations to model the network nodes, thereby bridging the gap between neuronal activity, such as the averaged firing rate or membrane potential, and macroscopic brain dynamics, such as FC or dynamical FC. Compared with descriptive network models, multiscale BNMs can capture the temporal dynamics and multiscale properties of the brain and can be used to reveal the neuronal mechanisms underlying brain functional dynamics through nonlinear dynamical theory. A detailed comparison between multiscale BNMs and descriptive BNMs can be found in Table 1.

3.2. Comparison between Multiscale BNMs and Dynamic Causal Modeling

3.2.1. Dynamical Causal Modeling

SC and FC afford useful insights into the structure and dynamics of the brain, but they are undirected. Another connectivity measure, effective connectivity (EC), refers to the directed interactions among brain regions and can describe how the activity of local network nodes is transformed into macroscopic observations [86]. Dynamic causal modeling (DCM) is an analysis method that infers EC at the single-subject level using standard variational Laplace procedures [87,88]. A typical DCM contains a forward model that depicts the dynamics of interacting neuronal populations and a measurement model that transforms the neuronal activity into measurable observations such as fMRI, MEG, or EEG [89]. The basic idea behind DCM is that neural activity propagates across brain networks in an input-state-output fashion, where causal interactions are mediated by unobservable (hidden) neuronal dynamics [90]. Thus, DCM can be thought of as a generative framework for inferring hidden neuronal states from observed neuroimaging data. Taking fMRI as an example, DCM describes the dynamics of neuronal activity as a function of the EC between neuronal ensembles in the form:
\frac{dx}{dt} = \Big( A + \sum_{j=1}^{m} u_j B^{(j)} \Big) x + C u
where A denotes the network connections in the absence of input, B(j) indicates how the j-th input uj perturbs the endogenous connections, and C represents how the inputs directly influence neuronal activity. The above model is coupled to a hemodynamic forward model to simulate BOLD time-series [91,92]. EC can then be inferred by inverting this generative model through a variational Bayesian approach under the Laplace approximation [93].
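A small sketch of the bilinear neural state equation above, evaluated with NumPy, is given below; the matrices and inputs are random placeholders for illustration rather than estimated DCM parameters.

```python
import numpy as np

def dcm_bilinear_deriv(x, u, A, B, C):
    """Bilinear DCM neural state equation, Equation (6).

    x -- state vector (n regions), u -- input vector (m inputs)
    A -- n x n intrinsic connectivity, B -- m x n x n modulatory matrices, C -- n x m input weights."""
    effective = A + np.tensordot(u, B, axes=1)   # A + sum_j u_j * B^(j)
    return effective @ x + C @ u

# Toy example with 3 regions and 1 input (placeholder values)
rng = np.random.default_rng(0)
n, m = 3, 1
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = 0.05 * rng.standard_normal((m, n, n))
C = rng.standard_normal((n, m))
x, u = np.zeros(n), np.array([1.0])
dx = dcm_bilinear_deriv(x, u, A, B, C)
```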
The primary application of DCM to fMRI has been restricted to small networks (on the order of 10 regions) and cannot scale to large networks for several reasons. First, estimating the likelihood function of DCM requires integrating Equation (6) together with the hemodynamic model, which is computationally costly. Second, the classical DCM treats the data as a region-by-time-series vector, and the error covariance matrix can become extremely large as the number of regions increases. Modified DCMs have therefore been presented, such as regression DCM (rDCM). rDCM converts the numerically costly parameter estimation of the differential equations into an efficiently solvable Bayesian linear regression in the frequency domain, enabling computationally highly efficient model inversion. rDCM can estimate the EC of networks with more than 66 brain regions; later, sparse rDCM was presented, which extends the estimation to large, brain-wide numbers of regions [94,95,96].

3.2.2. Comparison

Although DCM and its variants provide valuable analyses for human connectomics, they are inherently linear and have not incorporated more realistic biophysical models, which limits their application to fundamental neuroscience problems. Multiscale BNMs share some properties with DCM: both embody a generative form to characterize neuronal dynamics, need a forward model to transform the neuronal activity into a BOLD signal, and require an optimization scheme to estimate network connections. However, DCMs use a relatively simple bilinear neuronal model, while BNMs utilize more realistic biophysical models. BNMs often use SC as a proxy for the backbone of EC, whereas DCMs do not necessarily need SC. In addition, DCMs are often limited to smaller networks, whereas BNMs are used to simulate large-scale whole-brain networks [89]. It should be noted that the latest developments for DCMs and BNMs, such as neural mass model-based DCM and the multivariate Ornstein-Uhlenbeck-driven BNM, are driving a convergence between DCMs and BNMs [89,97,98]. A detailed comparison between multiscale BNMs and DCM can be found in Table 1.

4. Applications of Multiscale BNMs in Understanding Neuropsychiatric Disorders

Advances in network science and graph theory have boosted our understanding of the topology and function of the brain. As an effective method for modeling the structure and function of the brain, multiscale BNMs have raised great expectations for understanding brain disorders at a causal, mechanistic level [19]. Many neuroimaging studies have reported altered structural and functional connectivity in brain disorders, but the underlying neuronal mechanisms remain unclear. Multiscale BNMs link mesoscopic neuronal populations to macroscopic functional dynamics through SC and may serve as an effective tool for providing mechanistic explanations of the neuronal origins of brain dysfunction. Detailed applications of multiscale BNMs in brain disorders can be found in Table 2.

4.1. Parkinson’s Disease

Parkinson's disease (PD) is the second most common neurodegenerative disorder in the elderly and is characterized by the degeneration of dopaminergic neurons in the substantia nigra pars compacta, with a resulting striatal dopaminergic deficiency [99]. The striatum is the largest structure of the basal ganglia, and dopaminergic deficiency in it causes dysfunction within the basal ganglia. Thus, earlier studies suggested that the pathophysiology of PD was associated with the local basal ganglia area. With the development of neuroimaging, several studies have shown that large-scale functional networks are also affected, leading to FC patterns that differ from those found in normal controls [100]. Van Hartevelt et al. used whole-brain models to reveal significant recovery of structural network connectivity resulting from deep brain stimulation that alleviates the symptoms of PD [101]. Saenger et al. uncovered the whole-brain dynamics of deep brain stimulation for PD through a multiscale BNM and offered important insights into its underlying large-scale effects [102]. In PD, there exist not only local abnormal neuronal oscillations in the basal ganglia but also large-scale brain dysfunction. Figuring out how local neuronal oscillations affect large-scale brain functional dynamics across different temporal and spatial scales may provide a pathological understanding of PD and yield precise biomarkers for PD classification and prediction.

4.2. Alzheimer’s Disease

Alzheimer's disease (AD) is the most prevalent form of dementia and has become a major concern in developed countries [53]. Various resting-state fMRI studies have shown altered FC in AD [103,104]. Multiscale neuronal oscillations are thought to be a potential mechanism underlying AD's pathology. Several multiscale BNMs have been developed to study the mechanisms of AD. For example, Demirtas et al. used a Hopf normal-form model for node dynamics to construct a multiscale BNM and explored the mechanisms underlying the monotonically decreasing whole-brain synchronization during disease progression, as well as the significantly decreased FC strengths in brain regions with high global connectivity [53]. Van Nifterick et al. explored whether neuronal hyperexcitability and inhibitory interneuron dysfunction explain large-scale magnetoencephalography activity using a computational network model of 78 interacting neural mass models, and their findings indicate that neuronal hyperactivity can lead to slow oscillations [105]. Beyond explaining mechanisms, whole-brain models are also used as a test platform for designing optimal network controllers for AD. In [106], a control-theoretic framework is presented, based on a multiscale BNM, for adjusting the exogenous input that reverts the pathological electroencephalographic activity in AD at minimal energetic cost. The BNM is constructed from an anatomical network coupled with Duffing oscillators; based on this high-dimensional nonlinear neural system, the state-dependent Riccati equation control method is then extended. By accounting for the nonlinearities in the BNM, the authors identified the top target locations for stimulation in AD and predicted that patients with low average shortest path lengths and high global efficiency in their anatomical networks were the most suitable candidates for stimulus propagation and success of the control task [106].

4.3. Schizophrenia

Schizophrenia is a group of brain disorders thought to involve large-scale dysconnectivity as well as alterations in local cortical excitation-inhibition physiology [107,108,109,110]. Computational models that bridge from the level of synapses and neurons to that of spatiotemporal dynamics in distributed brain systems are well suited to exploring this disease [37]. Cabral et al. combined SC matrices with asynchronous attractors in a multiscale model to examine how SC shapes functional dysconnectivity in Schizophrenia; their studies suggest that FC is shaped by more than SC and that synaptic disruptions may play a profound role in disease-related dysconnectivity [111]. Yang et al. studied the large-scale impact of alterations in the excitation-inhibition ratio of cortical circuits on resting-state fMRI biomarkers in Schizophrenia using a multiscale BNM, and found that increased effective connectivity strength at either the local or the long-range level produces an elevated excitation-inhibition ratio that can capture the elevated local and global neural variability observed in Schizophrenia [112,113,114,115]. Anticevic et al. used multiscale models to identify a role for glutamate in establishing large-scale functional patterns associated with Schizophrenia [116]. Zhang et al. used generative BNMs to identify biological mechanisms of altered structural brain connectivity and suggested that spatial constraints and local topological structure are two interrelated mechanisms contributing to altered connectomes in Schizophrenia [117].
Table 2. Applications of multiscale BNMs in some brain disorders.
References | Node Dynamics | Parcellation | Model Fitting | SC Data | Diseases
Saenger et al. [102] | Normal form of Hopf | AAL atlas, 90 regions | Static and dynamical FC, fMRI data | DTI | Parkinson
Van Nifterick et al. [105] | NMMs | AAL atlas, 78 regions | Power spectrum, MEG data | DTI | Alzheimer
Sanchez-Rodriguez et al. [106] | Duffing oscillator | DK atlas, 78 regions | Spectral measures, EEG data | DWI, T1-weighted MRIs | Alzheimer
Cabral et al. [111] | Mean-field approximation of IF models * | AAL atlas, 90 regions | Static FC, fMRI data | DTI | Schizophrenia
Yang et al. [112,113], Anticevic et al. [114], Cole et al. [115] | Mean-field models | 66 regions | Static FC, fMRI data | DWI | Schizophrenia
* IF models: Integrate-and-Fire models.

5. Discussion

5.1. Limitations

Multiscale BNMs have proven to be an effective tool for simulating brain dynamics and understanding the mechanisms of brain disorders. The modeling pipeline mainly includes estimating the structural matrices, selecting the node dynamics, and fitting the model to empirical data to find optimal model parameters. Thus far, there is no common rule for selecting a particular type of connectivity or mathematical equation when constructing multiscale models; the specific model depends on the problem at hand, the background of the researchers, and the types of data available [118]. Challenges therefore remain in constructing optimal multiscale BNMs for specific problems. Settling the brain parcellation before estimating the SC is of great importance, because the level of parcellation granularity may strongly impact the topological properties of the SC [27], and multiscale BNMs are sensitive to the underlying network structure [119]. As mentioned above, currently popular parcellations include the Desikan-Killiany and Hagmann atlases with 66 cortical regions and the automated anatomical labeling atlas with 116 regions [27,120,121]. Proix et al. suggested that, under certain assumptions, an atlas with approximately 140 brain regions may produce good agreement with empirical data [122]. However, the proper choice of parcellation depends on many factors, and the optimal parcellation of brain regions is not currently clear [123]. Fortunately, multiscale BNMs provide a test platform for comparing the effects of different parcellation atlases on simulated brain dynamics; thus, in future modeling studies, the effects of brain parcellation should be compared in advance.
Homogeneous node dynamics and coupling strengths between nodes are used in most multiscale BNMs, but studies have suggested that hierarchical heterogeneity of local circuit parameters improves the fit to empirical FC compared with homogeneous models or SC alone [37]. Heterogeneity in node dynamics can be introduced by assigning multiple oscillatory frequencies [32], whereas heterogeneity in SC is seldom explored. When constructing connectomes, a preceding tractography analysis is performed to indicate the presence or absence of inter-areal connections. Typically, this is done by selecting a threshold such that all edges smaller than the threshold are set to zero and the rest are set to one. However, determining an appropriate threshold is not straightforward and is often arbitrary, which makes such a binarized version an oversimplification incapable of reflecting the known heterogeneous distribution of fiber connection densities [124]. With advances in tractography techniques, such as tractogram filtering and microstructure-informed tractogram reconstruction, weighted connectomes are increasingly adopted, as they can encode more biological features and offer more biologically meaningful information [125,126,127,128,129]. On the other hand, one current framework, the dynamical neural model, can estimate the SC through direct parameterization: without prior SC information, the SC can be obtained by model inversion from the fMRI data. For example, Singh et al. presented the Mesoscale Individualized Neurodynamic (MINDy) modeling method to fit nonlinear dynamical neural models directly to fMRI data [130], and Li et al. used a genetic algorithm within a Multiscale Neural Model Inversion framework to infer the network connections [131]. Additionally, Sip et al. utilized amortized variational inference (variational autoencoders) to infer both the node dynamics and the spatial heterogeneity [132]. All of these methods provide new solutions for SC heterogeneity.
Most multiscale BNMs use noise as the node input. Motivated by experimental findings that the amplitude envelopes of alpha- and beta-band oscillations in electrophysiological signals exhibit correlation patterns similar to those of fMRI signals at slow time scales [133,134], Schirner et al. presented a hybrid multiscale BNM that injects source-level EEG activity into each node of the BNM [62]. Based on Schirner's hybrid BNM, Rabuffo et al. identified neuronal cascades as the neuronal underpinning of whole-brain resting-state functional dynamics [135]. The hybrid BNM bridges the gap between electrophysiology and neuroimaging, which is promising for understanding the neuronal mechanisms underlying fMRI dynamics. However, the current fusion of EEG and fMRI is rather simple, and several other ways of integrating EEG and fMRI exist; a more plausible fusion of EEG and fMRI should be introduced into multiscale BNMs [136,137,138].

5.2. Future Directions

Multiscale BNMs have shown enormous potential for exploring the neuronal mechanisms underlying large-scale brain functions. In neuroscience, understanding the principles of the brain is not the ultimate aim; rather, we aim to regulate the brain to enhance cognitive functions or to improve neuropsychiatric diseases. Thus, models are not only for understanding the information-processing mechanisms of the brain, but also for exploring optimal brain stimulation techniques, including deep brain stimulation, transcranial direct current stimulation, intracranial cortical stimulation, and transcranial magnetic stimulation [139,140,141,142]. To this end, multiscale BNMs should be combined with control theory to provide an in silico testbed for exploring and evaluating the efficacy of control strategies. In the context of control theory, the averaged firing rate or membrane potential of the BNM is treated as the state variables of the system, and the forward function (e.g., the hemodynamic response function) is treated as the measurement function that maps the state variables to the system output. The combination of control theory and neural networks produces network control theory; within this framework, the control properties of networks determine whether certain networks can be controlled, so metrics including average controllability and modal controllability should be estimated before control is applied, as sketched below [143,144,145,146,147]. We note, however, that these controllability metrics are based on the linearization of nonlinear systems, whereas BNMs are nonlinear systems whose controllability cannot be fully quantified in this way. In the future, BNMs should therefore be made more amenable to network control theory through emerging neural technologies and advances in numerical optimization [148], such as the system identification method presented in [149]. System identification methods have been successfully applied in areas where the fundamental mechanistic principles relating the variables are known [150,151]. In the brain, however, the high-dimensional measurements are potentially affected by hidden variables. Considering this, the combination of BNMs and system identification is likely to become a promising research area in the future.
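As an illustration of the controllability metrics mentioned above, the sketch below computes average controllability for each node of a linearized network model: the SC matrix is scaled to be stable, and the trace of the infinite-horizon controllability Gramian is obtained from a discrete Lyapunov equation. The scaling convention is a common practice in the network-control literature and is an assumption here, not a step prescribed by the reviewed studies.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def average_controllability(sc: np.ndarray) -> np.ndarray:
    """Average controllability of each node for the linear system x[t+1] = A x[t] + B u[t].

    A is the SC matrix scaled to be stable (assumed convention); B selects a single node."""
    n = sc.shape[0]
    A = sc / (1.0 + np.linalg.svd(sc, compute_uv=False)[0])   # spectral scaling for stability
    ac = np.empty(n)
    for i in range(n):
        B = np.zeros((n, 1))
        B[i, 0] = 1.0                                        # control input applied at node i
        W = solve_discrete_lyapunov(A, B @ B.T)              # infinite-horizon controllability Gramian
        ac[i] = np.trace(W)                                  # larger trace = easier to drive on average
    return ac
```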

6. Conclusions

With the rapid development of advanced neuroimaging, whole-brain models have attracted more and more attention. The connectome is one of the achievements made possible by advanced neuroimaging techniques; within this framework, the structure and function of the brain are described by SC and FC derived from neuroimaging data. Initial descriptive whole-brain models considered SC or FC only independently and statically and therefore could not capture the dynamical properties of brain function or the relationship between SC and FC. Recently, multiscale BNMs, also referred to in the literature as communication models, generative models, mechanistic models, or whole-brain computational models, have been gaining more attention. Although their applications are not yet widely developed, they hold great promise for understanding the mechanisms of large-scale brain dynamics. In this brief review, we introduced a pipeline for implementing a multiscale BNM, including the estimation of connections, the choice of node dynamics, and model fitting, and placed an emphasis on machine learning (deep learning) applications in model fitting. Some traditional machine learning algorithms, such as the genetic algorithm, have also been applied to parameter optimization of multiscale BNMs and have shown excellent results. We believe that, as BNMs develop, more and more intelligent algorithms will be applied to model parameter optimization, producing more biophysically meaningful models. Finally, we compared multiscale BNMs with other popular brain models and outlined future developments. We believe that combining multiscale BNMs with control theory will offer great potential for understanding cognition and brain diseases as well as for developing more effective and personalized treatments for neurological and psychiatric diseases.

Author Contributions

Conceptualization, M.L.; methodology, software, validation, Z.G. (Zhaohua Guo); Z.G. (Zicheng Gao); Y.C. and J.F.; formal analysis, investigation, resources, data curation, Z.G. (Zhaohua Guo); writing—original draft preparation, M.L.; writing—review and editing, visualization, M.L.; supervision, project administration, funding acquisition M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (grant number 61501330, 62171312), and by the Tianjin Municipal Education Commission scientific research project (grant number 2020KJ114).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no known competing financial interest or personal relationship that could have appeared to influence the work reported in this paper.

References

  1. Glomb, K.; Ponce-Alvarez, A.; Gilson, M.; Ritter, P.; Deco, G. Resting state networks in empirical and simulated dynamic functional connectivity. Neuroimage 2017, 159, 388–402. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Deco, G.; Jirsa, V.K.; Robinson, P.A.; Breakspear, M.; Friston, K. The dynamic brain: From spiking neurons to neural masses and cortical fields. PLoS Comput. Biol. 2008, 4, e1000092. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Hodgkin, A.L.; Huxley, F. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 1952, 117, 500–544. [Google Scholar] [CrossRef] [PubMed]
  4. Izhikevich, E.M. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting; The MIT Press: Cambridge, MA, USA, 2007. [Google Scholar]
  5. Morris, C.; Lecar, H. Voltage oscillations in the barnacle giant muscle fiber. Biophys. J. 1981, 35, 193–213. [Google Scholar] [CrossRef] [Green Version]
  6. Terman, D.; Rubin, J.E.; Yew, A.C.; Wilson, C.J. Activity patterns in a model for the subthalamopallidal network of the basal ganglia. J. Neurosci. 2002, 22, 2963–2976. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. So, R.Q.; Kent, A.R.; Grill, W.M. Relative contributions of local cell and passing fiber activation and lesioning: A computational modeling study. J. Comput. Neurosci. 2012, 32, 499–519. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  8. Schiff, S.J. Towards model-based control of Parkinson’s disease. Philos. Trans. A Math Phys. Eng. Sci. 2010, 368, 2269–2308. [Google Scholar] [CrossRef] [Green Version]
  9. Lu, M.; Wei, X.; Che, Y.; Wang, J.; Loparo, K.A. Application of reinforcement learning to deep brain stimulation in a computational model of Parkinson’s disease. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 339–349. [Google Scholar] [CrossRef]
  10. Liu, C.; Wang, J.; Li, H.; Lu, M.; Deng, B.; Yu, H.; Wei, X.; Fietkiewicz, C.; Loparo, K.A. Closed-loop modulation of the pathological disorders of the basal ganglia network. IEEE Trans. Neural Netw. Learn Syst. 2017, 28, 371–382. [Google Scholar] [CrossRef]
  11. Depannemaecker, D.; Destexhe, A.; Jirsa, V.; Bernard, C. Modeling seizures: From single neurons to networks. Seizure 2021, 90, 4–8. [Google Scholar] [CrossRef]
  12. Boustani, E.I.; Destexhe, A. A master equation formalism for macroscopic modeling of asynchronous irregular activity states. Neural Comput. 2009, 21, 46–100. [Google Scholar] [CrossRef] [PubMed]
  13. Wilson, H.R.; Cowan, J.D. Excitatory and inhibitory interactions in localized populations of model neuron. Biophys. J. 1972, 12, 1–24. [Google Scholar] [CrossRef] [Green Version]
  14. Jansen, B.H.; Rit, V.G. Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol. Cybern. 1995, 73, 357–366. [Google Scholar] [CrossRef] [PubMed]
  15. Kuramoto, Y.; Nishikawa, I. Statistical macrodynamics of large dynamical systems. Case of a phase transition in oscillator communities. J. Stat. Phys. 1987, 49, 569–605. [Google Scholar] [CrossRef]
  16. Breakspear, M. Dynamic models of large-scale brain activity. Nat. Neurosci. 2017, 20, 340–352. [Google Scholar] [CrossRef] [PubMed]
  17. Sanchez-Todo, R.; Salvador, R.; Santarnecchi, E.; Wendling, F.; Deco, G.; Ruffini, G. Personalization of hybrid brain models from neuroimaging and electrophysiology data. bioRxiv 2018, 461350. [Google Scholar] [CrossRef] [Green Version]
  18. Bullmore, E.; Sporns, O. Complex brain networks: Graph theoretical analysis of structure and functional systems. Nat. Rev. Neurosci. 2009, 10, 186–198. [Google Scholar] [CrossRef]
  19. Deco, G.; Kringelbach, M.L. Great expectations: Using whole-brain computational connectomics for understanding neuropsychiatric disorders. Neuron 2014, 84, 892–905. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Hutchison, R.M.; Womelsdorf, T.; Allen, E.A.; Bandettini, P.A.; Calhoun, V.D.; Corbetta, M.; Gonzalez-Castillo, J. Dynamic functional connectivity: Promise, issues, and interpretations. NeuroImage 2013, 80, 360–378. [Google Scholar] [CrossRef] [Green Version]
  21. Heitmann, S.; Breakspear, M. Putting the “dynamic” back into dynamic functional connectivity. Net. Neurosci. 2018, 2, 150–174. [Google Scholar] [CrossRef]
  22. Giacopelli, G.; Tegolo, D.; Spera, E.; Migliore, M. On the structural connectivity of large-scale models of brain networks at cellular level. Sci. Rep. 2021, 11, 4345. [Google Scholar] [CrossRef] [PubMed]
  23. Markov, N.T.; Ercsey-Ravasz, M.M.; Ribeiro Gomes, A.R.; Lamy, C.; Magrou, L.; Vezoli, J.; Misery, P.; Falchier, A.; Quilodran, R.; Gariel, M.A. A weighted and directed interareal connectivity matrix for macaque cerebral cortex. Cereb. Cortex 2014, 24, 17–36. [Google Scholar] [CrossRef] [PubMed]
  24. Deco, G.; Ponce-Alvarez, A.; Mantini, D.; Romani, G.L.; Hagmann, P.; Corbetta, M. Resting-state functional connectivity emerges from structurally and dynamically shaped slow linear fluctuations. J. Neurosci. 2013, 33, 11239–11252. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Deco, G.; Jirsa, V.K. Ongoing cortical activity at rest: Criticality, multistability, and ghost attractors. J. Neurosci. 2012, 32, 3366–3375. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Hagmann, P.; Cammoun, L.; Gigandet, X.; Meuli, R.; Honey, C.J.; Wedeen, V.J.; Sporns, O. Mapping the structural core of human cerebral cortex. PLoS Biol. 2008, 6, e159. [Google Scholar] [CrossRef] [PubMed]
  27. Zalesky, A.; Fornito, A.; Harding, I.H.; Cocchi, L.; Yucel, M.; Pantelis, C.; Bullmore, E.T. Whole-brain anatomical networks: Does the choice of nodes matter? NeuroImage 2010, 50, 970–983. [Google Scholar] [CrossRef]
  28. Eickhoff, S.B.; Constable, R.T.; Yeo, B.T.T. Topographic organization of the cerebral cortex and brain cartography. Neuroimage 2018, 170, 332–347. [Google Scholar] [CrossRef] [Green Version]
  29. Popovych, O.V.; Manos, T.; Hoffstaedter, F.; Eickhoff, S.B. What can computational models contribute to the neuroimaging data analytics. Front. Syst. Neurosci. 2019, 12, 68. [Google Scholar] [CrossRef] [Green Version]
  30. Yeh, C.H.; Jones, D.K.; Liang, X.; Descoteaux, M.; Connelly, A. Mapping structural connectivity using diffusion MRI: Challenges and opportunities. J. Magn. Reson. Imaging 2021, 53, 1666–1682. [Google Scholar] [CrossRef]
  31. Eickhoff, S.B.; Yeo, B.T.T.; Genon, S. Imaging-based parcellations of the human brain. Nat. Rev. Neurosci. 2018, 19, 672–686. [Google Scholar] [CrossRef]
  32. Pathak, A.; Roy, D.; Banerjee, A. Whole-brain network models: From physics to bedside. Front. Comput. Neurosci. 2022, 16, 866517. [Google Scholar] [CrossRef] [PubMed]
  33. Nowinski, W.L. Evolution of human brain atlases in terms of content, applications, functionality, and availability. Neuroinformatics 2021, 19, 1–22. [Google Scholar] [CrossRef] [PubMed]
  34. Kale, P.; Zalesky, A.; Gollo, L.L. Estimating the impact of structural directionality: How reliable are undirected connectomes? Netw. Neurosci. 2018, 2, 259–284. [Google Scholar] [CrossRef] [PubMed]
  35. Friston, K.J. Functional and effective connectivity: A review. Brain Connect 2011, 1, 13–36. [Google Scholar] [CrossRef]
  36. Gilson, M.; Moreno-Bote, R.; Ponce-Alvarez, A.; Ritter, P.; Deco, G. Estimation of directed effective connectivity from fMRI functional connectivity hints at asymmetries of cortical connectome. PLoS Comput. Biol. 2016, 12, e1004762. [Google Scholar] [CrossRef] [Green Version]
  37. Murray, J.D.; Demirtas, M.; Anticevic, A. Biophysical modeling of large-scale brain dynamics and applications for computational psychiatry. Biol. Psychiatry Cogn. Neurosci. Neuroimaging 2018, 3, 777–787. [Google Scholar] [CrossRef]
  38. Modha, D.S.; Singh, R. Network architecture of the long-distance pathways in the macaque brain. Proc. Natl. Acad. Sci. USA 2010, 107, 13485–13490. [Google Scholar] [CrossRef] [Green Version]
  39. Forrester, M.; Crofts, J.J.; Sotiropoulos, S.N.; Coombes, S.; O’Dea, R.D. The role of node dynamics in shaping emergent functional connectivity patterns in the brain. Netw. Neurosci. 2020, 4, 467–483. [Google Scholar] [CrossRef] [Green Version]
  40. Stam, C.J.; van Straaten, E.C.; Van Dellen, E.; Tewarie, P.; Gong, G.; Hillebrand, A.; Meier, J.; Van Mieghem, P. The relation between structural and functional connectivity patterns in complex brain networks. Int. J. Psychophysiol. 2016, 103, 149–160. [Google Scholar] [CrossRef]
  41. Hlinka, J.; Coombes, S. Using computational models to relate structural and functional brain connectivity. Eur. J. Neurosci. 2012, 36, 2137–2145. [Google Scholar] [CrossRef]
  42. Honey, C.J.; Kötter, R.; Breakspear, M.; Sporns, O. Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proc. Natl. Acad. Sci. USA 2007, 104, 10240–10245. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Breakspear, M.; Terry, J.R.; Friston, K.J. Modulation of excitatory synaptic coupling facilitates synchronization and complex dynamics in a biophysical model of neuronal dynamics. Network 2003, 14, 703–732. [Google Scholar] [CrossRef] [PubMed]
  44. Ghosh, A.; Rho, Y.; McIntosh, A.R.; Kötter, R.; Jirsa, V.K. Noise during rest enables the exploration of the brain’s dynamic repertoire. PLoS Comput. Biol. 2008, 4, e1000196. [Google Scholar] [CrossRef] [PubMed]
  45. Deco, G.; Jirsa, V.K.; McIntosh, A.R. Emerging concepts for the dynamical organization of resting-state activity in the brain. Nat. Rev. Neurosci. 2011, 12, 43–56. [Google Scholar] [CrossRef]
  46. Anticevic, A.; Murray, J.D.; Barch, D.M. Bridging levels of understanding in schizophrenia through computational modeling. Clin. Psychol. Sci. 2015, 3, 433–459. [Google Scholar] [CrossRef] [Green Version]
  47. Murray, J.D.; Wang, X.J. Cortical circuit models in psychiatry: Linking disrupted excitation-inhibition balance to cognitive deficits associated with schizophrenia. In Computational Psychiatry: Mathematical Modeling of Mental Illness; Academic Press: London, UK, 2017; pp. 3–25. [Google Scholar]
  48. Deco, G.; Jirsa, V.; McIntosh, A.R.; Sporns, O.; Kötter, R. Key role of coupling, delay, and noise in resting brain fluctuations. Proc. Natl. Acad. Sci. USA 2009, 106, 10302–10307. [Google Scholar] [CrossRef] [Green Version]
  49. Coronel-Oliveros, C.; Castro, S.; Cofre, R.; Orio, P. Structural features of the human connectome that facilitate the switching of brain dynamics via noradrenergic neuromodulation. Front. Comput. Neurosci. 2021, 15, 687075. [Google Scholar] [CrossRef]
  50. Hellyer, P.J.; Scott, G.; Shanahan, M.; Sharp, D.J.; Leech, R. Cognitive flexibility through metastable neural dynamics is disrupted by damage to the structural connectome. J. Neurosci. 2015, 35, 9050–9063. [Google Scholar] [CrossRef] [Green Version]
  51. Ponce-Alvarez, A.; Deco, G.; Hagmann, P.; Romani, G.L.; Mantini, D.; Corbetta, M. Resting-state temporal synchronization networks emerge from connectivity topology and heterogeneity. PLoS Comput. Biol. 2015, 11, e1004100. [Google Scholar] [CrossRef] [Green Version]
  52. Bettinardi, R.G.; Deco, G.; Karlaftis, V.M.; Van Hartevelt, T.J.; Fernandes, H.M.; Kourtzi, Z.; Kringelbach, M.L.; Zamora-López, G. How structure sculpts function: Unveiling the contribution of anatomical connectivity to the brain’s spontaneous correlation structure. Chaos 2017, 27, 047409. [Google Scholar] [CrossRef]
  53. Demirtas, M.; Falcon, C.; Tucholka, A.; Gispert, J.D.; Molinuevo, J.L.; Deco, G. A whole-brain computational modeling approach to explain the alterations in resting-state functional connectivity during progression of Alzheimer’s disease. Neuroimage Clin. 2017, 16, 343–354. [Google Scholar] [CrossRef] [PubMed]
  54. López-González, A.; Panda, R.; Ponce-Alvarez, A.; Zamora-López, G.; Escrichs, A.; Martial, C.; Thibaut, A.; Gosseries, O.; Kringelbach, M.L.; Annen, J.; et al. Loss of consciousness reduces the stability of brain hubs and the heterogeneity of brain dynamics. Commun. Biol. 2021, 4, 1–15. [Google Scholar] [CrossRef] [PubMed]
  55. Lord, L.D.; Stevner, A.B.; Deco, G.; Kringelbach, M.L. Understanding principles of integration and segregation using whole-brain computational connectomics: Implications for neuropsychiatric disorders. Philos. Trans. R Soc. A Math Phys. Eng. Sci. 2017, 375, 20160283. [Google Scholar] [CrossRef] [Green Version]
  56. Schmidt, R.; LaFleur, K.J.R.; De Reus, M.A.; van den Berg, L.H.; van den Heuvel, M.P. Kuramoto model simulation of neural hubs and dynamic synchrony in the human cerebral connectome. BMC Neurosci. 2015, 16, 54. [Google Scholar] [CrossRef] [Green Version]
  57. Breakspear, M.; Heitmann, S.; Daffertshofer, A. Generative models of cortical oscillations: Neurobiological implications of the kuramoto model. Front. Hum. Neurosci. 2010, 4, 190. [Google Scholar] [CrossRef] [Green Version]
  58. Sanz Perl, Y.; Pallavicini, C.; Pérez Ipiña, I.; Demertzi, A.; Bonhomme, V.; Martial, C.; Panda, R.; Annen, J.; Ibañez, A.; Kringelbach, M.; et al. Perturbations in dynamical models of whole-brain activity dissociate between the level and stability of consciousness. PLoS Comput. Biol. 2021, 17, e1009139. [Google Scholar] [CrossRef] [PubMed]
  59. Glasser, M.F.; Goyal, M.S.; Preuss, T.M.; Raichle, M.E.; Van Essen, D.C. Trends and properties of human cerebral cortex: Correlations with cortical myelin content. Neuroimage 2014, 93, 165–175. [Google Scholar] [CrossRef] [Green Version]
  60. Deco, G.; Cabral, J.; Woolrich, M.W.; Stevner, A.B.; Van Hartevelt, T.J.; Kringelbach, M.L. Single or multiple frequency generators in ongoing brain activity: A mechanistic whole-brain model of empirical MEG data. Neuroimage 2017, 152, 538–550. [Google Scholar] [CrossRef]
  61. Demirtaş, M.; Burt, J.B.; Helmer, M.; Ji, J.L.; Adkinson, B.D.; Glasser, M.F.; Van Essen, D.C.; Sotiropoulos, S.N.; Anticevic, A.; Murray, J.D. Hierarchical heterogeneity across human cortex shapes large-scale neural dynamics. Neuron 2019, 101, 1181–1194.e13. [Google Scholar] [CrossRef] [Green Version]
  62. Schirner, M.; McIntosh, A.R.; Jirsa, V.; Deco, G.; Ritter, P. Inferring multi-scale neural mechanisms with brain network modelling. Elife 2018, 8, e28927. [Google Scholar] [CrossRef]
  63. Lee, W.H.; Frangou, S. Emergence of metastable dynamics in functional brain organization via spontaneous fMRI signal and whole-brain computational modeling. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2017, 2017, 4471–4474. [Google Scholar]
  64. Cabral, J.; Kringelbach, M.L.; Deco, G. Functional graph alterations in schizophrenia: A result from a global anatomic decoupling? Pharmacopsychiatry 2012, 45, S57–S64. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Kaboodvand, N.; van den Heuvel, M.P.; Fransson, P. Adaptive frequency-based modeling of whole-brain oscillations: Predicting regional vulnerability and hazardousness rates. Netw. Neurosci. 2019, 3, 1094–1120. [Google Scholar] [CrossRef] [PubMed]
  66. Jobst, B.M.; Hindriks, R.; Laufs, H.; Tagliazucchi, E.; Hahn, G.; Ponce-Alvarez, A.; Stevner, A.B.A.; Kringelbach, M.L.; Deco, G. Increased stability and breakdown of brain effective connectivity during slow-wave sleep: Mechanistic insights from whole-brain computational modelling. Sci. Rep. 2017, 7, 4634. [Google Scholar] [CrossRef] [Green Version]
  67. Wischnewski, K.J.; Eickhoff, S.B.; Jirsa, V.K.; Popovych, O.V. Towards an efficient validation of dynamical whole-brain models. Sci. Rep. 2022, 12, 4331. [Google Scholar] [CrossRef]
  68. Cabral, J.; Kringelbach, M.L.; Deco, G. Exploring the network dynamics underlying brain activity during rest. Prog. Neurobiol. 2014, 114, 102–131. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  69. Cabral, J.; Hugues, E.; Sporns, O.; Deco, G. Role of local network oscillations in resting-state functional connectivity. NeuroImage 2011, 57, 130–139. [Google Scholar] [CrossRef]
  70. Deco, G.; Jirsa, V.K.; McIntosh, A.R. Resting brains never rest: Computational insights into potential cognitive architectures. Trends Neurosci. 2013, 36, 268–274. [Google Scholar] [CrossRef] [Green Version]
  71. Shakil, S.; Lee, C.H.; Keilholz, S.D. Evaluation of sliding window correlation performance for characterizing dynamic functional connectivity and brain states. NeuroImage 2016, 133, 111–128. [Google Scholar] [CrossRef] [Green Version]
  72. Kashyap, A.; Keilholz, S. Dynamic properties of simulated brain network models and empirical resting-state data. Netw. Neurosci. 2019, 3, 405–426. [Google Scholar] [CrossRef]
  73. Allen, E.A.; Damaraju, E.; Plis, S.M.; Erhardt, E.B.; Eichele, T.; Calhoun, V.D. Tracking whole-brain connectivity dynamics in the resting state. Cereb. Cortex 2014, 24, 663–676. [Google Scholar] [CrossRef] [PubMed]
  74. Cabral, J.; Vidaurre, D.; Marques, P.; Magalhães, R.; Silva Moreira, P.; Miguel Soares, J.; Deco, G.; Sousa, N.; Kringelbach, M. Cognitive performance in healthy older adults relates to spontaneous switching between states of functional connectivity during rest. Sci. Rep. 2017, 7, 5135. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  75. Deco, G.; Kringelbach, M.L.; Jirsa, V.K.; Ritter, P. The dynamics of resting fluctuations in the brain: Metastability and its dynamical cortical core. Sci. Rep. 2017, 7, 3095. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  76. Sakoğlu, U.; Pearlson, G.D.; Kiehl, K.A.; Wang, Y.M.; Michael, A.M.; Calhoun, V.D. A method for evaluating dynamic functional network connectivity and task-modulation: Application to schizophrenia. MAGMA 2010, 23, 351–366. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  77. Wang, S.; Wen, H.; Hu, X.; Xie, P.; Qiu, S.; Qian, Y.; Qiu, J.; He, H. Transition and dynamics reconfiguration of whole-brain network in major depressive disorder. Mol. Neurobiol. 2020, 57, 4031–4044. [Google Scholar] [CrossRef]
  78. Friston, K.J.; Harrison, L.; Penny, W. Dynamic causal modelling. Neuroimage 2003, 19, 1273–1302. [Google Scholar] [CrossRef]
  79. Razi, A.; Seghier, M.L.; Zhou, Y.; McColgan, P.; Zeidman, P.; Park, H.J.; Sporns, O.; Rees, G.; Friston, K.J. Large-scale DCMs for resting-state fMRI. Netw. Neurosci. 2017, 1, 222–241. [Google Scholar] [CrossRef]
  80. Kashyap, A.; Keilholz, S. Brain network constraints and recurrent neural networks reproduce unique trajectories and state transitions seen over the span of minutes in resting-state fMRI. Netw. Neurosci. 2020, 4, 448–466. [Google Scholar] [CrossRef] [Green Version]
  81. Kashyap, A.; Plis, S.; Schirner, M.; Ritter, P.; Keilholz, S. A deep learning approach to estimating initial conditions of brain network models in reference to measured fMRI data. bioRxiv 2021. [Google Scholar] [CrossRef]
  82. Chen, R.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D. Neural ordinary differential equations. arXiv 2018, arXiv:1806.07366. [Google Scholar]
  83. Bassett, D.S.; Xia, C.H.; Satterthwaite, T.D. Understanding the emergence of neuropsychiatric disorders with network neuroscience. Biol. Psychiatry Cogn. Neurosci. Neuroimaging 2018, 3, 742–753. [Google Scholar] [CrossRef] [PubMed]
  84. Bassett, D.S.; Zurn, P.; Gold, J.I. On the nature and use of models in network neuroscience. Nat. Rev. Neurosci. 2018. Epub ahead of print. [Google Scholar] [CrossRef] [PubMed]
  85. Bassett, D.S.; Sporns, O. Network neuroscience. Nat. Neurosci. 2017, 20, 353–364. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  86. Valdes-Sosa, P.A.; Roebroeck, A.; Daunizeau, J.; Friston, K. Effective connectivity: Influence, causality and biophysical modeling. Neuroimage 2011, 58, 339–361. [Google Scholar] [CrossRef] [PubMed]
  87. Friston, K.J.; Litvak, V.; Oswal, A.; Razi, A.; Stephan, K.E.; van Wijk, B.C.M.; Ziegler, G.; Zeidman, P. Bayesian model reduction and empirical Bayes for group (DCM) studies. Neuroimage 2016, 128, 413–431. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  88. Friston, K.J.; Mattout, J.; Trujillo-Barreto, N.; Ashburner, J.; Penny, W. Variational free energy and the Laplace approximation. Neuroimage 2007, 34, 220–234. [Google Scholar] [CrossRef] [PubMed]
  89. Li, G.; Yap, P.T. From descriptive connectome to mechanistic connectome: Generative modeling in functional magnetic resonance imaging analysis. Front. Hum. Neurosci. 2022, 16, 940842. [Google Scholar] [CrossRef]
  90. Friston, K.; Moran, R.; Seth, A.K. Analysing connectivity with Granger causality and dynamic causal modelling. Curr. Opin. Neurobiol. 2013, 23, 172–178. [Google Scholar] [CrossRef] [Green Version]
  91. Buxton, R.; Wong, E.; Frank, L. Dynamics of blood flow and oxygenation changes during brain activation: The balloon model. Magn. Reson. Med. 1998, 39, 855–864. [Google Scholar] [CrossRef]
  92. Stephan, K.; Kasper, L.; Harrison, L.; Daunizeau, J.; den Ouden, H.; Breakspear, M.; Friston, K. Nonlinear dynamic causal models for fMRI. Neuroimage 2008, 42, 649–662. [Google Scholar] [CrossRef] [Green Version]
  93. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006; Volume 12, p. 105. [Google Scholar]
  94. Frässle, S.; Lomakina, E.I.; Kasper, L.; Manjaly, Z.M.; Leff, A.; Pruessmann, K.P. A generative model of whole-brain effective connectivity. Neuroimage 2018, 179, 505–529. [Google Scholar] [CrossRef] [PubMed]
  95. Frässle, S.; Manjaly, Z.M.; Do, C.T.; Kasper, L.; Pruessmann, K.P.; Stephan, K.E. Whole-brain estimates of directed connectivity for human connectomics. NeuroImage 2021, 225, 117491. [Google Scholar] [CrossRef] [PubMed]
  96. Frässle, S.; Lomakina, E.I.; Razi, A.; Friston, K.J.; Buhmann, J.M.; Stephan, K.E. Regression DCM for fMRI. NeuroImage 2017, 155, 406–421. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  97. Gilson, M.; Deco, G.; Friston, K.J.; Hagmann, P.; Mantini, D.; Betti, V.; Romani, G.L.; Corbetta, M. Effective connectivity inferred from fMRI transition dynamics during movie viewing points to a balanced reconfiguration of cortical interactions. Neuroimage 2017, 180, 534–546. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  98. Gilson, M.; Zamora-López, G.; Pallarés, V.; Adhikari, M.H.; Senden, M.; Campo, A.T.; Mantini, D.; Corbetta, M.; Deco, G.; Insabato, A. Model-based whole-brain effective connectivity to study distributed cognition in health and disease. Netw. Neurosci. 2020, 4, 338–373. [Google Scholar] [CrossRef] [Green Version]
  99. Lebouvier, T.; Chaumette, T.; Paillusson, S.; Duyckaerts, C.; Bruley des Varannes, S.; Neunlist, M.; Derkinderen, P. The second brain and Parkinson’s disease. Eur. J. Neurosci. 2009, 30, 735–741. [Google Scholar] [CrossRef]
  100. Van Eimeren, T.; Monchi, O.; Ballanger, B.; Strafella, A.P. Dysfunction of the default mode network in Parkinson’s disease. Arch. Neurol. 2009, 66, 877–883. [Google Scholar] [CrossRef] [Green Version]
  101. Van Hartevelt, T.J.; Cabral, J.; Deco, G.; Moller, A.; Green, A.L.; Aziz, T.Z.; Kringelbach, M.L. Neural plasticity in human brain connectivity: The effects of long term deep brain stimulation of the subthalamic nucleus in Parkinson’s disease. PLoS ONE 2014, 9, e86496. [Google Scholar] [CrossRef]
  102. Saenger, V.M.; Kahan, J.; Foltynie, T.; Friston, K.; Aziz, T.Z.; Green, A.L.; Hartevelt, T.J.; Cabral, J.; Stevner, A.B.A.; Fernandes, H.M.; et al. Uncovering the underlying mechanisms and whole-brain dynamics of deep brain stimulation for Parkinson’s disease. Sci. Rep. 2017, 7, 1–14. [Google Scholar] [CrossRef]
  103. Brier, M.R.; Thomas, J.B.; Ances, B.M. Network dysfunction in Alzheimer’s disease: Refining the disconnection hypothesis. Brain Connect 2014, 4, 299–311. [Google Scholar] [CrossRef] [Green Version]
  104. Dennis, E.L.; Thompson, P.M. Functional brain connectivity using fMRI in aging and Alzheimer’s disease. Neuropsychol. Rev. 2014, 24, 49–62. [Google Scholar] [CrossRef] [PubMed]
  105. Van Nifterick, A.M.; Gouw, A.A.; van Kesteren, R.E.; Scheltens, P.; Stam, C.J.; de Haan, W. A multiscale brain network model links Alzheimer’s disease-mediated neuronal hyperactivity to large-scale oscillatory slowing. Alzheimers Res. Ther. 2022, 14, 101. [Google Scholar] [CrossRef] [PubMed]
  106. Sanchez-Rodriguez, L.M.; Iturria-Medina, Y.; Baines, E.A.; Mallo, S.C.; Dousty, M.; Sotero, R.C. Design of optimal nonlinear network controllers for Alzheimer’s disease. PLoS Comput. Biol. 2018, 14, e1006136. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  107. Stephan, K.E.; Baldeweg, T.; Friston, K.J. Synaptic plasticity and disconnection in schizophrenia. Biol. Psychiatry 2006, 59, 929–939. [Google Scholar] [CrossRef]
  108. Uhlhaas, P.J. Dysconnectivity, large-scale networks and neuronal dynamics in schizophrenia. Curr. Opin. Neurobiol. 2013, 23, 283–290. [Google Scholar] [CrossRef]
  109. Lewis, D.A.; Hashimoto, T.; Volk, D.W. Cortical inhibitory neurons and schizophrenia. Nat. Rev. Neurosci. 2005, 6, 312–324. [Google Scholar] [CrossRef]
  110. Nakazawa, K.; Zsiros, V.; Jiang, Z.; Nakao, K.; Kolata, S.; Zhang, S.; Belforte, J.E. GABAergic interneuron origin of schizophrenia pathophysiology. Neuropharmacology 2012, 62, 1574–1583. [Google Scholar] [CrossRef] [Green Version]
  111. Cabral, J.; Fernandes, H.M.; Van Hartevelt, T.J.; James, A.C.; Kringelbach, M.L.; Deco, G. Structural connectivity in schizophrenia and its impact on the dynamics of spontaneous functional networks. Chaos 2013, 23, 046111. [Google Scholar] [CrossRef] [Green Version]
  112. Yang, G.J.; Murray, J.D.; Repovs, G.; Cole, M.W.; Savic, A.; Glasser, M.F.; Pittenger, C.; Krystal, J.H.; Wang, X.; Pearlson, G.D.; et al. Altered global brain signal in schizophrenia. Proc. Natl. Acad. Sci. USA 2014, 111, 7438–7443. [Google Scholar] [CrossRef] [Green Version]
  113. Yang, G.J.; Murray, J.D.; Wang, X.J.; Glahn, D.C.; Pearlson, G.D.; Repovs, G.; Krystal, J.H.; Anticevic, A. Functional hierarchy underlies preferential connectivity disturbances in schizophrenia. Proc. Natl. Acad. Sci. USA 2016, 113, E219–E228. [Google Scholar] [CrossRef] [Green Version]
  114. Anticevic, A.; Hu, X.; Xiao, Y.; Hu, J.; Li, F.; Bi, F.; Cole, M.W.; Savic, A.; Yang, G.J.; Repovs, G.; et al. Early-course unmedicated schizophrenia patients exhibit elevated prefrontal connectivity associated with longitudinal change. J. Neurosci. 2015, 35, 267–286. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  115. Cole, M.W.; Yang, G.J.; Murray, J.D.; Repovš, G.; Anticevic, A. Functional connectivity change as shared signal dynamics. J. Neurosci. Methods 2016, 259, 22–39. [Google Scholar] [CrossRef] [PubMed]
  116. Anticevic, A.; Gancsos, M.; Murray, J.D.; Repovs, G.; Driesen, N.R.; Ennis, D.J.; Niciu, M.J.; Morgan, P.T.; Surti, T.S.; Bloch, M.H.; et al. NMDA receptor function in large-scale anticorrelated neural systems with implications for cognition and schizophrenia. Proc. Natl. Acad. Sci. USA 2012, 109, 16720–16725. [Google Scholar] [CrossRef] [Green Version]
  117. Zhang, X.; Braun, U.; Harneit, A.; Zang, Z.; Geiger, L.S.; Betzel, R.F.; Chen, J.; Schweiger, J.L.; Schwarz, K.; Reinwald, J.R.; et al. Generative network models of altered structural brain connectivity in schizophrenia. Neuroimage 2021, 225, 117510. [Google Scholar] [CrossRef] [PubMed]
  118. Bansal, K.; Nakuci, J.; Muldoon, S.F. Personalized brain network models for assessing structure-functional relationships. Curr. Opin. Neurobiol. 2018, 52, 42–47. [Google Scholar] [CrossRef] [Green Version]
  119. Muldoon, S.F.; Pasqualetti, F.; Gu, S.; Cieslak, M.; Grafton, S.T.; Vettel, J.M.; Bassett, D.S. Stimulation-based control of dynamical brain networks. PLoS Comput. Biol. 2016, 12, e1005076. [Google Scholar] [CrossRef] [Green Version]
  120. Desikan, R.S.; Ségonne, F.; Fischl, B.; Quinn, B.T.; Dickerson, B.C.; Blacker, D.; Buckner, R.L.; Dale, A.M.; Maguire, R.P.; Hyman, B.T.; et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 2006, 31, 968–980. [Google Scholar] [CrossRef]
  121. Tzourio-Mazoyer, N.; Landeau, B.; Papathanassiou, D.; Crivello, F.; Etard, E.; Delcroix, N.; Mazoyer, B.; Joliot, M. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 2002, 15, 273–289. [Google Scholar] [CrossRef]
  122. Proix, T.; Spiegler, A.; Schirner, M.; Rothmeier, S.; Ritter, P.; Jirsa, V.K. How do parcellation size and short-range connectivity affect dynamics in large-scale brain network models? Neuroimage 2016, 142, 135–149. [Google Scholar] [CrossRef]
  123. Deco, G.; Tononi, G.; Boly, M.; Kringelbach, M.L. Rethinking segregation and integration: Contributions of whole-brain modelling. Nat. Rev. Neurosci. 2015, 16, 430–439. [Google Scholar] [CrossRef]
  124. Markov, N.T.; Misery, P.; Falchier, A.; Lamy, C.; Vezoli, J.; Quilodran, R.; Gariel, M.A.; Giroud, P.; Ercsey-Ravasz, M.; Pilaz, L.J.; et al. Weight consistency specifies regularities of macaque cortical networks. Cereb. Cortex 2011, 21, 1254–1272. [Google Scholar] [CrossRef] [PubMed]
  125. Dimitriadis, S.I.; Drakesmith, M.; Bells, S.; Parker, G.D.; Linden, D.E.; Jones, D.K. Improving the reliability of network metrics in structural brain networks by integrating different network weighting strategies into a single graph. Front. Neurosci. 2017, 11, 736. [Google Scholar] [CrossRef] [PubMed]
  126. Messaritaki, E.; Dimitriadis, S.I.; Jones, D.K. Optimization of graph construction can significantly increase the power of structural brain network studies. Neuroimage 2019, 199, 495–511. [Google Scholar] [CrossRef]
  127. Conti, E.; Mitra, J.; Calderoni, S.; Pannek, K.; Shen, K.K.; Pagnozzi, A.; Rose, S.; Mazzotti, S.; Scelfo, D.; Tosetti, M.; et al. Network over-connectivity differentiates autism spectrum disorder from other developmental disorders in toddlers: A diffusion MRI study. Hum. Brain Mapp. 2017, 38, 2333–2344. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  128. Oxtoby, N.P.; Garbarino, S.; Firth, N.C.; Warren, J.D.; Schott, J.M.; Alexander, D.C. Data-driven sequence of changes to anatomical brain connectivity in sporadic Alzheimer’s disease. Front. Neurol. 2017, 8, 580. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  129. Betzel, R.F.; Bassett, D.S. Specificity and robustness of long-distance connections in weighted, interareal connectomes. Proc. Natl. Acad. Sci. USA 2018, 115, E4880–E4889. [Google Scholar] [CrossRef] [Green Version]
  130. Singh, M.F.; Braver, T.S.; Cole, M.W.; Ching, S. Estimation and validation of individualized dynamic brain models with resting state fMRI. Neuroimage 2020, 221, 117046. [Google Scholar] [CrossRef] [PubMed]
  131. Li, G.; Liu, Y.; Zheng, Y.; Wu, Y.; Li, D.; Liang, X.; Chen, Y.; Cui, Y.; Yap, P.T.; Qiu, S.; et al. Multiscale neural modeling of resting-state fMRI reveals executive-limbic malfunction as a core mechanism in major depressive disorder. Neuroimage Clin. 2021, 31, 102758. [Google Scholar] [CrossRef]
  132. Sip, V.; Petkoski, S.; Hashemi, M.; Dickscheid, T.; Amunts, K.; Jirsa, V. Parameter inference on brain network models with unknown node dynamics and spatial heterogeneity. bioRxiv 2022. [Google Scholar] [CrossRef]
  133. Hipp, J.F.; Hawellek, D.J.; Corbetta, M.; Siegel, M.; Engel, A.K. Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat. Neurosci. 2012, 15, 884–890. [Google Scholar] [CrossRef] [Green Version]
  134. Brookes, M.J.; Hale, J.R.; Zumer, J.M.; Stevenson, C.M.; Francis, S.T.; Barnes, G.R.; Owen, J.P.; Morris, P.G.; Nagarajan, S.S. Measuring functional connectivity using MEG: Methodology and comparison with fcMRI. NeuroImage 2011, 56, 1082–1104. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  135. Rabuffo, G.; Fousek, J.; Bernard, C.; Jirsa, V. Neuronal cascades shape whole-brain functional dynamics at rest. eNeuro 2021, 8, 0283-21. [Google Scholar] [CrossRef] [PubMed]
  136. Warbrick, T. Simultaneous EEG-fMRI: What have we learned and what does the future hold? Sensors 2022, 22, 2262. [Google Scholar] [CrossRef] [PubMed]
  137. Yu, Q.; Wu, L.; Bridwell, D.A.; Erhardt, E.B.; Du, Y.; He, H.; Chen, J.; Liu, P.; Sui, J.; Pearlson, G.; et al. Building an EEG-fMRI multi-modal brain graph: A concurrent EEG-fMRI study. Front. Hum. Neurosci. 2016, 10, 476. [Google Scholar] [CrossRef] [Green Version]
  138. Prokopiou, P.C.; Xifra-Porxas, A.; Kassinopoulos, M.; Boudriad, M.H.; Mitsis, G.D. Modeling the hemodynamic response function using EEG-fMRI data during eyes-open resting-state conditions and motor task execution. Brain Topogr. 2022, 35, 302–321. [Google Scholar] [CrossRef] [PubMed]
  139. Santaniello, S.; Gale, J.T.; Sarma, S. Systems approaches to optimizing deep brain stimulation therapies in Parkinson’s disease. WIREs Syst. Biol. Med. 2018. [Google Scholar] [CrossRef] [Green Version]
  140. Fisher, R.S.; Velasco, A.L. Electrical brain stimulation for epilepsy. Nat. Rev. Neurol. 2014, 10, 261–270. [Google Scholar] [CrossRef]
  141. Johnson, M.D.; Lim, H.H.; Netoff, T.I.; Connolly, A.T.; Johnson, N.; Roy, A.; Holt, A.; Lim, K.O.; Carey, J.R.; Vitek, J.L.; et al. Neuromodulation for brain disorders: Challenges and opportunities. IEEE Trans. Biomed. Eng. 2013, 60, 610–624. [Google Scholar] [CrossRef]
  142. Srivastava, P.; Fotiadis, P.; Parkes, L.; Bassett, D.S. The expanding horizons of network neuroscience: From description to prediction and control. Neuroimage 2022, 258, 119250. [Google Scholar] [CrossRef]
  143. Pasqualetti, F.; Zampieri, S.; Bullo, F. Controllability metrics, limitations and algorithms for complex networks. IEEE Trans. Control Netw. Syst. 2014, 1, 40–52. [Google Scholar] [CrossRef] [Green Version]
  144. Srivastava, P.; Nozari, E.; Kim, J.Z.; Ju, H.; Zhou, D.; Becker, C.; Pasqualetti, F.; Pappas, G.J.; Bassett, D.S. Models of communication and control for brain networks: Distinctions, convergence, and future outlook. Netw. Neurosci. 2020, 4, 1122–1159. [Google Scholar] [CrossRef] [PubMed]
  145. Gu, S.; Pasqualetti, F.; Cieslak, M.; Telesford, Q.K.; Yu, A.B.; Kahn, A.E.; Medaglia, J.D.; Vettel, J.M.; Miller, M.B.; Grafton, S.T.; et al. Controllability of structural brain networks. Nat. Commun. 2015, 6, 8414. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  146. Karrer, T.M.; Kim, J.Z.; Stiso, J.; Kahn, A.E.; Pasqualetti, F.; Habel, U.; Bassett, D.S. A practical guide to methodological considerations in the controllability of structural brain networks. J. Neural Eng. 2020, 17, 026031. [Google Scholar] [CrossRef] [Green Version]
  147. Liu, Y.Y.; Slotine, J.J.; Barabasi, A.L. Controllability of complex networks. Nature 2011, 473, 167–173. [Google Scholar] [CrossRef] [PubMed]
  148. Singh, M.; Cole, M.; Braver, T.; Ching, S. Developing control-theoretic objectives for large-scale brain dynamics and cognitive enhancement. Annu. Rev. Control. 2022. [Google Scholar] [CrossRef]
  149. Singh, M.; Wang, M.; Cole, M.; Ching, S. Efficient Identification for Modeling High-Dimensional Brain Dynamics; IEEE: Piscataway, NJ, USA, 2022; pp. 1353–1358. [Google Scholar]
  150. Ljung, L. System Identification: Theory for the User; Prentice Hall: Hoboken, NJ, USA, 1987. [Google Scholar]
  151. Yang, Y.; Connolly, A.T.; Shanechi, M.M. A control-theoretic system identification framework and a real-time closed-loop clinical simulation testbed for electrical brain stimulation. J. Neural Eng. 2018, 15, 066007. [Google Scholar] [CrossRef]
Figure 1. According to the level of detail and neurophysiological relevance, models of node dynamics can be divided into biophysical models, oscillator models, and bifurcation models. Mean-field models and Kuramoto oscillators are the most commonly used models of node dynamics.
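As a concrete illustration of the oscillator class named in Figure 1, the minimal sketch below couples Kuramoto phase oscillators through a structural connectivity matrix. The synthetic SC, parameter values, and Euler integration scheme are assumptions introduced here for brevity, not values taken from the studies reviewed.

```python
# Minimal sketch (synthetic SC, illustrative parameters): Kuramoto phase
# oscillators as node dynamics, coupled through structural connectivity.
import numpy as np

rng = np.random.default_rng(0)
N = 20
sc = rng.random((N, N)); sc = (sc + sc.T) / 2; np.fill_diagonal(sc, 0)  # toy SC
omega = 2 * np.pi * rng.normal(0.05, 0.01, N)   # natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, N)            # initial phases
K, dt = 0.5, 0.01                               # global coupling, time step (s)

order = []
for _ in range(5000):                           # simple Euler integration
    coupling = (sc * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + K * coupling)
    order.append(np.abs(np.exp(1j * theta).mean()))  # Kuramoto order parameter

print(f"mean synchrony over the last 10 s: {np.mean(order[-1000:]):.3f}")
```

Sweeping the global coupling K in such a model is the typical way oscillator-based BNMs are tuned to reproduce empirical synchronization levels.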
Figure 2. The pipeline for fitting BNMs to time-averaged FC. Node dynamics are modeled by neural mass models that output average firing rates or membrane potentials. SC is estimated by tractography combined with a parcellation scheme and acts as the edges of the brain network. The mesoscopic neuronal populations are then connected by SC to simulate macroscopic spatiotemporal brain activity. The output of each node is fed into a hemodynamic model to simulate BOLD time series; correlation coefficients (such as Pearson’s correlation coefficient) over the whole time interval are then computed between the time series of each pair of nodes to construct the time-averaged FC. The Euclidean distance is used to evaluate the similarity between empirical and simulated FC [53,66], based on which model parameters can be optimized.
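To make this pipeline concrete, the sketch below fits a single global coupling parameter by minimizing the Euclidean distance between empirical and simulated static FC. It is only a toy illustration: the synthetic SC, the linear noise-driven surrogate standing in for the neural mass plus hemodynamic model, and the coupling grid are all assumptions introduced here.

```python
# Minimal sketch of fitting a BNM parameter to time-averaged FC. The simulator
# below is a toy linear surrogate; in practice it would be a neural mass model
# plus a hemodynamic (e.g., Balloon-Windkessel) model, as described in Figure 2.
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 400                                   # regions, BOLD samples

sc = np.abs(rng.normal(size=(N, N))); sc = (sc + sc.T) / 2
np.fill_diagonal(sc, 0)
sc /= np.linalg.eigvalsh(sc).max()               # normalize for stability

def simulate_bold(G, steps=T):
    """Toy stand-in for 'node model + hemodynamic model' at global coupling G."""
    x = np.zeros(N); out = np.empty((steps, N))
    for t in range(steps):
        x = x + 0.1 * (-x + G * sc @ x) + 0.1 * rng.normal(size=N)
        out[t] = x
    return out

def static_fc(bold):
    """Time-averaged FC: Pearson correlation over the whole time interval."""
    return np.corrcoef(bold.T)

# "Empirical" FC: here another simulation plays the role of measured data.
fc_emp = static_fc(simulate_bold(G=0.9))

# Grid search on G, minimizing the Euclidean distance between FC matrices.
iu = np.triu_indices(N, k=1)
dist, best_G = min((np.linalg.norm(static_fc(simulate_bold(G))[iu] - fc_emp[iu]), G)
                   for G in np.linspace(0.1, 0.98, 10))
print(f"best coupling G = {best_G:.2f}, FC distance = {dist:.3f}")
```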
Figure 3. Model fitting through dynamic FC. The sliding-window approach is the most commonly used method to analyze dynamic FC. For simulated or empirical BOLD time series, a window is slid over the entire time series and an FC matrix is computed within each window to obtain a series of FCs. Next, the FC matrices at different time windows are compared with each other using pairwise correlation to form the FCD matrix, in which each axis encodes the time points of the FC series. To fit the BNM, the Kolmogorov–Smirnov distance between the empirical and simulated FCD distributions is minimized [65,75].
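The following sketch implements this fitting metric under simple assumptions: BOLD data are (time, regions) NumPy arrays, and the window length and step (in samples) are illustrative choices rather than values prescribed by the studies cited above.

```python
# Minimal sketch: sliding-window dynamic FC, the FCD matrix, and the
# Kolmogorov-Smirnov distance between empirical and simulated FCD entries.
import numpy as np
from scipy.stats import ks_2samp

def sliding_fc(bold, win=60, step=5):
    """Series of windowed FC matrices from a (time, regions) BOLD array."""
    return np.array([np.corrcoef(bold[t:t + win].T)
                     for t in range(0, bold.shape[0] - win, step)])

def fcd(fc_series):
    """FCD matrix: pairwise correlation between vectorized windowed FCs."""
    iu = np.triu_indices(fc_series.shape[1], k=1)
    vecs = fc_series[:, iu[0], iu[1]]            # (windows, edges)
    return np.corrcoef(vecs)

def fcd_ks(bold_emp, bold_sim):
    """KS distance between upper-triangular FCD distributions (fit objective)."""
    f_emp, f_sim = fcd(sliding_fc(bold_emp)), fcd(sliding_fc(bold_sim))
    return ks_2samp(f_emp[np.triu_indices_from(f_emp, k=1)],
                    f_sim[np.triu_indices_from(f_sim, k=1)]).statistic

# Example with synthetic data standing in for empirical and simulated BOLD.
rng = np.random.default_rng(1)
print(fcd_ks(rng.normal(size=(600, 20)), rng.normal(size=(600, 20))))
```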
Figure 4. The framework of the brain network sequence autoencoder. The measurements at each time point are passed into the LSTM one by one; the LSTM maps the input into a latent space, a lower-dimensional space constrained by the BNM. This latent variable is then fed into the BNM as an initial condition, and the measurement at the next time point is estimated by the BNM forward equations. The system is trained on the mismatch between the actual and estimated measurements at the next time point [81,82].
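A minimal sketch of this idea is given below, assuming PyTorch and random toy data; the single linear "BNM step" is only a placeholder for the actual BNM forward equations, and all dimensions, names, and training settings are illustrative.

```python
# Minimal sketch: LSTM encoder maps a window of measurements to a latent
# initial condition, a toy "BNM step" predicts the next measurement, and the
# system is trained on the one-step prediction error.
import torch
import torch.nn as nn

N = 20                                           # regions
sc = torch.rand(N, N); sc = (sc + sc.T) / 2      # stand-in structural connectivity

class BNMSequenceAutoencoder(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(N, hidden, batch_first=True)
        self.to_state = nn.Linear(hidden, N)     # latent -> BNM initial condition
        self.coupling = nn.Parameter(torch.tensor(0.5))

    def bnm_step(self, x):
        # Toy surrogate for one integration step of the BNM forward equations.
        return x + 0.1 * (-x + self.coupling * x @ sc.T)

    def forward(self, window):
        _, (h, _) = self.encoder(window)         # encode the measurement window
        x0 = self.to_state(h[-1])                # latent state as initial condition
        return self.bnm_step(x0)                 # predicted next measurement

model = BNMSequenceAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bold = torch.randn(8, 50, N)                     # (batch, time, regions) toy data
for _ in range(5):                               # a few illustrative training steps
    pred = model(bold[:, :-1])                   # use all but the last time point
    loss = nn.functional.mse_loss(pred, bold[:, -1])  # mismatch at next time point
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```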
Table 1. Comparison between multiscale BNMs and other brain models.
Network Properties     | Multiscale BNMs                                            | Descriptive BNMs                 | DCM
Node Dynamics          | Differential equations                                     | NA                               | Differential equations
Network Connections    | SC                                                         | SC, FC, EC                       | EC
Model Fitting          | Static FC, Dynamic FC, BOLD signals                        | Static SC, Static FC, Static EC  | Static FC, Dynamic FC, BOLD signals
Network Size           | Large-scale                                                | Large-scale                      | Relatively small networks
Biophysical Foundation | Firing rate, Membrane potential, Excitability, Inhibition  | NA                               | NA