
REVIEW article

Front. Hum. Neurosci., 16 June 2022
Sec. Brain Imaging and Stimulation
Volume 16 - 2022 | https://doi.org/10.3389/fnhum.2022.875201

Behavioral Studies Using Large-Scale Brain Networks – Methods and Validations

Mengting Liu1* Rachel C. Amey2* Robert A. Backer2 Julia P. Simon3 Chad E. Forbes4
  • 1School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
  • 2Department of Psychological and Brain Sciences, University of Delaware, Newark, DE, United States
  • 3Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
  • 4Department of Psychology, Florida Atlantic University, Boca Raton, FL, United States

Mapping human behaviors to brain activity has become a key focus in modern cognitive neuroscience. As methods such as functional MRI (fMRI) advance, cognitive scientists show an increasing interest in investigating neural activity in terms of functional connectivity and brain networks, rather than activation in a single brain region. Due to the noisy nature of neural activity, how behaviors are associated with specific neural signals is not well established. Previous research has suggested graph theory techniques as a solution. Graph theory provides an opportunity to interpret human behaviors in terms of the topological organization of brain network architecture. Graph theory-based approaches, however, only scratch the surface of what neural connections relate to human behavior. Recently, the development of data-driven methods, e.g., machine learning and deep learning approaches, has provided a new perspective for studying the relationship between brain networks and human behaviors across the whole brain, expanding upon the past literature. In this review, we revisit these data-driven approaches to facilitate our understanding of neural mechanisms and build models of human behaviors. We start with the popular graph theory approach and then discuss other data-driven approaches such as connectome-based predictive modeling, multivariate pattern analysis, network dynamic modeling, and deep learning techniques that quantify meaningful networks and connectivity related to cognition and behaviors. Importantly, for each topic, we discuss the pros and cons of the methods and provide examples using our own data to describe how these methods can be applied to real-world neuroimaging data.

Introduction

A key challenge in cognitive neuroscience is determining how human behaviors or mental representations map onto patterns of neural activity. Research has traditionally hypothesized that specific human behaviors or cognitions are closely associated with the activity of a single brain region. Modern neuroscience methods, specifically the development of fMRI, have expanded the scope of human neuroimaging from identifying regional activation in brain images to characterizing communication between pairs of brain regions (Mill et al., 2017). However, evidence now overwhelmingly indicates that whole-brain functional and network activations can be indexed to provide insight into the mechanisms behind human behaviors.

Whole-brain networks play a fundamental role in neuroscience, and numerous scientists have been fascinated by their ability to reveal the brain’s intricate functional properties. Whole-brain networks capture neural connectivity in a relatively unbiased manner. At the same time, one must differentiate the meaningful signatures of neural activity that underlie behaviors from noisy or redundant neural activity. Establishing reliable functional connections, or brain network-based neuromarkers, is pivotal for investigating human behaviors. However, selecting the best neuromarkers in relation to behaviors from the dense whole-brain network is difficult.

Measuring whole-brain neural network activity is complex. It has been suggested that the brain acts as a parallel processor, meaning that multiple regions influence one another across whole-brain neural networks simultaneously. Associating these whole-brain neural connections with human behavior often requires multi-level approaches that have the potential to include every single connection, regional organization, and the whole-brain topological architecture. The noisy nature of brain activity, let alone whole-brain activity, requires a more sensitive statistical approach to detect robust associations with behaviors. Furthermore, recent theories have emphasized that neural computation may be more dynamic than previously thought (Langdon and Chaudhuri, 2021). In other words, networks previously associated with behavior may need to be reconsidered in a more dynamic fashion. With the help of recent advances in statistical methods, it is now much easier to find clear brain-behavior associations from unbiased whole-brain networks in static and dynamic time series. However, given the complexity of the results, meaningful interpretation of identified brain regions can be a challenge. Selecting an optimal technique is integral for the interpretation of results.

This review focuses on two topics: first, how analyzing whole-brain neural networks can facilitate our understanding of neural mechanisms and support models of behavior; and second, how a multi-level approach, both spatial and temporal, can be implemented to obtain unbiased whole-brain neural network results. We review popular approaches for analyzing whole-brain activity such as graph theory, connectome-based predictive modeling, multivariate pattern analysis, network dynamic modeling, and deep learning techniques that quantify meaningful whole-brain networks related to cognition and behavior. Importantly, each technique is reviewed with the pros and cons of its application to inform readers of the best approach for their data (a table is also provided for convenience, Table 1). Furthermore, we provide examples for each technique, describing how the method can be applied to real-world neuroimaging data.

TABLE 1

Table 1. Strengths and weaknesses of each method in behavioral prediction scenarios.

Method

In this paper, we will use our own data to demonstrate the usage of the above-mentioned data-driven approaches for whole-brain analyses, illustrating how whole-brain data-driven approaches can provide additional insight into cognition studies.

General Procedure

Sixty-five participants (33 males) came into the lab to complete two cognitive tasks. The first was a problem-solving task consisting of math problems and performance feedback. The second was a memory test that indexed what participants saw during the problem-solving task. Electroencephalogram (EEG) activity was recorded throughout the entire experimental session. Participants were seated in a sound-proofed chamber, set up with an EEG cap, and instructed to begin the task, which was displayed on a computer screen in front of them. Participants moved through the task by pressing buttons on a button box placed in their laps. This setup minimizes movement from the participant, which can contribute to noisy neural data. The problem-solving task was a 34-min math task consisting of standard multiplication and division problems (e.g., 10 × 20 =) that initial pilot tests confirmed varied in difficulty, ensuring all participants would solve some problems correctly and some incorrectly. During each trial, participants were given three answer choices below each problem (A, B, or C) presented on the screen, with the answer to each problem randomly presented in one of the three answer positions on each trial. Participants made all answer selections via the button box placed in their laps and did not have scratch paper or a calculator. After each response, participants received feedback for 2 s that indicated whether their selected answer to the math problem was wrong or correct. To assess memory for feedback (indexed in the second task), the words “Wrong” or “Correct” were presented in a novel font on every trial (see Forbes et al., 2015; Forbes et al., 2018 for examples). Participants were given 16 s to solve each problem. If participants were unable to answer a problem within that time frame, they received negative feedback (i.e., “Wrong” appeared on the screen). Participants completed an average of 83.9 problems. The present data are ideal for this review because they ensured participants went through myriad cognitive processes while live EEG was recorded. Furthermore, the repetitive nature of the math and memory tests (further described below) ensured enough trials per stimulus to test these advanced methods.

Memory Test

As in Forbes et al. (2018), participants were presented with a surprise memory test containing 400 trials after the problem-solving task while EEG data was recorded simultaneously. Among the 400 trials, participants were presented with each font/feedback pairing they had previously seen during the problem-solving task, i.e., each performance feedback stimulus that was shown for 2 s, with the remaining trials acting as “lures.” During each trial, participants were randomly presented with the words “wrong” or “correct” written in one of the 200 different fonts in the middle of a computer screen. A scale was presented below each font/feedback combination and participants were asked to indicate whether they had seen the combination during the problem-solving task using a six-point scale (1 = I know I didn’t see it, 4 = I think I saw it, 6 = I know I saw it). If participants were presented with a previously seen font, responses of 4–6 were classified as hits, and responses of 1–3 were classified as misses. If participants were presented with a novel font, responses of 4–6 were classified as false alarms, and responses of 1–3 were classified as correct rejections. Using these classifications, we calculated d-prime scores to measure participants’ ability to accurately discriminate seen from unseen fonts. Prior research suggests that the d-prime score is a more sensitive assessment of memory effects because it accounts for guessing (Wickens, 2001). To calculate d-prime scores, z scores for false alarm rates were subtracted from z scores for hit rates. Because z scores cannot be calculated for rates of 0 or 1, participants without hits were given scores of 0.1 and participants with perfect scores were given scores of 0.9. Larger d-prime scores therefore indicate that participants were better at discriminating between previously seen fonts and lures, i.e., had more accurate memory recall. Within the results presented in this review, d-prime serves as a proxy for memory accuracy.
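As a concrete illustration, a minimal Python sketch of this d-prime computation is given below; the trial counts in the usage example are hypothetical, and the 0.1/0.9 adjustment mirrors the correction described above.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' from response counts, clipping extreme rates as described above."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    # z scores are undefined for rates of 0 or 1, so clip to 0.1 / 0.9
    hit_rate = min(max(hit_rate, 0.1), 0.9)
    fa_rate = min(max(fa_rate, 0.1), 0.9)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical participant: 200 previously seen fonts, 200 lures
print(d_prime(hits=150, misses=50, false_alarms=60, correct_rejections=140))
```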

Electroencephalogram Recording

Continuous EEG activity was recorded using an ActiveTwo head cap and the ActiveTwo Biosemi system (BioSemi, Amsterdam, Netherlands). Recordings were collected from 128 Ag-AgCl scalp electrodes and from bilateral mastoids. Two electrodes were placed next to each other 1 cm below the right eye to record startle eye-blink responses. A ground electrode was established by BioSemi’s common Mode Sense active electrode and Driven Right Leg passive electrode. EEG activity was digitized with ActiView software (BioSemi) and sampled at 2,048 Hz. Data was downsampled post-acquisition and analyzed at 512 Hz.

Electroencephalogram Preprocessing

For performance feedback analyses, the EEG signal was epoched and stimulus locked from 500 ms pre-feedback presentation to 2,000 ms post-feedback presentation. For memory test analyses, the EEG signal was epoched and stimulus locked from 500 ms pre-performance feedback presentation (previously seen font/feedback combinations or lures) to 1,000 ms post-feedback presentation. EEG artifacts were removed via FASTER (Fully Automated Statistical Thresholding for EEG artifact Rejection) (Nolan et al., 2010), an automated approach to cleaning EEG data that is based on multiple iterations of independent component and statistical thresholding analyses. Specifically, raw EEG data was initially filtered through a band-pass FIR filter between 0.3 and 55 Hz. Then, EEG channels with significantly unusual variance (absolute z score larger than 3), mean correlations with other channels, or Hurst exponents were removed and interpolated from neighboring electrodes using a spherical spline interpolation function. EEG signals were then epoched and baseline corrected; epochs with significantly unusual amplitude range, variance, or channel deviation were removed. The remaining epochs were then transformed through independent component analysis (ICA). Independent components with significantly unusual correlations with EOG channels, spatial kurtosis, slope in the filter band, Hurst exponent, or median gradient were subtracted, and the EEG signal was reconstructed from the remaining independent components. In the last step, EEG channels within single epochs with significantly unusual variance, median gradient, amplitude range, or channel deviation were removed and interpolated from neighboring electrodes within the same epochs.
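For readers who want to reproduce this kind of pipeline, the sketch below shows the filtering, downsampling, and epoching steps in MNE-Python under assumed settings (the file name and event code are hypothetical); the FASTER artifact rejection itself is performed with its own toolbox and is not shown.

```python
import mne

# Hypothetical file name; BioSemi recordings are typically stored as .bdf files
raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)
raw.resample(512)                      # downsample from 2,048 Hz to 512 Hz
raw.filter(l_freq=0.3, h_freq=55.0)    # band-pass FIR filter, 0.3-55 Hz

# Stimulus-locked epochs around feedback onset (-500 ms to +2,000 ms)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"feedback": 1},
                    tmin=-0.5, tmax=2.0, baseline=(-0.5, 0.0), preload=True)
```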

Source Reconstruction

All a priori sources used in network connectivity analyses were identified and calculated via forward and inverse models in MNE-Python (Gramfort et al., 2013, 2014). The forward model solutions for all source locations on the cortical sheet were computed using a 3-layer boundary element model (BEM) (Hamalainen and Sarvas, 1989) constrained by the average anatomical MNI template. Cortical surfaces extracted with FreeSurfer were sub-sampled to approximately 10,240 equally spaced vertices per hemisphere. The noise covariance matrix for each individual was estimated from the pre-stimulus EEG recordings after preprocessing. The forward solution, noise covariance, and source covariance matrices were used to calculate the dynamic statistical parametric mapping (dSPM) inverse operator (Dale et al., 1999, 2000). The inverse computation was done using a loose orientation constraint (loose = 0.2, depth = 0.8) (Lin et al., 2006). Through depth weighting and noise normalization, dSPM inverse operators have been reported to help characterize distortions in cortical and subcortical regions and to improve the localization accuracy of neural generators in deeper structures, e.g., the insula (Attal and Schwartz, 2013). The cortical surface was divided into 68 anatomical regions (i.e., sources) of interest (ROIs; 34 in each hemisphere) based on the Desikan–Killiany atlas (Desikan et al., 2006), and the signal within a seed voxel of each region was used to calculate the power within sources and the phase locking (connectivity) between sources.
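The sketch below outlines how such a dSPM source reconstruction could be set up in MNE-Python using the fsaverage template; the exact source spacing, regularization, and file handling are assumptions for illustration, and `epochs` refers to the preprocessed data from the previous step.

```python
import os.path as op
import mne
from mne.datasets import fetch_fsaverage
from mne.minimum_norm import make_inverse_operator, apply_inverse_epochs

# Template anatomy (MNI/fsaverage) shipped with MNE
fs_dir = fetch_fsaverage(verbose=False)
subjects_dir = op.dirname(fs_dir)

src = mne.setup_source_space("fsaverage", spacing="oct6", subjects_dir=subjects_dir)
bem = mne.make_bem_solution(
    mne.make_bem_model("fsaverage", subjects_dir=subjects_dir))       # 3-layer BEM
fwd = mne.make_forward_solution(epochs.info, trans="fsaverage", src=src, bem=bem)

noise_cov = mne.compute_covariance(epochs, tmax=0.0)                   # pre-stimulus noise
inv = make_inverse_operator(epochs.info, fwd, noise_cov, loose=0.2, depth=0.8)
stcs = apply_inverse_epochs(epochs, inv, lambda2=1.0 / 9.0, method="dSPM")

# Average source activity within the 68 Desikan-Killiany labels
labels = mne.read_labels_from_annot("fsaverage", parc="aparc", subjects_dir=subjects_dir)
label_ts = mne.extract_label_time_course(stcs, labels, src, mode="mean_flip")
```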

Functional Connectivity Estimation and Network Construction

Frequency coupling was calculated within identical frequency bands and temporal periods between all pairs of nodes. Phase locking values (PLV) (Lachaux et al., 1999), which measure the consistency of the phase difference between two signals across trials, were used to define connectivity strength. In other words, for every participant, condition, and frequency band, we obtained a symmetric 68 × 68 adjacency matrix representing all pairs of nodes – or edges – in each participant’s whole-brain network during a given period. For the memory task period, PLVs were averaged over the first 500 ms after the memory test stimulus appeared. For the resting state period, PLVs were averaged over the first 500 ms after the onset of the initial fixation cross.
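A minimal NumPy sketch of this PLV computation is shown below; it assumes the source time courses have already been band-pass filtered into the frequency band of interest, and variable names are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def plv_matrix(epochs_data):
    """PLV between all pairs of sources.

    epochs_data: array of shape (n_trials, n_sources, n_times), band-pass
    filtered in the frequency band of interest.
    """
    phase = np.angle(hilbert(epochs_data, axis=-1))          # instantaneous phase
    n_trials, n_sources, n_times = phase.shape
    plv = np.zeros((n_sources, n_sources))
    for i in range(n_sources):
        for j in range(i + 1, n_sources):
            diff = phase[:, i, :] - phase[:, j, :]
            # |mean over trials of exp(i*delta_phi)|, averaged over the time window
            plv[i, j] = plv[j, i] = np.abs(np.exp(1j * diff).mean(axis=0)).mean()
    return plv    # symmetric n_sources x n_sources adjacency matrix
```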

Graph Theory

Graph theory is one of the earliest approaches used to analyze large-scale brain networks in cognitive studies. Graph theory allows researchers to integrate multiple regions in an analysis to describe neural network architecture from a global view. As mentioned previously, in the realm of social and behavioral neuroscience, neural activity has often been conceptualized by investigating region-based activity. Graph theory, however, allows one to capture a more holistic description of the brain by observing the connectivity and neural architecture between regions, either in a specific a priori network or across the whole brain, instead of focusing on one area specifically. Within graph theory, modularity, efficiency, and network hubs are standard measures used to observe the underlying neural architecture behind various cognitive states and behaviors. We break these measures down below (Figure 1).

FIGURE 1

Figure 1. Typical architectural features of functional brain networks. (A) The simplest model is entirely random structure. (B) Networks with modular structure, divided into communities with dense connectivity. (C) Small-world networks, which balance efficient communication and high clustering. (D) Networks with hub structure, characterized by a heavy-tailed degree distribution.

Community Structure

Functional segregation within whole-brain networks and subnetworks plays an essential role in the representation of cognitive states and can be defined by modularity. Modularity quantifies the degree to which a network can be divided into groups of densely connected nodes, or modules (Girvan and Newman, 2002; Bullmore and Sporns, 2009). Modules in functional brain networks are thought to represent groups of brain regions that are collectively involved in one or more cognitive domains. Importantly, regions that are anatomically or functionally close to one another are likely to be members of the same cluster or module and to share information with one another.

Quantifying modularity allows researchers to operationalize how neural activity is configured across the brain. Modularity is often calculated based on hierarchical clustering, in which smaller groups of nodes are organized into larger clusters while maintaining a scale-free topology (Girvan and Newman, 2002). If a network has high modularity, it can be said to be more functionally segregated, i.e., a subnetwork of nodes within a given network has higher connectivity within itself than it does with the rest of the network (Kim et al., 2020). However, how this subnetwork is segregated in relation to the rest of the brain is not quantified by this measure of modularity. Usually, a brain network module that is psychologically meaningful (e.g., the working memory network) can be widely distributed among several anatomical modules across the brain.
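As one illustration, the sketch below computes a modularity score from a PLV adjacency matrix using NetworkX’s greedy (agglomerative) community detection; the file name is hypothetical, and other community detection algorithms could equally be used.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

adj = np.load("plv_matrix.npy")          # hypothetical 68 x 68 weighted PLV matrix
np.fill_diagonal(adj, 0.0)               # no self-connections
G = nx.from_numpy_array(adj)

# Detect modules and quantify how cleanly the network separates into them
communities = community.greedy_modularity_communities(G, weight="weight")
Q = community.modularity(G, communities, weight="weight")
print(f"{len(communities)} modules, modularity Q = {Q:.3f}")
```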

Efficiency

Complex whole-brain networks promoted by higher modularity are often more stable and synchronous (Papo et al., 2014). Another important characteristic of cognitive states that can be defined by graph theory is efficiency, i.e., a high output of information transfer at a low connection cost between nodes (Stanley et al., 2015; Cohen et al., 2016). Global efficiency varies between cognitive states and has ramifications for numerous cognitive processes. For example, increased efficiency has positive effects on processes such as working memory and spatial orientation ability (Stanley et al., 2015). In these cases, greater neural network efficiency was associated with increases in both working memory and spatial orientation ability. Greater efficiency has also been correlated with better executive task performance and intelligence (Bassett et al., 2009; Li et al., 2009; Van Den Heuvel et al., 2009). These studies all demonstrate the importance of efficiency in representing high-functioning cognitive states.

There are a few ways to operationalize efficiency (Rubinov and Sporns, 2010). The most common is small-worldness. Small-worldness describes a network that is highly clustered but has short characteristic path lengths (Bullmore and Bassett, 2011). This small-world-like structure gives networks a unique property: they combine regional specialization with efficient information transfer across broader regions. If a whole-brain network has a high degree of small-worldness, one can infer that although it shows regional segregation, whole-brain information transfer is still efficient. Network efficiency can also be gauged by analyzing global network efficiency (GNE). Global network efficiency is a graph theory-based measure that offers perspective on complex mental tasks that we expect to elicit widespread reorganization in the brain (Forbes et al., 2018), i.e., whole-brain reorganization. During cognitively demanding processes requiring more reciprocal communication between remote, specialized areas, an efficient network organization may dynamically facilitate better coordination.
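A sketch of how small-worldness and global efficiency could be estimated with NetworkX is given below; the binarization threshold is an assumption, `adj` is the illustrative matrix from the previous example, and small-world estimation against random reference graphs can be slow for dense networks.

```python
import numpy as np
import networkx as nx

# Binarize the weighted adjacency matrix at an illustrative threshold
adj_bin = (adj > 0.3).astype(int)
np.fill_diagonal(adj_bin, 0)
G = nx.from_numpy_array(adj_bin)

# Global network efficiency: average inverse shortest-path length
gne = nx.global_efficiency(G)

# Small-worldness sigma = (C / C_rand) / (L / L_rand); sigma > 1 suggests small-world structure
sigma = nx.sigma(G, niter=5, nrand=5)
print(f"global efficiency = {gne:.3f}, small-worldness = {sigma:.2f}")
```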

Network Hubs

All cognitive states require the integration of distributed neural activity across the whole brain. However, it is often the case that specific nodes within these neural networks drive this activity. Utilizing graph theory, analyses can identify the key network hubs that are essential for neural communication and integration. Understanding these hubs provides essential information about the underpinnings of complex cognitive states; functional segregation and specialization are essential for cognitive function. Two types of hubs are essential in describing these cognitive states. If one is interested in a single subnetwork in the brain, one can examine provincial hubs, which are mainly connected to nodes within their own network modules. If one is more interested in whole-brain states, one can examine connector hubs, which are highly connected to nodes in other network modules and thus speak more to whole-brain connectivity (Guimera and Amaral, 2005). Numerous studies have noted the existence of specific sets of hub regions in various cognitive states and brain developmental stages (Xu et al., 2021). Specifically, for global communication processes, the precuneus, insular, superior parietal, and superior frontal regions have been cited as network hubs integral to multiple cognitive processes (Iturria-Medina et al., 2008). Observing network hubs in resting state brain activity has also provided evidence that communication within the human brain is not random, but rather is organized to maximize efficient global processing. It has also been found that the most pronounced functional connections occur between network hubs that share a common function (Honey et al., 2009).
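One common way to separate provincial from connector hubs is the participation coefficient of Guimera and Amaral (2005); a small sketch is shown below, assuming a weighted adjacency matrix and module labels obtained from a community detection step such as the one above, and assuming every node has at least one connection.

```python
import numpy as np

def participation_coefficient(adj, module_labels):
    """Participation coefficient P_i = 1 - sum_m (k_im / k_i)^2 for each node.

    High-strength nodes with low P are provincial hubs (connected mostly within
    their own module); high-strength nodes with high P are connector hubs.
    """
    strength = adj.sum(axis=1)
    p = np.ones(len(adj))
    for m in np.unique(module_labels):
        k_within = adj[:, module_labels == m].sum(axis=1)
        p -= (k_within / strength) ** 2
    return p
```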

Pros and Cons to Graph Theory

Graph theory is an invaluable tool for quantifying the brain’s network architecture in relation to cognition and behavior through measures of modularity, efficiency, and network hubs. For example, graph theory allows us to obtain a global, or whole-brain, view of the brain’s configuration during a given cognitive task, providing deeper understanding of specific network and regional measures. Given the highly adaptive nature of brain network organization to task demands, this not only indicates the extent to which processes draw on multiple functional organizations, but also provides information about the actual role of the individual communities or sub-networks themselves. Yet these measures still have their limitations. One critical limitation is that the neuro-cortical interpretation of global graph theoretical measures is ambiguous, especially when researchers associate them with human behavioral or cognitive task scores. For example, results can often suggest that one type of stimulus is more associated with a global network measure than other types. The reasons behind these associations are often unclear; one cannot determine which brain regions are more efficiently connected to others and which regions are not. Even for subjects with equivalent brain network efficiencies, biases toward specific stimuli may be caused by different network organizations. Thus, comparing global measures is not ideal for determining the cognitive mechanisms behind stimulus bias. Instead, it may be more useful to contextualize broader graph theory findings by supplementing them with other network analysis methods that target activity in smaller networks, regions, or relationships with behavior.

Real Data Example

Memory retrieval draws upon multiple functional processes (vision, memory, reasoning, etc.). Because memory retrieval relies on multiple functional processes, it can be expected that more brain regions would communicate with one another during memory retrieval than at rest. Thus, we hypothesized that memory retrieval (remembering unique performance feedback previously displayed during the problem-solving task) would elicit greater global network efficiency in connection with retrieval accuracy (i.e., a more efficient brain network would support better memory performance). Global network efficiency was operationalized using small-worldness, which quantifies whole-brain neural efficiency. Linear regressions were conducted between small-worldness and memory d-prime scores (operationalizing memory accuracy as stated in the Methods). Significant effects were found between small-worldness and d-prime scores for feedback fonts during the retrieval task for all frequency bands (Theta: β = 1.33, F[1,70] = 4.50, R2 = 0.06, p = 0.027; Alpha: β = 1.95, F[1,70] = 7.73, R2 = 0.10, p = 0.007; Beta: β = 1.92, F[1,70] = 5.92, R2 = 0.08, p = 0.017; Gamma: β = 1.62, F[1,70] = 4.98, R2 = 0.07, p = 0.027). All frequency bands demonstrated a positive relationship, suggesting that the greater the global efficiency, the better the memory recall. This finding supports our hypothesis that global network efficiency may positively support cognitive task performance.

Connectome-Based Predictive Modeling

Establishing a reliable neural-behavior relationship is pivotal in modern cognition studies. Graph theory only provides a “qualitative” evaluation of the relationship between brain network organization and human behaviors. In other words, graph theory investigates brain networks from the perspective of topological organization. However, it is often more important to understand exactly which communications, between which pairs of brain regions, contribute to cognitive functions. To date, the establishment of reliable neural-behavior relationships remains challenging and a prominent question (Poldrack, 2010; Barch et al., 2013) in neuroimaging studies. Connectome predictive modeling (CPM) can provide insight.

Connectome predictive modeling (CPM; Shen et al., 2017) can reveal the nuances of activity within subnetworks and across the whole brain using fully data-driven analyses that allow researchers to examine neural activity related to behavior without any prior bias, i.e., without predefined brain regions based on past literature (see the description of whole-brain meta-analyses in the Supplementary Material for further detail). This approach also provides the opportunity to find relationships between novel connectivity and behavioral scores by exploring every single connection within the whole-brain network.

Connectome predictive modeling is particularly useful for predicting human behavior scores. The first step in CPM is to examine each functional connection between all brain regions in the whole-brain network to observe whether it correlates with behavior scores. Often, connections whose correlations reach a p = 0.001 threshold are considered meaningful at this initial step (Rosenberg et al., 2016). Next, a linear model is built to maximize the fit between the summation of selected functional connectivity values and behavior scores. In the last step, CPM uses the linear model to predict behavior scores in new individuals. CPM does not depend on sophisticated mathematical measures; instead, it discovers meaningful patterns using simple linear regression models. Because of this, it is especially helpful for psychologists, neuroscientists, and clinicians (Cheng et al., 2021) who may have limited background knowledge of more complicated quantitative approaches such as multivariate pattern analysis (MVPA) and machine learning techniques.

Because of the linear regression approach of CPM, an important aspect of the method is validation. With so many connections tested, the threshold of p = 0.001 (Rosenberg et al., 2016; or the threshold chosen by the researcher) during the first step may not be enough to filter out neural noise (i.e., false positives). Thus, validation is needed. According to Rosenberg et al. (2015, 2016, 2017), CPM validation can be internal or external. Internal validation requires results to be validated using a leave-one-subject-out or multi-fold cross-validation procedure. In these two procedures, significant associations between behavior and neural connectivity are identified in all subjects except those that are left out. A linear model is then built to best fit the relationship between network connectivity strength and the behavior score. Next, the left-out participant’s network strength is entered into the linear model and a predicted behavior score is generated for that participant. This step is repeated for all participants in the group. If the participants’ predicted behavior scores and original behavior scores are significantly correlated, it suggests that the connectome feature and model are able to predict a novel individual’s (or the left-out subject’s) behavior scores. External validation takes a different approach using a train-test method: it tests the model, built on one set of data, on a completely independent set of data. For this reason, this type of validation has been described as more rigorous and more generalizable, i.e., able to predict behaviors of unseen subjects without overfitting (Cohen et al., 2020; Boutet et al., 2021). These steps are illustrated in Figure 2.
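The sketch below illustrates the core CPM steps (edge selection, network-strength summarization, linear model, and leave-one-subject-out validation) for the positive tail only; variable names, the p = 0.001 threshold handling, and the assumption that at least one edge survives selection in every fold are illustrative simplifications.

```python
import numpy as np
from scipy.stats import pearsonr

def cpm_loocv(edges, behavior, p_thresh=0.001):
    """Leave-one-subject-out CPM for the positive tail.

    edges: (n_subjects, n_edges) vectorized upper-triangle connectivity
    behavior: (n_subjects,) behavioral scores
    """
    n = len(behavior)
    predicted = np.zeros(n)
    for s in range(n):
        train = np.arange(n) != s
        # Step 1: correlate every edge with behavior in the training subjects
        r_p = np.array([pearsonr(edges[train, e], behavior[train])
                        for e in range(edges.shape[1])])
        pos_edges = (r_p[:, 0] > 0) & (r_p[:, 1] < p_thresh)
        # Step 2: summarize each training subject by the strength of the selected edges
        strength = edges[train][:, pos_edges].sum(axis=1)
        slope, intercept = np.polyfit(strength, behavior[train], 1)
        # Step 3: predict the left-out subject from the same edges
        predicted[s] = slope * edges[s, pos_edges].sum() + intercept
    r, p = pearsonr(predicted, behavior)   # internal validation of the model
    return predicted, r, p
```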

FIGURE 2

Figure 2. The CPM approach identifies functional connectivity networks that are related to behavior and measures strength in these networks in previously unseen individuals to make predictions about their behavior. First, every participant’s whole-brain connectivity pattern is calculated by correlating the fMRI activity time courses of every pair of regions, or nodes, in a brain atlas. Next, behaviorally relevant connections are identified by correlating every connection in the brain with behavior across subjects. Connections that are most strongly related to behavior in the positive and negative directions are retained for model building. A linear model relates each individual’s positive network strength (i.e., the sum of the connections in their positive network) and negative network strength (i.e., the sum of the connections in their negative network) to their behavioral score. The model is then applied to a novel individual’s connectivity data to generate a behavioral prediction. Predictive power is assessed by correlating predicted and observed behavioral scores across the group.

Given the predictive nature of validation, CPM can also be applied to behavior prediction hypotheses. That is, a unique pattern of neural activation associated with specific behaviors can be used to predict behavior scores across different cognitive states or populations, helping to detect cognitive state and population differences. For example, using CPM, Rosenberg et al. (2017) identified a functional brain network whose connectivity strength predicted individual differences in sustained attention performance. The identified network generalized to previously unseen individuals recruited from both the United States and China, as well as to children and adolescents. The identified network was also able to predict sustained attention scores quantified from multiple cognitive tasks designed to measure an individual’s attention capability, using both resting-state and task-evoked brain states. Together, these results suggest that this network may be a generalized model for sustained attention. Yet CPM provided nuanced details by revealing novel connectivity between regions within the network that had not previously been related to attention capability. In another example, Liu et al. (2021b) applied CPM in a more sophisticated manipulation. The experiment had participants complete challenging math problems in stressful and normal contexts. Results suggested that a resting-state network, revealed by CPM after examining the whole-brain connections, directly related to worse math performance in the stress group. The network did not successfully predict math performance in control contexts. Results suggest that stressful situations may impede the brain network’s transition from a maladaptive state at rest to a more adaptive state during the cognitive tasks.

Pros and Cons of Connectome Predictive Modeling

Unlike graph theoretical approaches, CPM provides an option to investigate the relationship between brain connectivity and behaviors within specific brain regions of interest, and across the whole brain, using a data-driven approach. Indeed, in conventional hypothesis-driven approaches for exploring specific cognitive functions, researchers often intuitively search for brain regions related to a cognitive function in a meta-analysis database, and then check whether connectivity, or any type of network organization, in these brain regions is associated with behavioral scores. CPM provides another solution: it allows a data-driven search to discover a predictive functional network. CPM’s strength is its ability to synthesize neural and behavioral activity, making it an integral tool for cognitive and behavioral neuroscientists. However, one downfall of CPM is that the functional networks discovered may lie in brain regions without any direct association with the task at hand. Combating this issue is still an ongoing process (Rosenberg et al., 2017). Nevertheless, findings are considered meaningful due to CPM’s powerful ability to predict behavior.

Real Data Example

In our dataset, we used CPM during memory retrieval to predict memory accuracy for the wrong and correct feedback stimuli presented during the problem-solving task. Regressions between each edge in the connectivity matrices during memory retrieval and behavioral memory performance scores for correct and wrong feedback stimuli were computed across n-1 subjects and used to assess the relevance of functional connections to behavior. The p-value from each regression between neural connectivity and behavioral outcome (memory score for the presented feedback) was recorded in a 68 × 68 symmetric matrix (see the “Methods” section for a more detailed description of the symmetric matrix) for each frequency band, resulting in 2,244 × 4 = 8,976 p-values in total. To find the most significant associations between specific connectivity and the memory scores for both correct and wrong performance feedback stimuli, as well as to control for multiple comparisons, the resulting p-values were held to a 0.001 threshold (Rosenberg et al., 2016) as described above. A single summary statistic, network strength, was used to characterize each participant’s degree of connectivity by averaging all edges that survived the threshold. To ensure our results pertained to positive effects on memory performance, we only included edges in the positive tail; these edges represent a positive effect on memory performance. The identified network, constructed from the significant positive edges, was then applied to the left-out participant for both correct and wrong performance feedback stimuli to test its predictive power for memory scores. This procedure was repeated n times via leave-one-out cross-validation to validate the discovered network.

Results show that CPM successfully identified functional networks that significantly predicted memory scores for correct and wrong performance feedback stimuli, respectively (correct: β = 0.47, F[1,70] = 33.13, R2 = 0.32, p < 0.0001; wrong: β = 0.51, F[1,70] = 46.30, R2 = 0.40, p < 0.0001). In addition, to investigate whether memory retrieval performance for correct and wrong performance feedback stimuli relies on the same functional network, networks identified in each leave-one-out round were also used to predict the memory performance of the left-out participant in the alternative memory condition, i.e., functional networks found for memory of correct performance feedback stimuli were used to predict memory of wrong performance feedback stimuli, and vice versa. Results indicated that networks found for correct performance feedback stimuli and networks found for wrong performance feedback stimuli could not be used interchangeably for prediction, suggesting that the memory retrieval processes for correct and wrong performance feedback stimuli may rely on different connectivity and, in turn, different neural mechanisms.

Multi-Voxel Pattern Analysis (MVPA) in Brain Networks

Multi-voxel pattern analysis (MVPA) seeks to enhance the sensitivity of detecting neural representations and cognitive states by looking at the contributions of activity from all regions of the brain (Norman et al., 2006; Mill et al., 2017). MVPA also has the capability to establish more reliable and generalized neural patterns that correspond to cognitive functions. In other words, MVPA focuses on the statistical nature of mental representations in available data and how reliably this representation can be mapped to novel and unseen data. This is exactly what machine learning techniques revolve around. Hence, MVPA is considered a technique that leverages machine learning and multivariate statistics to identify cognitive states from distributed neural activity (Haxby et al., 2001). To achieve categorization, a common approach involves removing a part of the available data and using it to test the categorizations built on the remaining data. This train-test method is a form of cross-validation. Another key step in MVPA is to establish a model that statistically describes the relations between neural representations and cognitive states in the available data. This relies on advanced statistical techniques, e.g., non-linear fitting, support vector machines, artificial neural networks, and deep learning (for details about deep learning please see the deep learning in brain networks section).

The first MVPA study in cognitive neuroscience (Haxby et al., 2001) illustrated that activity collapsed across multiple voxels can be used in a well-trained model to distinguish which object categories subjects were viewing. Currently, there are many ways to conduct MVPA analyses in cognitive science. The main categories of MVPA are classification (including regression, which can be considered classification with a continuous cognitive output) and representational learning (RSL). Although MVPA is traditionally conducted on individual neural regions, it has recently been implemented with functional connectivity and networks (Anzellotti and Coutanche, 2018).

In network-based classification, MVPA usually starts with a feature selection step, which utilizes a t-test (e.g., Wei et al., 2018) or an F-test (e.g., Abraham et al., 2014) to statistically test differences in functional connectivity across the whole brain. Significant test values are used to identify the connections with the most substantial differences in connectivity. A classifier is then built from these values to categorize cognitive states based on the selected connectivity. For example, Dosenbach and colleagues (Dosenbach et al., 2010) successfully separated children’s and adults’ brains using functional network MVPA in resting state. This work was further developed, leading to a new branch in neuroscience – brain age prediction. Indeed, Li et al. (2018) successfully predicted subjects’ brain age using resting-state functional networks and MVPA regression. It is worth noting that feature selection in MVPA using a data-driven approach, e.g., a t-test or F-test, also faces the problem of interpretation, and results may be ambiguous. A good MVPA model not only yields good classification and prediction output, but also labels the brain representations well, i.e., the selected brain features uncover the proper cognitive mechanisms for the cognitive study at hand. Indeed, Dosenbach et al. (2010) found that the weakening of short-distance connections and strengthening of long-distance connections may predict brain maturity. Although vague, this result is reasonable, as the integration of more distant brain regions (through long-distance connections) has been suggested to indicate the more complex cognitive functions present in older humans.
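A compact scikit-learn sketch of this feature selection plus classification workflow is shown below; the F-test-based selection at p < 0.05 and the linear SVM are assumptions chosen to match the general description, and placing selection inside the pipeline ensures features are chosen from training folds only.

```python
from sklearn.feature_selection import SelectFpr, f_classif
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: (n_samples, n_edges) vectorized connectivity, y: condition labels,
# groups: subject IDs so that each fold leaves one subject out
clf = make_pipeline(SelectFpr(f_classif, alpha=0.05),   # keep edges with p < 0.05
                    SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean leave-one-subject-out accuracy: {scores.mean():.3f}")
```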

Another critical MVPA approach is representational similarity analysis (RSA) (Kriegeskorte et al., 2008). RSA is a multivariate method that can be used to extract information about distributed patterns of representations across the brain (Dimsdale-Zucker and Ranganath, 2018). Rather than trying to categorize neural representations into corresponding cognitive states, as in MVPA classification, RSA uses the representational distances (or, more generally, dissimilarities) between the neural activity patterns associated with cognitive states as summary statistics (Mack et al., 2013; Diedrichsen and Kriegeskorte, 2017). Once distances are measured among several cognitive states, they can be assembled into a representational dissimilarity matrix (RDM), in which the representational distance (or dissimilarity) between any pair of cognitive states can be indexed and further deciphered. These types of analyses are called representational geometry (Kriegeskorte and Kievit, 2013; Freeman et al., 2018).

In an RSA analysis, the first step is to choose a brain feature (e.g., connectivity of interest) and estimate the activity pattern. The second step is to estimate the RDM. The most commonly used distance measure is the correlation distance (1 minus the Pearson correlation across the selected features), but other distance measures such as the Euclidean or Mahalanobis distance can also be used. The final step is to compare RDMs to assess the extent to which different representations are alike (Aguirre, 2007; Kriegeskorte and Kievit, 2013). Depending on the cognitive states of interest, comparison of representational distances can detect different cognitive stimuli (Beaty et al., 2020), individual differences (Chen et al., 2020), neural plasticity (Fischer-Baum et al., 2017), disease abnormalities (Cauda et al., 2014), longitudinal brain development (Schwartz et al., 2021), and even representational alterations across time periods (Kobelt et al., 2021). An illustration of the technique is displayed in Figure 3.
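The sketch below shows how an RDM could be assembled with the correlation distance and compared to a second RDM; `patterns`, `rdm_neural`, and `rdm_model` are illustrative placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

# patterns: (n_conditions, n_features) activity or connectivity pattern per condition
rdm_neural = squareform(pdist(patterns, metric="correlation"))   # 1 - Pearson r per pair

# Compare two RDMs (e.g., neural vs. a model RDM) on their upper triangles
iu = np.triu_indices_from(rdm_neural, k=1)
rho, p = spearmanr(rdm_neural[iu], rdm_model[iu])
print(f"RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```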

FIGURE 3

Figure 3. Representational similarity analysis. Illustration of the steps of RSA for a simple design with six visual stimuli. (A) Stimuli (or, more generally, experimental conditions) are assumed to elicit brain representations of individual pieces of content (e.g., visual objects). Here, the representation of each item is visualized as a set of voxels (an fMRI region of interest) that are active to different degrees (black-to-red color scale). We compute the dissimilarity for each pair of stimuli, for example using 1–correlation across voxels. (B) The representational dissimilarity matrix (RDM) assembles the dissimilarities for all pairs of stimuli (blue-to-red color scale for small-to-large dissimilarities). The matrix can be used like a table to look up the dissimilarity between any two stimuli. The RDM is typically symmetric about a diagonal of zeros (white entries along the diagonal). RDMs can similarly be computed from stimulus descriptions (bottom left), from internal representations in computational models (bottom right), and from behavior (top right). By correlating RDMs (black double arrows), we can then assess to what extent the brain representation reflects stimulus properties, can be accounted for by different computational models, and is reflected in behavior. Adapted with permission from Kriegeskorte and Kievit (2013).

Like MVPA classification, RSA was first applied to decoding brain patterns in individual neural regions, but it has since been applied to network activity across brain regions as well. Using this approach, RSA has shown robust findings when comparing fine-grained cognitive states (e.g., Beaty et al., 2018, 2020). RSA has even been suggested to be able to identify specific individuals. In a study conducted by Finn and colleagues (Finn et al., 2015), whole-brain functional networks were utilized to extract a “brain fingerprint,” a unique neural signature, for each individual. This signature is intrinsic and can be used to identify subjects regardless of which cognitive task the subject is performing. Brain fingerprinting may represent how different individuals process a variety of cognitive tasks, including their personalized strategies, habits, or normative behaviors (Tavor et al., 2016). Using RSA, Tavor and colleagues (Tavor et al., 2016) estimated whole-brain functional connectivity to represent each cognitive task for every individual subject. Each functional network was then associated with a single point in representational space, such that the researcher could map the distances between all the cognitive tasks’ FC networks in an RDM. Results suggested that points in the RDM were closest to all other points from the same subject, no matter which cognitive state those points were related to. Each individual could thus be identified using the RDM.

Representational similarity analysis of networks can also target more refined spatial patterns by analyzing regional networks (Cole et al., 2013; Mill et al., 2017). In a memory-related study, Xue and colleagues (Xue et al., 2010) found that subsequently remembered faces and words showed greater representational similarity in neural networks across several brain regions. This result suggested that successful memory encoding occurs when the same neural representations are more precisely reactivated across time, rather than when patterns of activation are more variable across time (Xue et al., 2010).

Pros and Cons of Multivariate Pattern Analysis in Brain Networks

Instead of decoding brain patterns from activity within individual brain voxels or regions, using MVPA on brain networks allows cognitive states to be recognized by decoding whole-brain functional connectivity. By incorporating information from multiple connectivity networks, MVPA provides a more precise method for examining differences in small and nuanced neural activation patterns that cannot be detected using classical MVPA analysis of single brain regions. The scope of MVPA has been expanded; any technique that manipulates distributed representation patterns in the brain can be considered a variation of MVPA (Cohen, 2017). For example, RSA provides a new way for researchers to compare cognitive states in a relative manner, by comparing the relations between the corresponding neural representations. MVPA approaches can also be used to decode dynamics in a network (Stokes et al., 2013; King and Dehaene, 2014). By evaluating the moment-to-moment variability of multivariate representations, insight into the timescale of task-related information in specific networks can be gained.

Setting up an optimal predictive model using MVPA, and finding cognitively meaningful features, still requires further validation in the context of brain networks. Due to the influx of advanced machine learning techniques, researchers sometimes utilize these methods without understanding the mathematical processes behind them. Although these methods provide new resources to researchers, because of their automated nature, researchers may not be able to detect mistakes.

Real Data Example

Multivariate Pattern Analysis – Classification

Multivariate pattern analysis using whole-brain network connectivity was used to test whether the memory retrieval processes for correct and wrong performance feedback stimuli could be recognized based on network connectivity. Whole-brain phase-locking values (PLV), measuring connectivity between all brain regions, were computed for each participant when they were shown the veridical correct and wrong feedback stimuli originally presented during the problem-solving task. Each participant’s whole-brain network during memory retrieval was used as one sample in the training dataset. Classifier training and testing were conducted using a leave-one-out cross-validation approach. Classifier training began with a feature selection step, in which t-tests (e.g., Cohen et al., 2016) were conducted on every functional connection between the correct and wrong feedback classes across the whole brain. Connections with p < 0.05 were selected as features representing the most distinct connectivity between the two conditions. This procedure was conducted for all frequency bands (theta, alpha, beta, and gamma). A support vector machine (SVM)-based classifier was then trained to maximally separate the two cognitive states (correct vs. wrong feedback stimuli) based on the selected connectivity. The trained classifier was then applied to classify the left-out participant using the same connections selected from the training dataset.

Results showed that overall, only 51.2% (1,000-permutation test: p = 0.774) of the cognitive states were accurately classified, meaning that memory retrieval for correct and wrong performance feedback stimuli could be recognized only at chance level. Results from the previous section (see details in the CPM section) indicate that memory retrieval processes differ between correct and wrong performance feedback stimuli; however, using MVPA, the analyses could not find direct evidence explaining how the brain performs distinctly during the two processes. This may suggest that the memory retrieval processes for correct and wrong performance feedback stimuli are subject to individual differences. These results also demonstrate how choosing the correct analytic technique is integral for neural analyses: CPM provided evidence that these retrieval processes differed, whereas MVPA demonstrated no significant differences.

To test whether whole-brain functional network-based MVPA could be effective for classification, we ran the same analysis to classify accurate and inaccurate memory retrieval, regardless of whether the performance feedback stimuli were correct or wrong. Specifically, each participant’s memory retrieval trials in which the performance feedback stimuli were accurately remembered (either correct or wrong performance feedback stimuli) were collapsed to construct one whole-brain functional network, and trials in which fonts were inaccurately remembered were collapsed to construct another network. MVPA classification with leave-one-out cross-validation was applied to classify each cognitive state. Results showed that overall, 77.6% (1,000-permutation test: p = 0.013) of the cognitive states were correctly classified. Together, the results suggest that accurate and inaccurate memory retrieval are more easily classified using brain networks than are the retrieval processes for correct versus wrong performance feedback stimuli.

Multivariate Pattern Analysis – Representational Similarity Analysis

As stated in Xue et al. (2010), memory encoding is enhanced by reactivating the initial neural representation in each subsequent study episode, and pattern reinstatement can account for subsequent memory effects in both recall and recognition tests. We hypothesized that brain pattern similarity between memory encoding and recall would be associated with participants’ memory accuracy scores. To test this, we first constructed a PLV-based adjacency matrix for each subject during memory encoding and another adjacency matrix during memory recall. Next, a Pearson correlation was computed between the two adjacency matrices, yielding an R value indicating the representational similarity between them. A linear regression analysis was then conducted between this R value and the overall memory score. Results suggested marginal effects in which memory scores were positively correlated with the representational similarity between encoding and retrieval for the theta, alpha, and beta frequency bands (Theta: β = 0.31, F[1,70] = 3.54, R2 = 0.055, p = 0.065; Alpha: β = 0.30, F[1,70] = 3.37, R2 = 0.051, p = 0.071; Beta: β = 0.28, F[1,70] = 3.37, R2 = 0.052, p = 0.070). A significant relationship was found in the gamma band (Gamma: β = 0.31, F[1,70] = 4.18, R2 = 0.066, p = 0.043). These results bolster the findings of Xue and colleagues (Xue et al., 2010) by suggesting that the more similar neural activity is during memory encoding and retrieval, the more accurate participants’ memory scores are.

Brain Network Dynamics

Traditional analyses that test functional connectivity in the brain operate under the assumption that the activation remains constant throughout the length of the recording (Allen et al., 2012). However, the human brain operates in a more sophisticated manner: its topological organization changes continuously, regardless of the cognitive process at hand (Tomescu et al., 2014; Chen et al., 2016; Vidaurre et al., 2018). These changes, which emerge over time scales spanning milliseconds to minutes, are non-random. Brain networks tend to exhibit a relatively stable status within a certain period of time, which is captured as a “brain state.” Thus, any brain process can be represented as a series of repeatedly emerging brain states that transition between one another in a temporally coordinated manner. Transitions between states are also non-random. Some brain states may act as transition hubs that temporally bridge other states together (Anderson et al., 2014; Taghia et al., 2018), or a transition may only occur between specific groups of brain states (or metastates; Vidaurre et al., 2017), suggesting that brain state sequencing is temporally organized.

Brain states, and the transitions between them, are shaped by numerous factors such as brain development, brain aging, or degenerative diseases (Wang et al., 2014). Thus, brain network dynamic analyses have been implemented in developmental work (Hutchison and Morton, 2015) and clinical diagnosis (Dadok, 2013; Ou et al., 2013; Wang et al., 2014). In cognitive tasks, there is also increasing evidence that human behavior depends both on the spatial topological patterns of brain networks and on the latent cognitive processes related to those networks. Different cognitive processes require the brain to change states: the brain must move out of its default temporal processes, defined by a particular state, and transition to a brief, task-efficient state. Moreover, these transitions out of the resting state are subject to individual differences, which demonstrate the value and richness of dynamic network analyses.

Network Dynamic Construction

The most widely used approach to characterize network dynamics is the sliding-window (or gradually tapered window) correlation between regions of interest (Di and Biswal, 2013; Kucyi and Davis, 2014; Lindquist et al., 2014; Zalesky et al., 2014). Time series data representing neural activity are input into the sliding window analysis. Within each window, connectivity is computed between each pair of time series truncated by the window, using a Pearson correlation coefficient. Pearson correlation coefficients are calculated repeatedly as the sliding window moves across the time series data. When the connectivity from all windows is concatenated, a set of connectivity matrices – a dynamic functional connectome representing the temporal evolution of whole-brain functional connectivity – is obtained (Calhoun et al., 2014; Preti et al., 2017). An illustration is provided in Figure 4.
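A minimal sketch of the sliding-window construction is given below; the window length and step size are free parameters that must be chosen for the data at hand, and the commented usage values are illustrative.

```python
import numpy as np

def sliding_window_fc(ts, win_len, step):
    """Dynamic functional connectome from a (n_times, n_regions) time series.

    Returns an array of shape (n_windows, n_regions, n_regions) containing one
    Pearson correlation matrix per window position.
    """
    n_times, _ = ts.shape
    starts = range(0, n_times - win_len + 1, step)
    return np.stack([np.corrcoef(ts[s:s + win_len].T) for s in starts])

# Example: 2-s windows advanced in 250-ms steps at a 512 Hz sampling rate
# dfc = sliding_window_fc(label_ts, win_len=1024, step=128)
```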

FIGURE 4

Figure 4. Hidden Markov Model (HMM) network analysis (B) as opposed to sliding-window network analysis (A). Whereas the sliding window has a fixed width and ignores the data beyond its boundaries, the HMM automatically finds, across the entire dataset, all the network occurrences that correspond to a given state, enhancing the robustness of the estimation (because it has more data than a single window) and adapting to inherent network timing in a data-driven manner. In this example, the states themselves reflect unique spatial patterns of oscillatory envelopes and envelope couplings that consistently repeat at different points in time. The non-marked segments of the data correspond to other states.

Dynamic network analyses need to be handled differently for fMRI, EEG, and MEG. fMRI studies have a much slower time course due to the hemodynamic response in the brain; thus, access to coherent oscillatory brain synchronization originating from underlying neuronal activity at various frequency bands (Laufs et al., 2003; Buzsaki and Draguhn, 2004; Mantini et al., 2007) is limited. In this case, dynamic network analyses may not accurately depict how neural states fluctuate in real time, as fMRI cannot index these smaller timescales. On the other hand, methods such as EEG and MEG, which have a much higher temporal resolution than fMRI, can be used to estimate brain dynamics by incorporating not only cross-region temporal synchronization but also cross-region phase synchronization (Chang and Glover, 2010; Yaesoubi et al., 2015; Demirtaş et al., 2016). Cross-region temporal and phase synchronization are estimated by time-frequency analysis using short-time Fourier transform coherence (STFT; Liu et al., 2017) or wavelet transform coherence (Chiu et al., 2011).

Brain States

Once the dynamic functional connectivity is established, brain dynamics can be categorized into several brain states that recur over time. Clustering algorithms, such as the k-means clustering introduced by Allen et al. (2012, 2014), have been the most widely used methods for obtaining brain states (Damaraju et al., 2014; Hutchison et al., 2014; Rashid et al., 2014; Barttfeld et al., 2015; Gonzalez-Castillo et al., 2015; Hudetz et al., 2015; Marusak et al., 2016; Shakil et al., 2016; Su et al., 2016; Liu et al., 2021a). A limitation of clustering algorithms, however, is that they summarize brain patterns based only on the spatial distribution of brain connectivity. Alternatively, Hidden Markov Models (HMMs) can provide robust modeling of functional network structure that changes on rapid cognitive timescales. An HMM clusters brain states while simultaneously incorporating how different brain states link to one another in time, i.e., the most probable sequence of brain states (Rabiner et al., 1989). HMMs have been used in multiple studies across a range of data modalities, including fMRI (Baldassano et al., 2017), EEG (Borst and Anderson, 2015), and MEG (Vidaurre et al., 2016, 2018). An illustration is provided in Figure 4.
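
A minimal sketch of the clustering route, assuming a pre-computed dynamic functional connectome `dfc`: each windowed matrix is vectorized (its unique off-diagonal edges) and k-means from scikit-learn assigns every window to one of k recurring states. The number of states and the synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# dfc: dynamic functional connectome of shape (n_windows, n_regions, n_regions)
dfc = np.random.rand(300, 68, 68)
n_regions = dfc.shape[1]

# Vectorize the unique (upper-triangular) edges of each windowed matrix
iu = np.triu_indices(n_regions, k=1)
edge_ts = dfc[:, iu[0], iu[1]]                 # (n_windows, n_edges)

# Cluster windows into k recurring brain states (k chosen a priori here)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(edge_ts)
state_sequence = kmeans.labels_                # one state label per window
state_centroids = kmeans.cluster_centers_      # (5, n_edges) state patterns
```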

The functional meaning of each brain state must be carefully interpreted to provide a more comprehensive explanation of a cognitive process based on sequences of brain states. In brain region-based analyses, a brain state is defined by the most significantly powered region or component, i.e., a single brain region or a set of co-activated regions, that explains the largest variance across all regions (Anderson et al., 2014). For example, in an executive function study (Liu et al., 2017), a brain state characterized by dominant power in the fronto-polar cortex was defined as the state responsible for solving problems, because the fronto-polar region is thought to be highly correlated with cognitive processes such as reasoning and working memory (Klingberg et al., 1997; Salazar et al., 2012; Darki and Klingberg, 2015). It was also found that the longer participants spent in this state, the more accurately they solved the problems. On the other hand, when brain states are constructed from whole-brain networks, many functional connections are involved, and it is difficult to say which connections are dominant. In this case, brain states may be defined using the topological organization of the network across brain regions (Chen et al., 2016). For example, Vidaurre et al. (2018) define brain states by finding tightly connected functional modules, while Shine et al. (2016) define brain states according to the integration/segregation of the whole-brain network.

Brain State Features

The following five metrics are commonly assessed in brain state analyses: (1) frequency or proportional (fractional) occupancy, measured as the proportion of all windows classified as instances of a particular state and computed separately for each state; (2) mean dwell time or mean lifetime, measured as the average number of consecutive windows classified as instances of the same state; (3) inter-transition interval, measured as the number of consecutive windows that elapse before the system transitions back to the same state; (4) the number of transitions or the number of states, measured as the number of state transitions across certain conditions/individuals, which may index the stability of whole-brain dynamics (more states/transitions indicating less stable dynamics and fewer indicating more stable dynamics) (Liu et al., 2017); and (5) state transition probability, measured as the likelihood that the brain state occupied at time t was also the state occupied at the previous time t−1; this can be used to determine state transition paths. For example, Taghia et al. (2018) showed that the brain state dominating the high-load working-memory condition does not shift directly to the state dominating the fixation condition without first passing through a state associated with an intermediate cognitive demand.
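
These metrics are straightforward to compute once every window has a state label. The sketch below (plain NumPy, with a hypothetical state sequence) derives proportional occupancy, mean dwell time, the number of transitions, and the state transition probability matrix.

```python
import numpy as np

def state_metrics(states, n_states):
    """Summary metrics from a sequence of per-window state labels."""
    states = np.asarray(states)
    occupancy = np.bincount(states, minlength=n_states) / len(states)

    # Mean dwell time: average run length of consecutive identical labels
    change = np.flatnonzero(np.diff(states)) + 1
    runs = np.split(states, change)
    dwell = np.zeros(n_states)
    for s in range(n_states):
        lengths = [len(r) for r in runs if r[0] == s]
        dwell[s] = np.mean(lengths) if lengths else 0.0

    n_transitions = len(change)

    # Transition probabilities P[i, j] = P(state j at t | state i at t-1)
    trans = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        trans[a, b] += 1
    trans /= np.maximum(trans.sum(axis=1, keepdims=True), 1)
    return occupancy, dwell, n_transitions, trans

occ, dwell, n_tr, P = state_metrics([0, 0, 1, 1, 1, 0, 2, 2], n_states=3)
```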

Pros and Cons of Network Dynamics

Network dynamics describe neural data in a way no previous analysis does. Neural dynamics capture how activation fluctuates over time on the order of milliseconds, providing insight into how brain networks reorganize temporally in relation to cognitive processes and behaviors. Unfortunately, because of the complexity of network dynamics, the analyses can take a long time to compute and their results may be ambiguous. For example, Hidden Markov models must be trained on a set of observation sequences and generally require a large amount of data; training involves repeated iterations of algorithms such as the Viterbi algorithm, which can be quite slow. Moreover, the brain states that result from Hidden Markov models are usually defined from the connectivity between all pairs of brain regions available for analysis. This often yields brain states that are difficult to define: the more connections within a brain state, the less clear its functional definition becomes, leading to ambiguous results. Computationally, the large number of parameters can result in overfitting, and consequently algorithms are often unable to segment the time series data effectively (Vidaurre et al., 2018). To circumvent this problem, a commonly used approach for reducing features when modeling brain states is principal component analysis (PCA; Anderson et al., 2014; Becker et al., 2014; Vidaurre et al., 2018). Another approach, proposed by Vidaurre et al. (2017), is to apply Hidden Markov models to raw region-level signals instead of connectivity-level signals; after the brain states are characterized, networks are estimated by pooling all data corresponding to a specific brain state. Finally, as another alternative, instead of using all connectivity, some sub-networks of interest may be pre-defined, and a single measure representing the graph-theoretical properties of each sub-network can be estimated (Liu et al., 2021a). In this way, brain states can be defined based on the activity of a small number of sub-networks.
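
As an illustration of the PCA route, the sketch below reduces the edge time series before fitting an HMM. It assumes the third-party hmmlearn package and uses synthetic data, an arbitrary number of components, and an arbitrary number of states, so it is a sketch of the general idea rather than the pipeline used in the cited studies.

```python
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM   # assumed third-party dependency

# edge_ts: (n_windows, n_edges) edge time series from the dynamic connectome
edge_ts = np.random.randn(300, 2278)

# Reduce the edge dimension with PCA before state modeling to limit overfitting
reduced = PCA(n_components=20).fit_transform(edge_ts)

# Fit an HMM on the reduced signals and decode the most likely state sequence
hmm = GaussianHMM(n_components=5, covariance_type="full", n_iter=100, random_state=0)
hmm.fit(reduced)
state_sequence = hmm.predict(reduced)
```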

Real Data Example

To construct dynamic functional connectivity, time series were extracted for all 68 sources in each trial using MNE (Gramfort et al., 2013, 2014). Each memory retrieval trial was defined as the time between when participants made their answer selections and the presentation of the performance feedback stimuli on the computer screen. Unlike the stationary functional connectivity calculated with phase locking, time-variant connectivity between all pairs of brain regions was generated using spectral coherence analysis on every single EEG trial collected during the memory task. In other words, for each memory retrieval trial we obtained a symmetric 68 × 68 connectivity matrix for each time window. Each adjacency matrix was further pruned by applying a statistical threshold (Liu et al., 2011, 2013) so that the retained coefficients rij constituted no more than 30% of the total connections. We further optimized our model by examining activity over several smaller functional subnetworks (subsets of the entire matrix) relevant to understanding memory retrieval performance.
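
For readers who want to reproduce the pruning step in spirit, the sketch below applies a simple proportional threshold that keeps the strongest 30% of connections; this is a stand-in for the statistical threshold of Liu et al. (2011, 2013), which is not reproduced here, and the random matrix is purely illustrative.

```python
import numpy as np

def threshold_top_fraction(conn, fraction=0.30):
    """Retain only the strongest `fraction` of connections; zero out the rest."""
    pruned = conn.copy()
    np.fill_diagonal(pruned, 0)
    iu = np.triu_indices_from(pruned, k=1)
    cutoff = np.quantile(pruned[iu], 1.0 - fraction)   # e.g., the 70th percentile
    pruned[pruned < cutoff] = 0
    return pruned

conn = np.random.rand(68, 68)
conn = (conn + conn.T) / 2                   # symmetric coherence-like matrix
adjacency = threshold_top_fraction(conn, fraction=0.30)
```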

As discussed earlier, memory retrieval for correct fonts and wrong fonts may rely on different memory mechanisms and different brain networks. To simplify the analysis and avoid overfitting, three memory-related networks were extracted: the semantic memory network, the emotional memory network, and the episodic memory network. Network strength was measured for every functional subnetwork in every time window in the theta band. For every memory retrieval trial, we therefore obtained three time courses representing the activity of the three functional subnetworks, in terms of how closely the brain regions within each subnetwork communicate with one another compared with other nodes in the whole brain (Forbes et al., 2018); we refer to this measure as network strength.
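
One plausible way to operationalize such a network strength measure is shown below: the mean within-subnetwork connectivity relative to the subnetwork's mean connectivity with the rest of the brain, evaluated in every window. The node indices and data are hypothetical, and the exact definition used in Forbes et al. (2018) may differ.

```python
import numpy as np

def network_strength(conn, nodes):
    """Mean within-subnetwork connectivity divided by the subnetwork's
    mean connectivity with the rest of the brain (one value per window)."""
    nodes = np.asarray(nodes)
    others = np.setdiff1d(np.arange(conn.shape[0]), nodes)
    within = conn[np.ix_(nodes, nodes)]
    iu = np.triu_indices(len(nodes), k=1)
    within_mean = within[iu].mean()
    between_mean = conn[np.ix_(nodes, others)].mean()
    return within_mean / between_mean

# Time course of strength for a hypothetical "emotional memory" subnetwork
dfc = np.random.rand(91, 68, 68)
emo_nodes = [3, 10, 22, 40]          # hypothetical node indices
strength = np.array([network_strength(w, emo_nodes) for w in dfc])
```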

To resolve dynamic network activity, we applied an HMM to the network strength time courses of the three functional networks. Results yielded 17 distinct brain states; fractional occupancies (reflecting the proportion of time spent in each state) were measured for all states in every single memory retrieval trial. The fractional occupancies were then averaged across the correct and wrong performance feedback stimuli memory trials separately for each participant, and the averaged fractional occupancies were correlated with the memory accuracy scores (d-prime) for both correct and wrong performance feedback stimuli. Results indicated that only one of the brain states showed a significant correlation with the wrong performance feedback stimuli d-prime score (β = 2.79, F[1,70] = 5.23, R2 = 0.073, corrected p = 0.025). This state showed the highest activity, or network strength, in the emotional memory network, and lower activity in the other two networks. These results suggest that the more time participants spent in this emotional-memory-dominant state, the higher the probability that they accurately remembered the wrong performance feedback stimuli from the problem-solving task, providing further evidence that the memory retrieval processes behind wrong performance feedback stimuli may be related to emotional processing.

Deep Learning in Brain Networks

Deep learning, or deep neural networks (DNNs), is a branch of the broader family of machine learning methods built on traditional, shallower artificial neural networks (ANNs) (Lecun et al., 2015; Schmidhuber, 2015). DNNs significantly increase the sensitivity of conventional machine learning methods by stacking more layers between input and output than ANNs (hence “deep”); the successive layers extract different levels of representation/abstraction from the sensory input (Hinton, 2010). Recently, deep learning has proven revolutionary thanks to its success in clinical diagnosis (Khaligh-Razavi and Kriegeskorte, 2014; Yamins et al., 2014; Eickenberg et al., 2017), image processing (Kamnitsas et al., 2017; Zhao and Mukhopadhyay, 2018; Pinto et al., 2020), and behavior prediction (Plis et al., 2014; Van Der Burgh et al., 2017; Nguyen et al., 2018).

Brain networks, which have been widely used to explore human brain organization and cognition (Smith et al., 2009; Assaf et al., 2010; Power et al., 2011; Yeo et al., 2011; Bertolero et al., 2015; Dubois and Adolphs, 2016; Reinen et al., 2018), unfortunately are not represented on a Euclidean grid. Instead, they are represented as graphs that encode the reciprocal relationships and similarities between pairs of brain regions. The complexity of graph data has brought significant difficulties to the application of existing DNN algorithms (Shuman et al., 2013). Recently, many studies that extend deep neural network approaches to graph data have emerged (Wu et al., 2019). The most important line of work is built on spectral graph theory (Bruna et al., 2013; Defferrard et al., 2016), which implements convolution through a Fourier transform defined on graphs. Another broad category of work relies on the graphs’ spatial information; the main idea is to generate a node’s features by aggregating its neighbors’ features (e.g., Niepert et al., 2016; Veličković et al., 2017; Chen et al., 2018; see Figure 5 for an illustration).

FIGURE 5

Figure 5. Architecture of a CNN on graphs and the four ingredients of a (graph) convolutional layer.

Some of the models mentioned above have been applied to brain studies (Cucurull et al., 2018; Duffy et al., 2019; McDaniel and Quinn, 2019). These works all define graphs as an architecture involving both edges and features over nodes and, importantly, assume that the connectivity between all pairs of nodes is identical across samples (a single set of edge weights fixed for all samples). An illustration of such a graph architecture can be seen in Figure 7; a typical example is the registered brain mesh on cortical surfaces (Robbins et al., 2004; Liu et al., 2020). In contrast, the representation of whole-brain neural networks is defined by the edges between nodes, not by features over nodes. In other words, what discriminates brain networks across samples lies in the connectivity strength and edge distribution between brain regions, not in the feature distribution over brain regions. Thus, the techniques adopted by most graph-based deep neural networks do not apply directly to actual brain network data (Kawahara et al., 2017), and to date only a small number of studies have attempted to apply deep neural networks to brain connectivity data.

FIGURE 6

Figure 6. Schematic representation of the BrainNetCNN architecture. Each block represents the input and/or output of the numbered filter layers. The 3rd dimension of each block (i.e., along vector m) represents the number of feature maps, M, at that stage. The brain network adjacency matrix (leftmost block) is first convolved with one or more (two in this case) E2E filters which weight edges of adjacent brain regions. The response is convolved with an E2N filter which assigns each brain region a weighted sum of its edges. The N2G assigns a single response based on all the weighted nodes. Finally, fully connected (FC) layers reduce the number of features down to two output score predictions.

FIGURE 7

Figure 7. Estimation of single subject connectivity matrix and labeled graph representation. Pearson’s correlation is used to obtain a functional connectivity matrix from the raw fMRI time-series. After specifying the graph structure for all subjects, based on spatial or functional information, each row/column of the connectivity matrix serves as a signal for the corresponding node (node features). The common connectivity matrix used for all subjects can be established using the anatomical information, e.g., the spatial distance between brain regions, or the physiological information, e.g., the mean functional connectivity matrix among the training samples.

Fully Connected Neural Network

Fully connected neural network (FCNN) analyses input all connectivity in a given neural network as a vector (i.e., the lower-triangular entries of the connectivity matrices) into a fully connected deep neural network, and the model outputs are the hypothesized behavioral and demographic variables. FCNN models treat all neural connections equally; that is, they do not take information from neighboring connections into consideration during training. Because neighboring connections are not considered, the model is unbiased with respect to network topology.
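
A minimal PyTorch sketch of this idea is shown below, assuming 68 regions and a single behavioral score as output; the layer sizes and the synthetic connectivity matrix are illustrative choices rather than any published architecture.

```python
import torch
import torch.nn as nn

n_regions = 68
n_edges = n_regions * (n_regions - 1) // 2   # lower-triangular entries

# Minimal fully connected network: vectorized connectivity in, one score out
fcnn = nn.Sequential(
    nn.Linear(n_edges, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 1),                        # predicted behavioral score
)

# One synthetic subject: lower triangle of a 68 x 68 connectivity matrix
conn = torch.rand(n_regions, n_regions)
tril = torch.tril_indices(n_regions, n_regions, offset=-1)
x = fcnn(conn[tril[0], tril[1]].unsqueeze(0))  # input shape (1, n_edges)
print(x.shape)                                 # torch.Size([1, 1])
```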

BrainNetCNN

Alternatively, BrainNetCNN analyses take connectivity matrices as input (Figure 6). Like FCNN analyses, BrainNetCNN analyses also output behavioral and demographic variables. BrainNetCNN consists of four types of layers: Edge-to-Edge (E2E), Edge-to-Node (E2N), Node-to-Graph (N2G), and a final fully connected (linear) layer. The first three are specially designed layers specific to BrainNetCNN; the final fully connected layer is the same as that used in FCNNs.

The Edge-to-Edge (E2E) layer is a convolutional layer using cross-shaped filters (Figure 6) and can be considered a fundamental processing level for the sensory input. The E2N and N2G layers can be considered the higher, or more abstract, levels of input processing. Finally, the N2G layer outputs are linearly summed by the final fully connected layer to produce the final set of prediction values.
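
To give a feel for how the node-collapsing layers can be realized with ordinary convolutions, here is a hedged PyTorch sketch of E2N and N2G layers (the E2E layer, which combines a row filter and a column filter for every edge, is omitted for brevity). It is a simplified illustration in the spirit of Kawahara et al. (2017), not their reference implementation, and all channel counts are arbitrary.

```python
import torch
import torch.nn as nn

class EdgeToNode(nn.Module):
    """E2N: assign each node a weighted sum of its edges, via a 1 x N filter."""
    def __init__(self, in_ch, out_ch, n_nodes):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=(1, n_nodes))
    def forward(self, x):              # x: (batch, in_ch, N, N)
        return self.conv(x)            # -> (batch, out_ch, N, 1)

class NodeToGraph(nn.Module):
    """N2G: collapse the node dimension to a single graph-level response."""
    def __init__(self, in_ch, out_ch, n_nodes):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=(n_nodes, 1))
    def forward(self, x):              # x: (batch, in_ch, N, 1)
        return self.conv(x)            # -> (batch, out_ch, 1, 1)

n_nodes = 68
net = nn.Sequential(
    EdgeToNode(1, 32, n_nodes), nn.ReLU(),
    NodeToGraph(32, 64, n_nodes), nn.ReLU(),
    nn.Flatten(), nn.Linear(64, 1),    # final fully connected prediction
)
adj = torch.rand(4, 1, n_nodes, n_nodes)   # batch of 4 connectivity matrices
print(net(adj).shape)                      # torch.Size([4, 1])
```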

Graph Convolutional Neural Network

Another technique that allows researchers to use whole-brain networks in deep learning is to embed the network data into a framework such as a spectral graph convolutional network (Figure 7). The critical step of this approach is to construct a standard graph structure representative of all subjects, and then to assign feature values to each node of the brain network to represent network differences across samples. Two approaches are used to construct the standard graph structure (Ktena et al., 2018; McDaniel and Quinn, 2019). The first is based on anatomical information, i.e., the connectivity of the common graph represents the spatial distances between connected brain regions. In the second, the common graph’s connectivity is estimated as the mean functional connectivity matrix across the training samples. This kind of structure is more meaningful from a neuroscientific point of view because it reflects the average functional connection strength between pairs of brain regions within a sample. Feature values for each node are usually assigned from nodal properties defined in graph theory, such as nodal degree or nodal clustering coefficient.
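
A small sketch of the second approach, using NumPy and NetworkX on synthetic data: the common adjacency matrix is the mean connectivity across training subjects, and each subject contributes node features (here weighted degree and weighted clustering coefficient); the feature choice and the data are assumptions for illustration only.

```python
import numpy as np
import networkx as nx

# conn_all: per-subject functional connectivity matrices (n_subjects, N, N)
conn_all = np.abs(np.random.rand(20, 68, 68))
conn_all = (conn_all + conn_all.transpose(0, 2, 1)) / 2
for c in conn_all:
    np.fill_diagonal(c, 0)                      # no self-connections

# Common graph: mean connectivity across training subjects (second approach)
common_W = conn_all.mean(axis=0)

# Per-subject node features: weighted degree and weighted clustering coefficient
def node_features(conn):
    g = nx.from_numpy_array(conn)
    degree = np.array([d for _, d in g.degree(weight="weight")])
    clustering = np.array(list(nx.clustering(g, weight="weight").values()))
    return np.stack([degree, clustering], axis=1)   # (N, 2)

X = np.stack([node_features(c) for c in conn_all])  # (n_subjects, N, 2)
```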

3D Connectome Convolutional Neural Network

Another deep learning technique is the 3D connectome convolutional neural network (CNN). Khosla et al. (2019) preprocessed resting-state fMRI data to extract the 3D spatial structure instead of relying only on each region’s averaged signal. In this study, voxel-level maps were created by computing each voxel’s connectivity with the averaged time series of every region of interest in the selected atlas. This technique allows a specific brain region’s connectivity strength to be mapped onto the whole 3D image, and the number of channels is determined by the number of regions defined in the atlas used to segment the brain. The prediction problem is then solved with a classic 3D CNN.
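
The sketch below illustrates how such voxel-to-ROI connectivity channels might be assembled with NumPy before being mapped back into 3D volumes for a standard 3D CNN; the voxel count, ROI count, and random data are placeholders, and this is only one plausible reading of the preprocessing described by Khosla et al. (2019).

```python
import numpy as np

# ts_vox : (n_timepoints, n_voxels) voxel time series inside a brain mask
# labels : (n_voxels,) ROI label per voxel, 1..n_rois from the chosen atlas
n_time, n_voxels, n_rois = 200, 5000, 68
ts_vox = np.random.randn(n_time, n_voxels)
labels = np.random.randint(1, n_rois + 1, size=n_voxels)

# Mean time series per ROI
roi_ts = np.stack(
    [ts_vox[:, labels == r].mean(axis=1) for r in range(1, n_rois + 1)], axis=1)

# Correlation of every voxel with every ROI mean: one connectivity channel per ROI
ts_z = (ts_vox - ts_vox.mean(0)) / ts_vox.std(0)
roi_z = (roi_ts - roi_ts.mean(0)) / roi_ts.std(0)
conn_channels = ts_z.T @ roi_z / n_time          # (n_voxels, n_rois)
# Each column can then be written back into the 3D brain volume, and the stack
# of n_rois volumes fed to a standard 3D CNN.
```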

Although scientists and engineers have attempted different ways of integrating whole-brain networks into the framework of deep neural networks, the predictive power of existing models has been questioned by some researchers (He et al., 2018, 2020; Khosla et al., 2019, 2021; Raviprakash et al., 2019). He et al. (2018, 2020) compared predictions of human cognitive performance from brain DTI structural connectivity data between a deep graph convolutional network model and a simple kernel regression model. Results indicated that the graph-based deep learning models did as well as or worse than the kernel regression analysis, which is a much simpler prediction model. This suggests that, although deep learning has been shown to be a promising tool for neuroimaging data analysis, much more work is needed to validate these models in network analysis, and new designs are expected in future studies.

Pros and Cons of Deep Learning

Deep learning is the most technologically advanced method in neural data analysis. Since the first techniques were published, deep learning has drawn intense attention across scientific fields and has been very successful in some medical domains, such as medical image processing and clinical diagnosis. The application of deep learning approaches in cognitive neuroscience, however, is relatively recent.

A typical deep learning model contains millions of parameters and therefore requires a large amount of training data to achieve the goal researchers are interested in. This creates challenges for individual studies, which usually examine no more than a few hundred subjects, a sample that may be too small to obtain a well-performing model. “Big data” also creates challenges for data sharing and transparency.

Conventional deep learning approaches depend on the geometric regularity (e.g., image voxels) of the variables within the receptive field of the deep learning filters. Hence graph-like data structures, such as brain networks, cannot be embedded directly into the deep learning framework, and some of the basic operations in deep learning, such as convolution and pooling, are very difficult to realize on networks. The most widely used deep learning approaches on graphs rely on a spectral decomposition to accomplish the graph convolution. This decomposition is difficult to interpret and does not provide straightforward physical meaning. Furthermore, this approach does not focus on the topological representation (edges) of the individual network; rather, it tends to map feature differences over individual nodes in accordance with the connections within the network. Feature representations on each node are not typically available for a common brain network, and thus this approach cannot be transferred directly to brain network studies. Several attempts have been made to adapt graph deep learning to brain networks. However, these approaches appear to be either incapable of converting the connectivity and network values into truly meaningful information, or unable to provide evidence that they describe the brain network in a way that relates to specific cognitive states. In summary, although deep learning has great potential, we are still a long way from using it to investigate whole-brain networks appropriately.

Real Data Example

In the present memory study, a GCNN approach was applied to predict memory scores in the correct and wrong performance feedback stimuli trials, respectively. To construct a fixed set of edge weights across all participants in the two memory retrieval conditions, two 68 × 68 standard whole-brain networks (one per condition) were generated by averaging the functional brain networks across all subjects and all frequency bands. Three graph-theoretical feature values (nodal degree, nodal clustering coefficient, and local efficiency) across four frequency bands (3 × 4 = 12 feature values in total) were assigned to each brain region for each subject.

Graph convolutional neural networks perform spectral convolutions on graphs, defined as the multiplication of a signal with a filter in the Fourier domain. The signal h on the graph nodes is filtered by g as:

g ∗ h = U((UTg) ⊙ (UTh))

where g is a non-parametric filter defined by an N-dimensional vector of graph Fourier coefficients, with N the number of nodes in the graph (68 in this case). Using a non-parametric filter enables the receptive field of the filter to cover the entire graph at each layer. U is the Fourier basis of the graph Laplacian L, given by the eigendecomposition L = UΛUT, where Λ is the diagonal matrix of ordered, real, non-negative eigenvalues (the graph Fourier frequencies); ∗ is the convolution operator and ⊙ denotes element-wise multiplication. The graph Laplacian is defined as L := D − W, where the degree matrix D is a diagonal matrix whose ith diagonal element di equals the sum of the weights of all edges connected to vertex i, i.e., Dii = ∑jWij, and W is a weighted adjacency matrix encoding the connections between brain regions. After normalization, the graph Laplacian is L = In − D−1/2WD−1/2, where In is the identity matrix.
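
The NumPy sketch below spells out these definitions: the normalized Laplacian of an assumed common adjacency matrix, its eigendecomposition into the graph Fourier basis, and one spectral filtering step with filter coefficients defined directly in the Fourier domain. The random data are illustrative only.

```python
import numpy as np

# W: common weighted adjacency matrix (68 x 68), h: signal on the nodes
N = 68
W = np.abs(np.random.rand(N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
h = np.random.randn(N)

# Normalized graph Laplacian L = I - D^(-1/2) W D^(-1/2)
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(N) - D_inv_sqrt @ W @ D_inv_sqrt

# Graph Fourier basis from the eigendecomposition L = U diag(lam) U^T
lam, U = np.linalg.eigh(L)

# Non-parametric spectral filter: one free coefficient per eigenvalue
g_hat = np.random.randn(N)            # filter expressed in the Fourier domain
filtered = U @ (g_hat * (U.T @ h))    # g * h = U((U^T g) ⊙ (U^T h))
```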

The inputs were vector-valued signals (the twelve graph-theoretical features) on the graph nodes, together with the adjacency matrix given by the common graph architecture; the output is a single value representing the memory score for each font type. The GCN used a GC8-P4-GC16-P4-FC512 architecture, where GCn is a non-parametric graph convolutional layer with n channels, P4 is a pooling layer, and FC512 is a fully connected layer. Each layer is followed by a ReLU non-linearity. Mean squared error (MSE) was used as the loss function with an Adam optimizer, a learning rate of 10−6, an L2 regularization parameter of 10−8, and a batch size of 2.

Results from five-fold cross-validation indicated that prediction was poor for both the correct (r = 0.19, p > 0.05) and wrong (r = 0.21, p > 0.05) performance feedback stimuli memory scores, consistent with past literature (Kawahara et al., 2017; He et al., 2020). These results suggest that even when whole-brain networks can be formatted for graph-based deep learning models, we are still far from obtaining interpretable predictions from whole-brain networks.

Conclusion

Attributing functional connectivity and brain network activation to mental representations is challenging both theoretically and statistically. Currently, cognitive neuroscience research focuses on investigating the activation of connectivity or brain networks between brain regions that are pre-selected via hypothesis-driven approaches. In these cases, the associated ROIs are usually determined based on evidence from a narrow selection of studies in the past literature, or from the researcher’s own limited knowledge (see the meta-analysis section in the Supplementary Materials). Recent technical and methodological advances have initiated new strategies to navigate the neural mechanisms underlying cognitive function using multi-level approaches that span spatial and temporal scales. These approaches decrease the possibility of establishing a “biased” hypothesis due to a restricted or incomplete understanding of specific cognitive functions, and the possibility of generating false-positive results due to noisy brain network representations. Multi-level methods have also provided unique neural insight into individual differences, which can be used in both clinical and social psychological applications. From a clinical perspective, individualized treatments have become popular; if individual differences can be predicted with these neural methods, primarily data-driven techniques such as deep learning, personalized treatments can be implemented on a more regular basis.

The present review describes how novel neuroscience methodologies can begin to relate whole-brain networks to cognition and behavior, both in the aggregate and over time. Graph theory and connectome-based predictive modeling provide insight into how neural architecture changes in relation to various behaviors and cognitions. Meta-analytic techniques synthesize research, bringing the scientific community closer to identifying the precise function of regions and networks. MVPA uses an entirely data-driven approach to reveal nuances in neural activity, in relation to behavior, that typical neural analyses cannot detect. Furthermore, novel techniques like network dynamic modeling and deep learning allow researchers to model neural activity more accurately. Considering that the brain acts as a parallel processor, researchers need to consider neural activation both over time and in layers; network dynamic modeling and deep learning techniques allow researchers to address these questions.

Overall, current technology helps researchers make sense of data by employing multi-level analyses that are more accurate in modeling the whole brain as it genuinely functions. By modeling the brain in this manner, not only are researchers advancing neuroscience research by creating more accurate neural models in relation to behavior and cognition, but they are also coming closer to creating more optimal treatment options for both clinical and social psychologists.

Author Contributions

ML and RA planned and wrote the manuscript and executed all analyses. CF provided data and edits. JS and RB provided comments and suggestions. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnhum.2022.875201/full#supplementary-material

References

Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., et al. (2014). Machine learning for neuroimaging with scikit-learn. Front. Neuroinform. 8:14. doi: 10.3389/fninf.2014.00014

Aguirre, G. K. (2007). Continuous carry-over designs for fMRI. Neuroimage 35, 1480–1494. doi: 10.1016/j.neuroimage.2007.02.005

Allen, E. A., Damaraju, E., Plis, S. M., Erhardt, E. B., Eichele, T., and Calhoun, V. D. (2014). Tracking whole-brain connectivity dynamics in the resting state. Cerebr. Cortex 24, 663–676.

Allen, E. A., Erhardt, E. B., Wei, Y. H., Eichele, T., and Calhoun, V. D. (2012). Capturing inter-subject variability with group independent component analysis of fMRI data: a simulation study. Neuroimage 59, 4141–4159. doi: 10.1016/j.neuroimage.2011.10.010

Anderson, J. R., Lee, H. S., and Fincham, J. M. (2014). Discovering the structure of mathematical problem solving. Neuroimage 97, 163–177. doi: 10.1016/j.neuroimage.2014.04.031

Anzellotti, S., and Coutanche, M. N. (2018). Beyond functional connectivity: investigating networks of multivariate representations. Trends Cogn. Sci. 22, 258–269. doi: 10.1016/j.tics.2017.12.002

Assaf, M., Jagannathan, K., Calhoun, V. D., Miller, L., Stevens, M. C., Sahl, R., et al. (2010). Abnormal functional connectivity of default mode sub-networks in autism spectrum disorder patients. Neuroimage 53, 247–256. doi: 10.1016/j.neuroimage.2010.05.067

Attal, Y., and Schwartz, D. (2013). Assessment of subcortical source localization using deep brain activity imaging model with minimum norm operators: a MEG study. PLoS One 8:e59856. doi: 10.1371/journal.pone.0059856

Baldassano, C., Chen, J., Zadbood, A., Pillow, J. W., Hasson, U., and Norman, K. A. (2017). Discovering event structure in continuous narrative perception and memory. Neuron 95, 709–721. doi: 10.1016/j.neuron.2017.06.041

Barch, D. M., Burgess, G. C., Harms, M. P., Petersen, S. E., Schlaggar, B. L., Corbetta, M., et al. (2013). Function in the human connectome: task-fMRI and individual differences in behavior. Neuroimage 80, 169–189. doi: 10.1016/j.neuroimage.2013.05.033

Barttfeld, P., Uhrig, L., Sitt, J. D., Sigman, M., Jarraya, B., and Dehaene, S. (2015). Signature of consciousness in the dynamics of resting-state brain activity (vol 112, pg 887, 2015). Proc. Natl. Acad. Sci. U.S.A. 112, E5219–E5220. doi: 10.1073/pnas.1418031112

Bassett, D. S., Bullmore, E. T., Meyer-Lindenberg, A., Apud, J. A., Weinberger, D. R., and Coppola, R. (2009). Cognitive fitness of cost-efficient brain functional networks. Proc. Natl. Acad. Sci. U.S.A. 106, 11747–11752. doi: 10.1073/pnas.0903641106

Beaty, R. E., Chen, Q., Christensen, A. P., Kenett, Y. N., Silvia, P. J., Benedek, M., et al. (2020). Default network contributions to episodic and semantic processing during divergent creative thinking: a representational similarity analysis. NeuroImage 209:116499. doi: 10.1016/j.neuroimage.2019.116499

Beaty, R. E., Kenett, Y. N., Christensen, A. P., Rosenberg, M. D., Benedek, M., Chen, Q. L., et al. (2018). Robust prediction of individual creative ability from brain functional connectivity. Proc. Natl. Acad. Sci. U.S.A. 115, 1087–1092. doi: 10.1073/pnas.1713532115

Becker, M. P., Nitsch, A. M., Miltner, W. H., and Straube, T. (2014). A single-trial estimation of the feedback-related negativity and its relation to BOLD responses in a time-estimation task. J. Neurosci. 34, 3005–3012. doi: 10.1523/JNEUROSCI.3684-13.2014

Bertolero, M. A., Yeo, B. T. T., and D’esposito, M. (2015). The modular and integrative functional architecture of the human brain. Proc. Natl. Acad. Sci. U.S.A. 112, E6798–E6807. doi: 10.1073/pnas.1510619112

Borst, J. P., and Anderson, J. R. (2015). The discovery of processing stages: analyzing EEG data with hidden semi-Markov models. Neuroimage 108, 60–73. doi: 10.1016/j.neuroimage.2014.12.029

Boutet, A., Madhavan, R., Elias, G. J., Joel, S. E., Gramer, R., Ranjan, M., et al. (2021). Predicting optimal deep brain stimulation parameters for Parkinson’s disease using functional MRI and machine learning. Nat. Commun. 12, 1–13. doi: 10.1038/s41467-021-23311-9

Bruna, J., Zaremba, W., Szlam, A., and Lecun, Y. (2013). Spectral networks and locally connected networks on graphs. arXiv [Preprint]. Available online at: https://arxiv.org/abs/1312.6203. (accessed December 2020).

Bullmore, E., and Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 10:186. doi: 10.1038/nrn2575

Bullmore, E. T., and Bassett, D. S. (2011). Brain graphs: graphical models of the human brain connectome. Annu. Rev. Clin. Psychol. 7, 113–140. doi: 10.1146/annurev-clinpsy-040510-143934

Buzsaki, G., and Draguhn, A. (2004). Neuronal oscillations in cortical networks. Science 304, 1926–1929.

Calhoun, V. D., Miller, R., Pearlson, G., and Adali, T. (2014). The chronnectome: time-varying connectivity networks as the next frontier in fMRI data discovery. Neuron 84, 262–274. doi: 10.1016/j.neuron.2014.10.015

Cauda, F., Costa, T., Palermo, S., D’agata, F., Diano, M., Bianco, F., et al. (2014). Concordance of white matter and gray matter abnormalities in autism spectrum disorders: a voxel-based meta-analysis study. Hum. Brain Mapping 35, 2073–2098. doi: 10.1002/hbm.22313

Chang, C., and Glover, G. H. (2010). Time-frequency dynamics of resting-state brain connectivity measured with fMRI. Neuroimage 50, 81–98. doi: 10.1016/j.neuroimage.2009.12.011

Chen, H., Duan, X. J., Liu, F., Lu, F. M., Ma, X. J., Zhang, Y. X., et al. (2016). Multivariate classification of autism spectrum disorder using frequency-specific resting-state functional connectivity-A multi-center study. Prog. Neuro Psychopharmacol. Biol. Psychiatry 64, 1–9. doi: 10.1016/j.pnpbp.2015.06.014

Chen, J., Ma, T., and Xiao, C. (2018). Fastgcn: fast learning with graph convolutional networks via importance sampling. arXiv [Preprint]. Available online at: https://arxiv.org/abs/1801.10247. (accessed December 2020).

Chen, P.-H. A., Jolly, E., Cheong, J. H., and Chang, L. J. (2020). Intersubject representational similarity analysis reveals individual variations in affective experience when watching erotic movies. NeuroImage 216:116851. doi: 10.1016/j.neuroimage.2020.116851

Chen, S., Langley, J., Chen, X., and Hu, X. (2016). Spatiotemporal modeling of brain dynamics using resting-state functional magnetic resonance imaging with Gaussian hidden Markov model. Brain Connect. 6, 326–334. doi: 10.1089/brain.2015.0398

Cheng, Y., Shen, W., Xu, J., Amey, R. C., Huang, L. X., Zhang, X. D., et al. (2021). Neuromarkers from whole-brain functional connectivity reveal the cognitive recovery scheme for overt hepatic encephalopathy after liver transplantation. eNeuro 8:ENEURO.0114-21.2021. doi: 10.1523/ENEURO.0114-21.2021

Chiu, A. W., Derchansky, M., Cotic, M., Carlen, P. L., Turner, S. O., and Bardakjian, B. L. (2011). Wavelet-based Gaussian-mixture hidden Markov model for the detection of multistage seizure dynamics: a proof-of-concept study. Biomed. Eng. Online 10, 1–25. doi: 10.1186/1475-925X-10-29

Cohen, A. D., Chen, Z., Parker Jones, O., Niu, C., and Wang, Y. (2020). Regression-based machine-learning approaches to predict task activation using resting-state fMRI. Hum. Brain Mapp. 41, 815–826. doi: 10.1002/hbm.24841

Cohen, M. B., Kelner, J., Peebles, J., Peng, R., Sidford, A., and Vladu, A. (2016). “Faster algorithms for computing the stationary distribution, simulating random walks, and more,” in Proceedings of the 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), (Piscataway, NJ: IEEE), 583–592.

Cohen, M. X. (2017). MATLAB for Brain and Cognitive Scientists. Cambridge, MA: MIT Press.

Cole, M. W., Reynolds, J. R., Power, J. D., Repovs, G., Anticevic, A., and Braver, T. S. (2013). Multi-task connectivity reveals flexible hubs for adaptive task control. Nat. Neurosci. 16, 1348–U1247. doi: 10.1038/nn.3470

Crossley, N. A., Fox, P. T., and Bullmore, E. T. (2016). Meta-connectomics: human brain network and connectivity meta-analyses. Psychol. Med. 46, 897–907. doi: 10.1017/S0033291715002895

Crossley, N. A., Mechelli, A., Vertes, P. E., Winton-Brown, T. T., Patel, A. X., Ginestet, C. E., et al. (2013). Cognitive relevance of the community structure of the human brain functional coactivation network. Proc. Natl. Acad. Sci. U.S.A. 110, 11583–11588. doi: 10.1073/pnas.1220826110

Cucurull, G., Wagstyl, K., Casanova, A., Veličković, P., Jakobsen, E., Drozdzal, M., et al. (2018). Convolutional Neural Networks for Mesh-Based Parcellation of the Cerebral Cortex. Cambridge, MA: Meta.

Dadok, V. M. (2013). Probabilistic Approaches for Tracking Physiological States in the Cortex Through Sleep And Seizures. Berkeley, CA: University of California.

Dale, A. M., Fischl, B., and Sereno, M. I. (1999). Cortical surface-based analysis. I. Segmentation and surface reconstruction. Neuroimage 9, 179–194. doi: 10.1006/nimg.1998.0395

Dale, A. M., Liu, A. K., Fischl, B. R., Buckner, R. L., Belliveau, J. W., Lewine, J. D., et al. (2000). Dynamic statistical parametric mapping: combining fMRI and MEG for high-resolution imaging of cortical activity. Neuron 26, 55–67. doi: 10.1016/s0896-6273(00)81138-1

Damaraju, E., Allen, E. A., Belger, A., Ford, J. M., Mcewen, S., Mathalon, D. H., et al. (2014). Dynamic functional connectivity analysis reveals transient states of dysconnectivity in schizophrenia. Neuroimage Clin. 5, 298–308.

Darki, F., and Klingberg, T. (2015). The role of fronto-parietal and fronto-striatal networks in the development of working memory: a longitudinal study. Cerebr. Cortex 25, 1587–1595. doi: 10.1093/cercor/bht352

De La Vega, A., Chang, L. J., Banich, M. T., Wager, T. D., and Yarkoni, T. (2016). Large-scale meta-analysis of human medial frontal cortex reveals tripartite functional organization. J. Neurosci. 36, 6553–6562.

Defferrard, M., Bresson, X., and Vandergheynst, P. (2016). Convolutional neural networks on graphs with fast localized spectral filtering. Adv. Neural Inform. Process. Syst. 29, 3844–3852.

Demirtaş, M., Tornador, C., Falcón, C., López-Solà, M., Hernández-Ribas, R., Pujol, J., et al. (2016). Dynamic functional connectivity reveals altered variability in functional connectivity among patients with major depressive disorder. Hum. Brain Mapp. 37, 2918–2930. doi: 10.1002/hbm.23215

Desikan, R. S., Ségonne, F., Fischl, B., Quinn, B. T., Dickerson, B. C., Blacker, D., et al. (2006). An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 31, 968–980. doi: 10.1016/j.neuroimage.2006.01.021

Di, X., and Biswal, B. B. (2013). Modulatory interactions of resting-state brain functional connectivity. PLoS One 8:e71163. doi: 10.1371/journal.pone.0071163

Diedrichsen, J., and Kriegeskorte, N. (2017). Representational models: a common framework for understanding encoding, pattern-component, and representational-similarity analysis. PLoS Comput. Biol. 13:e1005508. doi: 10.1371/journal.pcbi.1005508

Dimsdale-Zucker, H. R., and Ranganath, C. (2018). “Representational similarity analyses: a practical guide for functional MRI applications,” in Handbook of Behavioral Neuroscience, eds C. P. Muller and K. A. Cunningham (Amsterdam: Elsevier), 509–525.

Dosenbach, N. U., Nardos, B., Cohen, A. L., Fair, D. A., Power, J. D., Church, J. A., et al. (2010). Prediction of individual brain maturity using fMRI. Science 329, 1358–1361. doi: 10.1126/science.1194144

Dubois, J., and Adolphs, R. (2016). Building a science of individual differences from fMRI. Trends Cogn. Sci. 20, 425–443. doi: 10.1016/j.tics.2016.03.014

Duffy, B. A., Liu, M., Flynn, T., Toga, A. W., Barkovich, A. J., Xu, D., et al. (2019). “Regression activation mapping on the cortical surface using graph convolutional networks,” in Proceedings of the International Conference on Medical Imaging with Deep Learning–Extended Abstract Track, (Piscataway, NJ: IEEE). doi: 10.1016/j.compmedimag.2021.101939

Eickenberg, M., Gramfort, A., Varoquaux, G., and Thirion, B. (2017). Seeing it all: convolutional network layers map the function of the human visual system. NeuroImage 152, 184–194. doi: 10.1016/j.neuroimage.2016.10.001

Emerson, R. W., Adams, C., Nishino, T., Hazlett, H. C., Wolff, J. J., Zwaigenbaum, L., et al. (2017). Functional neuroimaging of high-risk 6-month-old infants predicts a diagnosis of autism at 24 months of age. Sci. Transl. Med. 9:eaag2882. doi: 10.1126/scitranslmed.aag2882

Finn, E. S., Shen, X., Scheinost, D., Rosenberg, M. D., Huang, J., Chun, M. M., et al. (2015). Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Nat. Neurosci. 18, 1664–1671. doi: 10.1038/nn.4135

Fischer-Baum, S., Jang, A., and Kajander, D. (2017). The cognitive neuroplasticity of reading recovery following chronic stroke: a representational similarity analysis approach. Neural Plast. 2017:2761913. doi: 10.1155/2017/2761913

Forbes, C. E., Amey, R., Magerman, A. B., Duran, K., and Liu, M. (2018). Stereotype-based stressors facilitate emotional memory neural network connectivity and encoding of negative information to degrade math self-perceptions among women. Soc. Cogn. Affect. Neurosci. 13, 719–740. doi: 10.1093/scan/nsy043

Forbes, C. E., Duran, K. A., Leitner, J. B., and Magerman, A. (2015). Stereotype threatening contexts enhance encoding of negative feedback to engender underperformance and anxiety. Soc. Cogn. 33, 605–625.

Fox, P. T., Lancaster, J. L., Laird, A. R., and Eickhoff, S. B. (2014). Meta-analysis in human neuroimaging: computational modeling of large-scale databases. Annu. Rev. Neurosci. 37, 409–434. doi: 10.1146/annurev-neuro-062012-170320

Freeman, J. B., Stolier, R. M., Brooks, J. A., and Stillerman, B. S. (2018). The neural representational geometry of social perception. Curr. Opin. Psychol. 24, 83–91. doi: 10.1016/j.copsyc.2018.10.003

Girvan, M., and Newman, M. E. (2002). Community structure in social and biological networks. Proc. Natl. Acad. Sci. U.S.A. 99, 7821–7826. doi: 10.1073/pnas.122653799

Gonzalez-Castillo, J., Hoy, C. W., Handwerker, D. A., Robinson, M. E., Buchanan, L. C., Saad, Z. S., et al. (2015). Tracking ongoing cognition in individuals using brief, whole-brain functional connectivity patterns. Proc. Natl. Acad. Sci. U.S.A. 112, 8762–8767. doi: 10.1073/pnas.1501242112

Gorgolewski, K. J., Varoquaux, G., Rivera, G., Schwarz, Y., Ghosh, S. S., Maumet, C., et al. (2015). NeuroVault.org: a web-based repository for collecting and sharing unthresholded statistical maps of the human brain. Front. Neuroinform. 9:8. doi: 10.3389/fninf.2015.00008

Gramfort, A., Luessi, M., Larson, E., Engemann, D. A., Strohmeier, D., Brodbeck, C., et al. (2013). MEG and EEG data analysis with MNE-Python. Front. Neurosci. 7:267. doi: 10.3389/fnins.2013.00267

Gramfort, A., Luessi, M., Larson, E., Engemann, D. A., Strohmeier, D., Brodbeck, C., et al. (2014). MNE software for processing MEG and EEG data. Neuroimage 86, 446–460. doi: 10.1016/j.neuroimage.2013.10.027

Guimera, R., and Amaral, L. A. N. (2005). Cartography of complex networks: modules and universal roles. J. Stat. Mech. Theory Exp. 2005:02001. doi: 10.1088/1742-5468/2005/02/P02001

Hamalainen, M. S., and Sarvas, J. (1989). Realistic conductivity geometry model of the human head for interpretation of neuromagnetic data. IEEE Trans. Biomed. Eng. 36, 165–171. doi: 10.1109/10.16463

Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., and Pietrini, P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425–2430.

He, T., Kong, R., Holmes, A. J., Nguyen, M., Sabuncu, M. R., Eickhoff, S. B., et al. (2020). Deep neural networks and kernel regression achieve comparable accuracies for functional connectivity prediction of behavior and demographics. NeuroImage 206:116276. doi: 10.1016/j.neuroimage.2019.116276

He, T., Kong, R., Holmes, A. J., Sabuncu, M. R., Eickhoff, S. B., Bzdok, D., et al. (2018). “Is deep learning better than kernel regression for functional connectivity prediction of fluid intelligence?,” in Proceedings of the 2018 International Workshop on Pattern Recognition in Neuroimaging (PRNI), (Piscataway, NJ: IEEE), 1–4.

Hinton, G. E. (2010). Learning to represent visual input. Philos. Trans. R. Soc. B Biol. Sci. 365, 177–184. doi: 10.1098/rstb.2009.0200

Honey, C. J., Sporns, O., Cammoun, L., Gigandet, X., Thiran, J. P., Meuli, R., et al. (2009). Predicting human resting-state functional connectivity from structural connectivity. Proc. Natl. Acad. Sci. U.S.A. 106, 2035–2040. doi: 10.1073/pnas.0811168106

Hudetz, A. G., Liu, X., and Pillay, S. (2015). Dynamic repertoire of intrinsic brain states is reduced in propofol-induced unconsciousness. Brain Connect. 5, 10–22. doi: 10.1089/brain.2014.0230

Hutchison, R. M., Hutchison, M., Manning, K. Y., Menon, R. S., and Everling, S. (2014). Isoflurane induces dose-dependent alterations in the cortical connectivity profiles and dynamic properties of the brain’s functional architecture. Hum. Brain Mapp. 35, 5754–5775. doi: 10.1002/hbm.22583

Hutchison, R. M., and Morton, J. B. (2015). Tracking the brain’s functional coupling dynamics over development. J. Neurosci. 35, 6849–6859. doi: 10.1523/JNEUROSCI.4638-14.2015

Iturria-Medina, Y., Sotero, R. C., Canales-Rodriguez, E. J., Aleman-Gomez, Y., and Melie-Garcia, L. (2008). Studying the human brain anatomical network via diffusion-weighted MRI and graph theory. Neuroimage 40, 1064–1076. doi: 10.1016/j.neuroimage.2007.10.060

Kamnitsas, K., Bai, W., Ferrante, E., Mcdonagh, S., Sinclair, M., Pawlowski, N., et al. (2017). “Ensembles of multiple models and architectures for robust brain tumour segmentation,” in Proceedings of the International MICCAI Brainlesion Workshop, (Cham: Springer), 450–462.

Kawahara, J., Brown, C. J., Miller, S. P., Booth, B. G., Chau, V., Grunau, R. E., et al. (2017). BrainNetCNN: convolutional neural networks for brain networks; towards predicting neurodevelopment. NeuroImage 146, 1038–1049. doi: 10.1016/j.neuroimage.2016.09.046

Khaligh-Razavi, S.-M., and Kriegeskorte, N. (2014). Deep supervised, but not unsupervised, models may explain IT cortical representation. PLoS Comput. Biol. 10:e1003915. doi: 10.1371/journal.pcbi.1003915

Khosla, M., Jamison, K., Ngo, G. H., Kuceyeski, A., and Sabuncu, M. R. (2019). Machine learning in resting-state fMRI analysis. Magn. Reson. Imaging 64, 101–121.

Khosla, M., Ngo, G. H., Jamison, K., Kuceyeski, A., and Sabuncu, M. R. (2021). Cortical response to naturalistic stimuli is largely predictable with deep neural networks. Sci. Adv. 7:eabe7547. doi: 10.1126/sciadv.abe7547

Kim, S. Y., Liu, M., Hong, S. J., Toga, A. W., Barkovich, A. J., Xu, D., et al. (2020). Disruption and compensation of sulcation-based covariance networks in neonatal brain growth after perinatal injury. Cereb. Cortex 30, 6238–6253. doi: 10.1093/cercor/bhaa181

King, J. R., and Dehaene, S. (2014). Characterizing the dynamics of mental representations: the temporal generalization method. Trends Cogn. Sci. 18, 203–210.

Klingberg, T., O’sullivan, B. T., and Roland, P. E. (1997). Bilateral activation of fronto-parietal networks by incrementing demand in a working memory task. Cereb. Cortex 7, 465–471. doi: 10.1093/cercor/7.5.465

Kobelt, M., Sommer, V. R., Keresztes, A., Werkle-Bergner, M., and Sander, M. C. (2021). Tracking age differences in neural distinctiveness across representational levels. J. Neurosci. 41, 3499–3511. doi: 10.1523/JNEUROSCI.2038-20.2021

Kriegeskorte, N., and Kievit, R. A. (2013). Representational geometry: integrating cognition, computation, and the brain. Trends Cogn. Sci. 17, 401–412. doi: 10.1016/j.tics.2013.06.007

Kriegeskorte, N., Mur, M., and Bandettini, P. (2008). Representational similarity analysis - connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2:4. doi: 10.3389/neuro.06.004.2008

Ktena, S. I., Parisot, S., Ferrante, E., Rajchl, M., Lee, M., Glocker, B., et al. (2018). Metric learning with spectral graph convolutions on brain connectivity networks. Neuroimage 169, 431–442. doi: 10.1016/j.neuroimage.2017.12.052

Kucyi, A., and Davis, K. D. (2014). Dynamic functional connectivity of the default mode network tracks daydreaming. Neuroimage 100, 471–480. doi: 10.1016/j.neuroimage.2014.06.044

Lachaux, J. P., Rodriguez, E., Martinerie, J., and Varela, F. J. (1999). Measuring phase synchrony in brain signals. Hum. Brain Mapp. 8, 194–208.

Laird, A. R., Eickhoff, S. B., Rottschy, C., Bzdok, D., Ray, K. L., and Fox, P. T. (2013). Networks of task co-activations. Neuroimage 80, 505–514. doi: 10.1016/j.neuroimage.2013.04.073

Laird, A. R., Lancaster, J. L., and Fox, P. T. (2005). BrainMap: the social evolution of a human brain mapping database. Neuroinformatics 3, 65–78. doi: 10.1385/ni:3:1:065

Langdon, A. J., and Chaudhuri, R. (2021). An evolving perspective on the dynamic brain: Notes from the Brain Conference on Dynamics of the brain: Temporal aspects of computation. Eur. J. Neurosci., 53, 3511.

Laufs, H., Krakow, K., Sterzer, P., Eger, E., Beyerle, A., Salek-Haddadi, A., et al. (2003). Electroencephalographic signatures of attentional and cognitive default modes in spontaneous brain activity fluctuations at rest. Proc. Natl. Acad. Sci. U.S.A. 100, 11053–11058. doi: 10.1073/pnas.1831638100

Lecun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature 521, 436–444.

Li, C., Zheng, J., Wang, J., Gui, L., and Li, C. (2009). An fMRI stroop task study of prefrontal cortical function in normal aging, mild cognitive impairment, and Alzheimer’s disease. Curr. Alzheimer Res. 6, 525–530. doi: 10.2174/156720509790147142

Li, H., Satterthwaite, T. D., and Fan, Y. (2018). “Brain age prediction based on resting-state functional connectivity patterns using convolutional neural networks,” in Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), (Piscataway, NJ: IEEE), 101–104. doi: 10.1109/ISBI.2018.8363532

Lieberman, M. D., Burns, S. M., Torre, J. B., and Eisenberger, N. I. (2016). Reply to Wager, et al.: pain and the dACC: the importance of hit rate-adjusted effects and posterior probabilities with fair priors. Proc. Natl. Acad. Sci. U.S.A. 113, E2476–E2479. doi: 10.1073/pnas.1603186113

Lieberman, M. D., and Eisenberger, N. I. (2015). The dorsal anterior cingulate cortex is selective for pain: results from large-scale reverse inference. Proc. Natl. Acad. Sci. U.S.A. 112, 15250–15255. doi: 10.1073/pnas.1515083112

Lin, F. H., Belliveau, J. W., Dale, A. M., and Hämäläinen, M. S. (2006). Distributed current estimates using cortical orientation constraints. Hum. Brain Mapp. 27, 1–13. doi: 10.1002/hbm.20155

Lindquist, M. A., Xu, Y., Nebel, M. B., and Caffo, B. S. (2014). Evaluating dynamic bivariate correlations in resting-state fMRI: a comparison study and a new approach. Neuroimage 101, 531–546. doi: 10.1016/j.neuroimage.2014.06.052

Liu, M., Amey, R. C., and Forbes, C. E. (2017). On the role of situational stressors in the disruption of global neural network stability during problem solving. J. Cogn. Neurosci. 29, 2037–2053. doi: 10.1162/jocn_a_01178

Liu, M., Backer, R. A., Amey, R. C., and Forbes, C. E. (2021a). How the brain negotiates divergent executive processing demands: evidence of network reorganization in fleeting brain states. NeuroImage 245:118653. doi: 10.1016/j.neuroimage.2021.118653

Liu, M., Backer, R. A., Amey, R. C., Splan, E. E., Magerman, A., and Forbes, C. E. (2021b). Context matters: situational stress impedes functional reorganization of intrinsic brain connectivity during problem-solving. Cereb. Cortex 31, 2111–2124. doi: 10.1093/cercor/bhaa349

Liu, M., Duffy, B. A., Sun, Z., Toga, A. W., Barkovich, A. J., Xu, D., et al. (2020). “Deep learning of cortical surface features using graph-convolution predicts neonatal brain age and neurodevelopmental outcome,” in Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), (Piscataway, NJ: IEEE), 1335–1338.

Liu, M., Kuo, C. C., and Chiu, A. W. (2011). Statistical threshold for nonlinear Granger Causality in motor intention analysis. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2011, 5036–5039. doi: 10.1109/IEMBS.2011.6091247

Liu, M. T., Kuo, C.-C., and Chiu, A. W. (2013). Non-linear Granger causality and its frequency decomposition in decoding human upper limb movement intentions. Int. J. Biomed. Eng. Technol. 34, 1–25.

Lorenz, R., Hampshire, A., and Leech, R. (2017). Neuroadaptive bayesian optimization and hypothesis testing. Trends Cogn. Sci. 21, 155–167. doi: 10.1016/j.tics.2017.01.006

Lorenz, R., Monti, R. P., Violante, I. R., Anagnostopoulos, C., Faisal, A. A., Montana, G., et al. (2016). The automatic neuroscientist: a framework for optimizing experimental design with closed-loop real-time fMRI. Neuroimage 129, 320–334. doi: 10.1016/j.neuroimage.2016.01.032

Lorenz, R., Violante, I. R., Monti, R. P., Montana, G., Hampshire, A., and Leech, R. (2018). Dissociating frontoparietal brain networks with neuroadaptive Bayesian optimization. Nat. Commun. 9:1227. doi: 10.1038/s41467-018-03657-3

Mack, M. L., Preston, A. R., and Love, B. C. (2013). Decoding the brain’s algorithm for categorization from its neural implementation. Curr. Biol. 23, 2023–2027. doi: 10.1016/j.cub.2013.08.035

Mantini, D., Perrucci, M. G., Del Gratta, C., Romani, G. L., and Corbetta, M. (2007). Electrophysiological signatures of resting state networks in the human brain. Proc. Natl. Acad. Sci. U.S.A. 104, 13170–13175. doi: 10.1073/pnas.0700668104

Marusak, H., Thomason, M., Peters, C., Zundel, C., Elrahal, F., and Rabinak, C. (2016). You say ‘prefrontal cortex’ and I say ‘anterior cingulate’: meta-analysis of spatial overlap in amygdala-to-prefrontal connectivity and internalizing symptomology. Transl. Psychiatry 6:e944. doi: 10.1038/tp.2016.218

McDaniel, C., and Quinn, S. (2019). “Developing a graph convolution-based analysis pipeline for multi-modal neuroimage data: an application to Parkinson’s disease,” in Proceedings of the 18th Python in Science Conference (SciPy 2019), Austin, TX, 42–49.

Mill, R. D., Ito, T., and Cole, M. W. (2017). From connectome to cognition: the search for mechanism in human functional brain networks. Neuroimage 160, 124–139. doi: 10.1016/j.neuroimage.2017.01.060

Muller, V. I., Cieslik, E. C., Laird, A. R., Fox, P. T., Radua, J., Mataix-Cols, D., et al. (2018). Ten simple rules for neuroimaging meta-analysis. Neurosci. Biobehav. Rev. 84, 151–161. doi: 10.1016/j.neubiorev.2017.11.012

Nguyen, H., Kieu, L.-M., Wen, T., and Cai, C. (2018). Deep learning methods in transportation domain: a review. IET Intell. Transp. Syst. 12, 998–1004.

Niepert, M., Ahmed, M., and Kutzkov, K. (2016). “Learning convolutional neural networks for graphs,” in Proceedings of the International Conference on Machine Learning (PMLR), 2014–2023.

Nolan, H., Whelan, R., and Reilly, R. B. (2010). FASTER: fully automated statistical thresholding for EEG artifact rejection. J. Neurosci. Methods 192, 152–162. doi: 10.1016/j.jneumeth.2010.07.015

Norman, K. A., Polyn, S. M., Detre, G. J., and Haxby, J. V. (2006). Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends Cogn. Sci. 10, 424–430. doi: 10.1016/j.tics.2006.07.005

Ou, J., Xie, L., Wang, P., Li, X., Zhu, D., Jiang, R., et al. (2013). “Modeling brain functional dynamics via hidden Markov models,” in Proceedings of the 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), (Piscataway, NJ: IEEE), 569–572.

Papo, D., Buldú, J. M., Boccaletti, S., and Bullmore, E. T. (2014). Complex Network Theory and the Brain. London: The Royal Society.

Pinto, M. M., Libonati, R., Trigo, R. M., Trigo, I. F., and Dacamara, C. C. (2020). A deep learning approach for mapping and dating burned areas using temporal sequences of satellite images. ISPRS J. Photogrammetry Remote Sens. 160, 260–274.

Plis, S. M., Hjelm, D. R., Salakhutdinov, R., Allen, E. A., Bockholt, H. J., Long, J. D., et al. (2014). Deep learning for neuroimaging: a validation study. Front. Neurosci. 8:229. doi: 10.3389/fnins.2014.00229

Poldrack, R. A. (2010). Mapping mental function to brain structure: how can cognitive neuroimaging succeed? Perspect. Psychol. Sci. 5, 753–761. doi: 10.1177/1745691610388777

Power, J. D., Cohen, A. L., Nelson, S. M., Wig, G. S., Barnes, K. A., Church, J. A., et al. (2011). Functional network organization of the human brain. Neuron 72, 665–678.

Preti, M. G., Bolton, T. A., and Van De Ville, D. (2017). The dynamic functional connectome: state-of-the-art and perspectives. Neuroimage 160, 41–54. doi: 10.1016/j.neuroimage.2016.12.061

Rabiner, L. R., Wilpon, J. G., and Soong, F. K. (1989). High performance connected digit recognition using hidden Markov models. IEEE Trans. Acoust. Speech Signal Process. 37, 1214–1225.

Rashid, B., Damaraju, E., Pearlson, G. D., and Calhoun, V. D. (2014). Dynamic connectivity states estimated from resting fMRI identify differences among schizophrenia, bipolar disorder, and healthy control subjects. Front. Hum. Neurosci. 8:897. doi: 10.3389/fnhum.2014.00897

Raviprakash, H., Watane, A., Jambawalikar, S., and Bagci, U. (2019). “Deep learning for functional brain connectivity: are we there yet?,” in Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics, eds L. Lu, X. Wang, G. Carneiro, and L. Yang (Cham: Springer), 347–365.

Reinen, J. M., Chén, O. Y., Hutchison, R. M., Yeo, B. T., Anderson, K. M., Sabuncu, M. R., et al. (2018). The human cortex possesses a reconfigurable dynamic network architecture that is disrupted in psychosis. Nat. Commun. 9, 1–15. doi: 10.1038/s41467-018-03462-y

Robbins, S., Evans, A. C., Collins, D. L., and Whitesides, S. (2004). Tuning and comparing spatial normalization methods. Med. Image Anal. 8, 311–323. doi: 10.1016/j.media.2004.06.009

Rosenberg, M. D., Finn, E. S., Scheinost, D., Papademetris, X., Shen, X., Constable, R. T., et al. (2016). A neuromarker of sustained attention from whole-brain functional connectivity. Nat. Neurosci. 19, 165–171. doi: 10.1038/nn.4179

Rosenberg, M. D., Finn, E. S., Constable, R. T., and Chun, M. M. (2015). Predicting moment-to-moment attentional state. Neuroimage 114, 249–256. doi: 10.1016/j.neuroimage.2015.03.032

Rosenberg, M. D., Finn, E. S., Scheinost, D., Constable, R. T., and Chun, M. M. (2017). Characterizing attention with predictive network models. Trends Cogn. Sci. 21, 290–302. doi: 10.1016/j.tics.2017.01.011

Rubinov, M., and Sporns, O. (2010). Complex network measures of brain connectivity: uses and interpretations. Neuroimage 52, 1059–1069. doi: 10.1016/j.neuroimage.2009.10.003

Salazar, R., Dotson, N., Bressler, S., and Gray, C. (2012). Content-specific fronto-parietal synchronization during visual working memory. Science 338, 1097–1100. doi: 10.1126/science.1224000

Schmidhuber, J. (2015). Deep learning in neural networks: an overview. Neural Netw. 61, 85–117. doi: 10.1016/j.neunet.2014.09.003

Schwartz, F., Zhang, Y., Chang, H., Karraker, S., Kang, J. B., and Menon, V. (2021). Neural representational similarity between symbolic and non-symbolic quantities predicts arithmetic skills in childhood but not adolescence. Dev. Sci. 24:e13123. doi: 10.1111/desc.13123

Shakil, S., Lee, C. H., and Keilholz, S. D. (2016). Evaluation of sliding window correlation performance for characterizing dynamic functional connectivity and brain states. Neuroimage 133, 111–128. doi: 10.1016/j.neuroimage.2016.02.074

Shen, X., Finn, E. S., Scheinost, D., Rosenberg, M. D., Chun, M. M., Papademetris, X., et al. (2017). Using connectome-based predictive modeling to predict individual behavior from brain connectivity. Nat. Protoc. 12, 506–518. doi: 10.1038/nprot.2016.178

Shine, J. M., Bissett, P. G., Bell, P. T., Koyejo, O., Balsters, J. H., Gorgolewski, K. J., et al. (2016). The dynamics of functional brain networks: integrated network states during cognitive task performance. Neuron 92, 544–554. doi: 10.1016/j.neuron.2016.09.018

Shuman, D. I., Narang, S. K., Frossard, P., Ortega, A., and Vandergheynst, P. (2013). The emerging field of signal processing on graphs: extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Process. Mag. 30, 83–98.

Smith, S. M., Fox, P. T., Miller, K. L., Glahn, D. C., Fox, P. M., Mackay, C. E., et al. (2009). Correspondence of the brain’s functional architecture during activation and rest. Proc. Natl. Acad. Sci. U.S.A. 106, 13040–13045. doi: 10.1073/pnas.0905267106

Stanley, M. L., Simpson, S. L., Dagenbach, D., Lyday, R. G., Burdette, J. H., and Laurienti, P. J. (2015). Changes in brain network efficiency and working memory performance in aging. PLoS One 10:e0123950. doi: 10.1371/journal.pone.0123950

Stokes, M. G., Kusunoki, M., Sigala, N., Nili, H., Gaffan, D., and Duncan, J. (2013). Dynamic coding for cognitive control in prefrontal cortex. Neuron 78, 364–375. doi: 10.1016/j.neuron.2013.01.039

Su, P.-H., Gasic, M., Mrksic, N., Rojas-Barahona, L., Ultes, S., Vandyke, D., et al. (2016). Continuously learning neural dialogue management. arXiv [Preprint]. Available online at: https://arxiv.org/abs/1606.02689. (accessed December 2020).

Taghia, J., Cai, W., Ryali, S., Kochalka, J., Nicholas, J., Chen, T., et al. (2018). Uncovering hidden brain state dynamics that regulate performance and decision-making during cognition. Nat. Commun. 9, 1–19. doi: 10.1038/s41467-018-04723-6

Tavor, I., Jones, O. P., Mars, R. B., Smith, S., Behrens, T., and Jbabdi, S. (2016). Task-free MRI predicts individual differences in brain activity during task performance. Science 352, 216–220. doi: 10.1126/science.aad8127

Thompson, W. H., and Fransson, P. (2017). Spatial confluence of psychological and anatomical network constructs in the human brain revealed by a mass meta-analysis of fMRI activation. Sci. Rep. 7, 1–11. doi: 10.1038/srep44259

Tomescu, M. I., Rihs, T. A., Becker, R., Britz, J., Custo, A., Grouiller, F., et al. (2014). Deviant dynamics of EEG resting state pattern in 22q11.2 deletion syndrome adolescents: a vulnerability marker of schizophrenia? Schizophr. Res. 157, 175–181. doi: 10.1016/j.schres.2014.05.036

Van Den Heuvel, M. P., Stam, C. J., Kahn, R. S., and Pol, H. E. H. (2009). Efficiency of functional brain networks and intellectual performance. J. Neurosci. 29, 7619–7624. doi: 10.1523/JNEUROSCI.1443-09.2009

Van Der Burgh, H. K., Schmidt, R., Westeneng, H. J., De Reus, M. A., Van Den Berg, L. H., and Van Den Heuvel, M. P. (2017). Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis. Neuroimage Clin. 13, 361–369. doi: 10.1016/j.nicl.2016.10.008

Vedhara, K., Miles, J., Bennett, P., Plummer, S., Tallon, D., Brooks, E., et al. (2003). An investigation into the relationship between salivary cortisol, stress, anxiety and depression. Biol. Psychol. 62, 89–96. doi: 10.1016/s0301-0511(02)00128-x

Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y. (2017). Graph attention networks. arXiv [Preprint]. Available online at: https://arxiv.org/abs/1710.10903. (accessed December 2020).

Vidaurre, D., Abeysuriya, R., Becker, R., Quinn, A. J., Alfaro-Almagro, F., Smith, S. M., et al. (2018). Discovering dynamic brain networks from big data in rest and task. Neuroimage 180, 646–656. doi: 10.1016/j.neuroimage.2017.06.077

Vidaurre, D., Quinn, A. J., Baker, A. P., Dupret, D., Tejero-Cantero, A., and Woolrich, M. W. (2016). Spectrally resolved fast transient brain states in electrophysiological data. Neuroimage 126, 81–95. doi: 10.1016/j.neuroimage.2015.11.047

Vidaurre, D., Smith, S. M., and Woolrich, M. W. (2017). Brain network dynamics are hierarchically organized in time. Proc. Natl. Acad. Sci. U.S.A. 114, 12827–12832. doi: 10.1073/pnas.1705120114

Wager, T. D., Atlas, L. Y., Botvinick, M. M., Chang, L. J., Coghill, R. C., Davis, K. D., et al. (2016). Pain in the ACC? Proc. Natl. Acad. Sci. U.S.A. 113, E2474–E2475.

Wang, Z., Li, Y., Childress, A. R., and Detre, J. A. (2014). Brain entropy mapping using fMRI. PLoS One 9:e89948. doi: 10.1371/journal.pone.0089948

Wei, J., Chen, T., Li, C., Liu, G., Qiu, J., and Wei, D. (2018). Eyes-open and eyes-closed resting states with opposite brain activity in sensorimotor and occipital regions: multidimensional evidences from machine learning perspective. Front. Hum. Neurosci. 12:422. doi: 10.3389/fnhum.2018.00422

Wickens, T. D. (2001). Elementary Signal Detection Theory. Oxford: Oxford University Press.

Wu, Z., Shen, C., and Van Den Hengel, A. (2019). Wider or deeper: revisiting the ResNet model for visual recognition. Pattern Recogn. 90, 119–133.

Xu, F., Liu, M., Kim, S. Y., Ge, X., Zhang, Z., Tang, Y., et al. (2021). Morphological development trajectory and structural covariance network of the human fetal cortical plate during the early second trimester. Cereb. Cortex 31, 4794–4807. doi: 10.1093/cercor/bhab123

Xue, G., Dong, Q., Chen, C., Lu, Z., Mumford, J. A., and Poldrack, R. A. (2010). Greater neural pattern similarity across repetitions is associated with better memory. Science 330, 97–101.

Yaesoubi, M., Allen, E. A., Miller, R. L., and Calhoun, V. D. (2015). Dynamic coherence analysis of resting fMRI data to jointly capture state-based phase, frequency, and time-domain information. Neuroimage 120, 133–142. doi: 10.1016/j.neuroimage.2015.07.002

Yamins, D. L., Hong, H., Cadieu, C. F., Solomon, E. A., Seibert, D., and Dicarlo, J. J. (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc. Natl. Acad. Sci. U.S.A. 111, 8619–8624. doi: 10.1073/pnas.1403112111

Yarkoni, T., Poldrack, R. A., Nichols, T. E., Van Essen, D. C., and Wager, T. D. (2011). Large-scale automated synthesis of human functional neuroimaging data. Nat. Methods 8, 665–670. doi: 10.1038/nmeth.1635

Yeo, B. T., Krienen, F. M., Sepulcre, J., Sabuncu, M. R., Lashkari, D., Hollinshead, M., et al. (2011). The organization of the human cerebral cortex estimated by intrinsic functional connectivity. J. Neurophysiol. 106, 1125–1165.

Zalesky, A., Fornito, A., Cocchi, L., Gollo, L. L., and Breakspear, M. (2014). Time-resolved resting-state brain networks. Proc. Natl. Acad. Sci. U.S.A. 111, 10341–10346. doi: 10.1073/pnas.1400181111

Zhao, T., and Mukhopadhyay, P. (2018). “A fault detection workflow using deep learning and image processing,” in Proceedings of the 2018 SEG International Exposition and Annual Meeting (OnePetro).

Keywords: neuroscience, graph theory, data driven, machine learning, neural network

Citation: Liu M, Amey RC, Backer RA, Simon JP and Forbes CE (2022) Behavioral Studies Using Large-Scale Brain Networks – Methods and Validations. Front. Hum. Neurosci. 16:875201. doi: 10.3389/fnhum.2022.875201

Received: 13 February 2022; Accepted: 17 May 2022;
Published: 16 June 2022.

Edited by: Mingzhou Ding, University of Florida, United States

Reviewed by: Jelmer Pieter Borst, University of Groningen, Netherlands; Zonglei Zhen, Beijing Normal University, China

Copyright © 2022 Liu, Amey, Backer, Simon and Forbes. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Rachel C. Amey, ramey.ameyc@gmail.com; Mengting Liu, liumt55@mail.sysu.edu.cn
