<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Syst. Neurosci.</journal-id>
<journal-title>Frontiers in Systems Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Syst. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5137</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnsys.2022.904770</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Detection of autism spectrum disorder using graph representation learning algorithms and deep neural network, based on fMRI signals</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Yousefian</surname> <given-names>Ali</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/2154919/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Shayegh</surname> <given-names>Farzaneh</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/657529/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Maleki</surname> <given-names>Zeinab</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/2169488/overview"/>
</contrib>
</contrib-group>
<aff><institution>Department of Electrical and Computer Engineering, Isfahan University of Technology</institution>, <addr-line>Isfahan</addr-line>, <country>Iran</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Javier Ram&#x00ED;rez, University of Granada, Spain</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Shijie Zhao, Northwestern Polytechnical University, China; Mohammed Isam Al-Hiyali, Universiti Teknologi PETRONAS, Malaysia</p></fn>
<corresp id="c001">&#x002A;Correspondence: Farzaneh Shayegh, <email>f.shayegh@cc.iut.ac.ir</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>02</day>
<month>02</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>16</volume>
<elocation-id>904770</elocation-id>
<history>
<date date-type="received">
<day>25</day>
<month>03</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>28</day>
<month>12</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2023 Yousefian, Shayegh and Maleki.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Yousefian, Shayegh and Maleki</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<sec>
<title>Introduction</title>
<p>Can we apply graph representation learning algorithms to identify autism spectrum disorder (ASD) patients within a large brain imaging dataset? ASD is mainly identified by brain functional connectivity patterns, so attempts to unveil the common neural patterns that emerge in ASD are the essence of ASD classification. We claim that graph representation learning methods can extract the connectivity patterns of the brain appropriately, in such a way that the method generalizes across recording conditions and subjects&#x2019; phenotypical information. These methods can capture the whole structure of the brain network, both its local and its global properties.</p>
</sec>
<sec>
<title>Methods</title>
<p>The investigation is conducted on the worldwide multi-site brain imaging database known as ABIDE I and II (Autism Brain Imaging Data Exchange). Among different graph representation techniques, we used AWE, Node2vec, Struc2vec, multi-node2vec, and Graph2Img. The best approach was Graph2Img, in which, after extracting the feature vectors representative of the brain nodes, the PCA algorithm is applied to the matrix of feature vectors. The classifier adapted to the features embedded in graphs is a LeNet deep neural network.</p>
</sec>
<sec>
<title>Results and discussion</title>
<p>Although we could not outperform the previous 10-fold cross-validation accuracy in discriminating ASD patients from controls on this dataset, for leave-one-site-out cross-validation we obtained better results (our accuracy: 80%). We conclude that graph embedding methods can render the connectivity matrix more suitable as input to a deep network.</p>
</sec>
</abstract>
<kwd-group>
<kwd>autism spectrum disorder</kwd>
<kwd>connectivity</kwd>
<kwd>graph representation learning methods</kwd>
<kwd>AWE</kwd>
<kwd>Graph2Img</kwd>
<kwd>deep neural network (DNN)</kwd>
</kwd-group>
<counts>
<fig-count count="6"/>
<table-count count="5"/>
<equation-count count="4"/>
<ref-count count="71"/>
<page-count count="15"/>
<word-count count="10792"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>1. Introduction</title>
<p>Autism spectrum disorder (ASD) is a set of clinical presentations arising from a neurodevelopmental disorder. ASD symptoms are related to social communication, imagination, and behavior. Accurate and timely diagnosis of ASD significantly improves the quality of life of individuals with ASD (<xref ref-type="bibr" rid="B12">Elder et al., 2017</xref>). Yet, there is still no clear etiological basis on which to diagnose ASD.</p>
<p>To date, ASD diagnosis is based on the behavioral characteristics of children, observed by parents and teachers at home or school (<xref ref-type="bibr" rid="B41">Nickel and Huang-Storms, 2017</xref>; <xref ref-type="bibr" rid="B2">Almuqhim and Saeed, 2021</xref>). Since autism is related to abnormal development of the brain, assessing brain function (e.g., based on fMRI) is at the forefront of automatic diagnosis and classification research. Resting-state functional MRI has a spatial resolution suitable for showing the interaction of brain regions during a specific behavior; in other words, a region&#x2019;s function is tightly dependent on its interactions. Various studies have reported differences in brain-network properties between healthy and ASD subjects. However, statistical analyses have led researchers to a disorder whose mechanisms vary among patients; in other words, no single finding can be announced as a reliable biomarker of ASD (<xref ref-type="bibr" rid="B16">Frye et al., 2019</xref>).</p>
<p>By considering brain regions and their connections as a network, the detection of ASD can alternatively be cast as a network classification task, in which machine learning techniques can help. To efficiently use the information hidden in resting-state fMRI, the connectivity measures obtained from it are useful for understanding the large-scale functional differences between healthy and abnormal brains.</p>
<p>After that, a suitable classifier should be used. A vast number of mental disorder diagnosis studies have used traditional classifiers, such as the support vector machine (SVM), LASSO, and Bayesian classifiers. But deep learning methods have shown a clear advantage in the case of the connectivity matrix, because it is a high-dimensional feature of brain activity. High-dimensional features increase the number of parameters a machine learning algorithm must fit, such that only deep neural networks can learn the complex structure of high-dimensional data. From this point of view, applying fully connected deep neural networks and convolutional networks to fMRI volumes and raw connectome data appears to be successful (<xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>; <xref ref-type="bibr" rid="B11">El Gazzar et al., 2019</xref>; <xref ref-type="bibr" rid="B56">Sherkatghanad et al., 2020</xref>).</p>
<p>In <xref ref-type="bibr" rid="B22">Heinsfeld et al. (2018)</xref>, a deep learning algorithm using the full connectivity matrix was applied to classify ASD and controls using ABIDE data. They showed anterior&#x2013;posterior underconnectivity in the autistic brain and surpassed the state-of-the-art classification of autism by achieving 70% accuracy. Similarly, a convolutional neural network (CNN) was used to effectively diagnose Alzheimer&#x2019;s disease (AD) (<xref ref-type="bibr" rid="B54">Sarraf et al., 2016</xref>) and mild cognitive impairment (MCI) (<xref ref-type="bibr" rid="B40">Meszl&#x00E9;nyi et al., 2017</xref>). In another study, a CNN was used to extract features from fMRI data, and an SVM was used for classification (<xref ref-type="bibr" rid="B42">Nie et al., 2016</xref>). A deep autoencoder was used to classify the fMRI data of MCI (<xref ref-type="bibr" rid="B59">Suk et al., 2016</xref>). Furthermore, different hidden layers between the encoder and the decoder (<xref ref-type="bibr" rid="B45">Patel et al., 2016</xref>) were added to support different tasks, such as denoising (<xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>) or generating sparse features (<xref ref-type="bibr" rid="B19">Guo et al., 2017</xref>). Other networks, such as the radial basis function network (RBFN) (<xref ref-type="bibr" rid="B64">Vigneshwaran et al., 2015</xref>), restricted Boltzmann machine (RBM) (<xref ref-type="bibr" rid="B23">Huang et al., 2016</xref>), and deep Boltzmann machine (DBM), can be used to extract features from fMRI data because they can combine the information of different voxels of the region of interest (<xref ref-type="bibr" rid="B71">Zafar et al., 2017</xref>). To take advantage of the topological information implied in the connectivity graph, a restricted path-based depth-first search (RP-DFS) algorithm was applied to some salient autistic functional connections (<xref ref-type="bibr" rid="B24">Huang et al., 2020</xref>). Finally, a three-layer deep belief network (DBN) model with an automatic hyperparameter-tuning technique was applied for classification. To date, this work has achieved the most accurate ASD/healthy classification result for the ABIDE dataset (76.4% accuracy).</p>
<p>However, to obtain more reliable results, dynamic and/or multimodal features have been proposed. As an example, a CNN with wavelet-based spectrograms as input (instead of static connectivity matrices), taking the dynamics of brain activity into account, achieved a notable improvement in classification accuracy (<xref ref-type="bibr" rid="B1">Al-Hiyali et al., 2021</xref>); however, only 144 subjects of the ABIDE database were used in that evaluation. Furthermore, a novel adversarial learning-based node&#x2013;edge graph attention network (AL-NEGAT) was used to combine fMRI and structural MRI information (<xref ref-type="bibr" rid="B6">Chen et al., 2022</xref>) and obtained 74.7% accuracy, but this method could not reach a good result in leave-one-site-out validation (69.42%).</p>
<p>On the other hand, the benefit of DNN is mainly due to a large number of training examples (<xref ref-type="bibr" rid="B33">Kuang et al., 2014</xref>; <xref ref-type="bibr" rid="B29">Kim et al., 2016</xref>; <xref ref-type="bibr" rid="B19">Guo et al., 2017</xref>; <xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>). Developing deep learning approaches to work with functional connectivity (FC) features using small or at best modest sample sizes of neurological data (<xref ref-type="bibr" rid="B9">di Martino et al., 2014</xref>; <xref ref-type="bibr" rid="B33">Kuang et al., 2014</xref>; <xref ref-type="bibr" rid="B29">Kim et al., 2016</xref>; <xref ref-type="bibr" rid="B19">Guo et al., 2017</xref>; <xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>) is debatable from the reproducibility and generalizability point of view. One solution is the deep transfer learning neural network (DTL-NN) approach that could achieve improved performance in classification for neurological conditions (70.4% for ASD detection), especially where there are no large neuroimaging datasets available (<xref ref-type="bibr" rid="B37">Li et al., 2018</xref>). Other solutions are the Synthetic Minority Oversampling Technique (SMOTE) to perform data augmentation to generate artificial data and avoid overfitting (<xref ref-type="bibr" rid="B13">Eslami and Saeed, 2019</xref>) and sparse autoencoder (SAENet) that was used for classifying patients with ASD from typical control subjects using fMRI data (70.8% accuracy and 79.1% specificity) for the whole dataset as compared to other methods (<xref ref-type="bibr" rid="B2">Almuqhim and Saeed, 2021</xref>). Another approach is to develop a machine learning approach with a robust training methodology (<xref ref-type="bibr" rid="B37">Li et al., 2018</xref>). 
Machine learning algorithms that can extract replicable and robust neural patterns from the brain imaging data of patients with ASD achieve suitable classification results (<xref ref-type="bibr" rid="B48">Pereira et al., 2009</xref>).</p>
<p>Another solution in studies with limited sample sizes is to reduce the size of the features indicating useful connectivity properties by network analysis methods. The ease with which brain connectivity information can be represented according to graph theory makes graph-theoretical tools very valuable in this area. Machine learning on graphs finds its importance here: finding a way to represent or encode graph structure is the subject of this task. Nowadays, in order to model the information underlying the graph structure, there are new ways of representing and analyzing graphs that can handle the complexity of working with big graphs. These representation algorithms are referred to as embeddings, and applying them to brain networks is named connectome embedding (CE). These embedding algorithms involve converting graphs into vectors. Network embedding techniques can be divided into three buckets: (1) based on engineered graph features, (2) obtained by training on graph data, and (3) obtained by a layer of a deep network. The main drawback of the first bucket is that structural homologies or higher-order relations of the connectivity matrix cannot be captured (<xref ref-type="bibr" rid="B51">Rosenthal et al., 2018</xref>). Furthermore, these features are not flexible; <italic>i.e.</italic>, they cannot adapt during the learning procedure. In summary, many of these local and global features cannot capture the topological shape of the graph, unless the morphology of the cortex is also considered (<xref ref-type="bibr" rid="B21">He et al., 2022</xref>).</p>
<p>In the second bucket, referred to as shallow embedding, network embedding vectors are learned by optimizing objective functions defined over a mapping that reflects the geometric information of the graph data; the optimal embedded space yields the final feature vectors. These algorithms involve learning approaches that map nodes to an embedding space. Anonymous Walk Embedding (AWE), Node2vec, Struc2vec, DeepWalk, multi-node2vec, and Graph2Img (<xref ref-type="bibr" rid="B18">Grover and Leskovec, 2016</xref>; <xref ref-type="bibr" rid="B50">Ribeiro et al., 2017</xref>) are some well-known algorithms of this bucket. These methods represent higher-order features of the connections of a graph, which is helpful for building an input convenient for training a CNN. As an example, in the Graph2Img method, the embedded space of the brain network is transformed into an image. The advantage of this method is that the dimensionality of this image can be reduced by an algorithm such as PCA while still leaving an image at hand (<xref ref-type="bibr" rid="B39">Meng and Xiang, 2018</xref>). Multi-node2vec was applied to fMRI scans of a group of 74 healthy individuals and identified nodal characteristics that are closely associated with the functional organization of the brain (<xref ref-type="bibr" rid="B67">Wilson et al., 2018</xref>).</p>
<p>In the third bucket, referred to as deep embedding, CE and deep learning algorithms are combined to form a single deep network. This combined network can exploit the connectome topology. In this category, the Hypergraph U-Net (HUNet), Graph U-Net (GUNet) (<xref ref-type="bibr" rid="B17">Gao and Ji, 2019</xref>), and hypergraph neural network (HGNN) (<xref ref-type="bibr" rid="B14">Feng et al., 2019</xref>) have been proposed, in which low-dimensional embeddings of data samples are learned from the high-dimensional connectivity matrix. Indeed, these networks emerged as a subset of deep graph neural networks (GNNs) (<xref ref-type="bibr" rid="B65">Wang et al., 2016</xref>; <xref ref-type="bibr" rid="B31">Kipf and Welling, 2017</xref>; <xref ref-type="bibr" rid="B17">Gao and Ji, 2019</xref>), which are able to model the deeply nonlinear relationships among node connectomic features (<xref ref-type="bibr" rid="B32">Ktena et al., 2017</xref>; <xref ref-type="bibr" rid="B3">Banka and Rekik, 2019</xref>; <xref ref-type="bibr" rid="B4">Bessadok et al., 2019</xref>).</p>
<p>In summary, the accuracy of ASD classifiers using different algorithms ranges from 55 to 76.4% (<xref ref-type="bibr" rid="B44">Parisot et al., 2018</xref>; <xref ref-type="bibr" rid="B68">Xing et al., 2019</xref>; <xref ref-type="bibr" rid="B24">Huang et al., 2020</xref>; <xref ref-type="bibr" rid="B27">Kazeminejad and Sotero, 2020</xref>; <xref ref-type="bibr" rid="B55">Sharif and Khan, 2021</xref>; <xref ref-type="bibr" rid="B6">Chen et al., 2022</xref>). The main point is that the good performances reported in ASD classification on the ABIDE dataset were obtained by considering individual sites, for most traditional and deep machine learning algorithms (<xref ref-type="bibr" rid="B24">Huang et al., 2020</xref>; <xref ref-type="bibr" rid="B56">Sherkatghanad et al., 2020</xref>). Our main concern is that after intermingling all the sites, or under a leave-one-site-out cross-validation scheme, both accuracy (the percentage of correctly classified subjects) and the area under the ROC curve diminish. In other words, no existing algorithm is appropriate for clinical use. Thus, further experiments still need to be conducted with patients with different phenotypical information to ensure the clinical value of these methods (<xref ref-type="bibr" rid="B37">Li et al., 2018</xref>).</p>
<p>Our main goal in this paper is to demonstrate the role of the second bucket (CE methods) in representing the structure by which brain regions are connected to each other, and to assess its effect on ASD classification. In fact, we claim that representation-based features can solve the problem of the high-dimensional input of a deep network. Based on the public ABIDE I and ABIDE II datasets, recorded at several different sites, we investigate whether CE can surpass previous research studies. Accordingly, using CNN classifiers, we claim that there is great potential in combining graph representation methods with deep learning techniques for fMRI-based classification, to increase the generalization of the algorithm from one site to others.</p>
<p>The structure of the paper is as follows: after the network embedding techniques are described in Section &#x201C;2. Materials and methods,&#x201D; suitable embedding-based features are illustrated. In Section &#x201C;3. Implementation and results,&#x201D; the classification technique using the deep network is described. Afterward, the ABIDE database, its preprocessing methods, and the embedded features extracted from it are introduced. These features are applied to a deep network to detect ASD subjects. Evaluation measures such as the F-score and the accuracy of this classifier are reported in the Results section and compared with those of other studies of the ABIDE dataset.</p>
</sec>
<sec id="S2" sec-type="materials|methods">
<title>2. Materials and methods</title>
<sec id="S2.SS1">
<title>2.1. Network embedding methods</title>
<p>The concept of network embedding can be described as follows: suppose there is a graph <italic>G</italic> = (<italic>V</italic>,<italic>E</italic>,<italic>A</italic>) with <italic>V</italic> as the node set, <italic>E</italic> as the undirected and weighted edge set, and <italic>A</italic> as the adjacency matrix. We are going to find the optimum function <italic>z</italic> = <italic>f</italic>(<italic>v</italic>) &#x2208; <italic>R<sup>d</sup></italic> that maps each node or subgraph to a d-dimensional vector disclosing the structure of the graph. These vectors should be representative of the graph and can be used as the feature vectors uncovering the similarities of the graph for machine learning algorithms. At this level, each node corresponds to a d-dimensional embedded vector involving its connections with all other nodes (<xref ref-type="bibr" rid="B20">Hamilton William et al., 2017</xref>).</p>
<p>Indeed, these low-dimensional embedded vectors can summarize the position of nodes in the graph, the structure of their neighborhood, and user-specified graph statistics (<xref ref-type="bibr" rid="B20">Hamilton William et al., 2017</xref>). Most shallow embedding mappings are realized as a lookup table, just as in classic matrix factorization for dimensionality reduction (<xref ref-type="bibr" rid="B20">Hamilton William et al., 2017</xref>). For other shallow embedding techniques, learning the embedded vector for each node is the process of training an encoder&#x2013;decoder system, defined as an optimization problem. The decoder maps the similarity of two nodes into a real-valued similarity measure. Different techniques able to do this job (such as DeepWalk, Node2vec, AWE, TSNE, GraRep, and others) are based on a stream of randomly generated walks. The resultant vectors can describe similarities and subgraph membership with relatively few dimensions. These learned embedded vectors can be used as features of the graph.</p>
<p>The core of this relevant optimization problem is to find a mapping such that nearby nodes in short random walks have similar embedding vectors. The detail of random walk, DeepWalk, and Node2vec embedding methods is explained in <xref ref-type="supplementary-material" rid="DS1">Supplementary Appendix A</xref>.</p>
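<p>To make the random-walk idea concrete, the following minimal sketch (our own illustration, not the authors' implementation) generates DeepWalk-style uniform random walks over an adjacency matrix and embeds nodes by factorizing their walk co-occurrence counts; the SVD step stands in for the skip-gram training used by DeepWalk and Node2vec.</p>

```python
import numpy as np

def random_walks(adj, walk_len=10, walks_per_node=20, seed=0):
    # Generate uniform random walks over an adjacency matrix (DeepWalk-style).
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    walks = []
    for start in range(n):
        for _ in range(walks_per_node):
            walk = [start]
            for _ in range(walk_len - 1):
                nbrs = np.flatnonzero(adj[walk[-1]])
                if nbrs.size == 0:
                    break
                walk.append(int(rng.choice(nbrs)))
            walks.append(walk)
    return walks

def embed_nodes(adj, d=4, window=2, **kw):
    # Count node co-occurrences within a window along the walks, then
    # factorize the log-co-occurrence matrix with SVD. This stands in
    # for the skip-gram objective: nodes that co-occur in short walks
    # end up with nearby embedding vectors.
    n = adj.shape[0]
    cooc = np.zeros((n, n))
    for walk in random_walks(adj, **kw):
        for i, u in enumerate(walk):
            for v in walk[max(0, i - window):i + window + 1]:
                if u != v:
                    cooc[u, v] += 1.0
    m = np.log1p(cooc)
    u_svd, s, _ = np.linalg.svd(m)
    return u_svd[:, :d] * np.sqrt(s[:d])
```

Nodes that appear together frequently in short walks receive similar rows of the co-occurrence matrix, which is exactly the "nearby nodes get similar embedding vectors" objective described above.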
<sec id="S2.SS1.SSS1">
<title>2.1.1. Struc2vec</title>
<p>The Node2vec and DeepWalk approaches lead to a unique embedding vector for every individual node but have some drawbacks, including operating as a lookup table, their computational cost, failure to leverage node attribute information such as a node&#x2019;s position and role, and weakness in predicting information about unseen nodes. To alleviate these drawbacks, two alternatives have arisen: (1) embedding approaches that capture the structural roles of nodes have been proposed (<xref ref-type="bibr" rid="B50">Ribeiro et al., 2017</xref>; <xref ref-type="bibr" rid="B10">Donnat et al., 2018</xref>), and (2) network embedding in a feature-based manner has been proposed.</p>
<p>As an example of the first alternative, Ribeiro&#x2019;s technique referred to as Struc2Vec generates some new graphs <italic>G</italic><sub>0</sub>, &#x2026; , <italic>G</italic><sub><italic>K</italic></sub>, each to capture one kind of <italic>k</italic>-hop neighborhood structural similarity, from the original graph <italic>G</italic>. The algorithm is as follows (<xref ref-type="bibr" rid="B50">Ribeiro et al., 2017</xref>):</p>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>For each node <italic>v<sub>i</sub></italic>, order the sequence of degrees of the nodes at a distance of exactly <italic>k</italic> hops from it: <italic>R</italic><sub><italic>k</italic></sub>(<italic>v</italic><sub><italic>i</italic></sub>).</p>
</list-item>
<list-item>
<label>2.</label>
<p>Start from a weighted graph <italic>G<sub>0</sub></italic> whose edges have zero weights <italic>w</italic><sub>0</sub>(<italic>v</italic><sub><italic>i</italic></sub>,<italic>v</italic><sub><italic>j</italic></sub>)=0.</p>
</list-item>
<list-item>
<label>3.</label>
<p>Build a sequence of weighted graphs whose edges vary adaptively by the equation:</p>
</list-item>
</list>
<disp-formula id="S2.Ex1">
<mml:math id="M1">
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>-</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>d</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>R</mml:mi>
<mml:mi>k</mml:mi>
</mml:msub>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:msub>
<mml:mi>v</mml:mi>
<mml:mi>j</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<list list-type="simple">
<list-item>
<p>where <italic>d</italic> (<italic>R</italic><sub><italic>k</italic></sub> (<italic>v</italic><sub><italic>i</italic></sub>) , <italic>R</italic><sub><italic>k</italic></sub> (<italic>v</italic><sub><italic>j</italic></sub>)) is the distance between sequence <italic>R</italic><sub><italic>k</italic></sub> (<italic>v</italic><sub><italic>i</italic></sub>) and <italic>R</italic><sub><italic>k</italic></sub> (<italic>v</italic><sub><italic>j</italic></sub>) and could be defined with different measures.</p>
</list-item>
<list-item>
<label>4.</label>
<p>Run random walks on the new graphs <italic>G</italic><sub>0</sub>,&#x2026;,<italic>G</italic><sub><italic>K</italic></sub> to implement Node2vec on them, and learn latent representations from them, using an algorithm such as SkipGram.</p>
</list-item>
</list>
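<p>Steps 1&#x2013;3 above can be sketched as follows (a simplified illustration with our own function names; we use a plain padded L1 distance between degree sequences, whereas Ribeiro et al. (2017) use dynamic time warping, and we omit the final random-walk/SkipGram step 4).</p>

```python
import numpy as np
from collections import deque

def degree_sequences(adj, k_max=2):
    # R_k(v): sorted degrees of the nodes at exactly k hops from v (step 1),
    # computed with a breadth-first search from each node.
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    seqs = []
    for v in range(n):
        dist = np.full(n, -1)
        dist[v] = 0
        q = deque([v])
        while q:
            u = q.popleft()
            for w in np.flatnonzero(adj[u]):
                if dist[w] == -1:
                    dist[w] = dist[u] + 1
                    q.append(w)
        seqs.append([sorted(deg[dist == k]) for k in range(k_max + 1)])
    return seqs

def seq_distance(a, b):
    # One possible distance d between two degree sequences: element-wise
    # L1 after zero-padding the shorter one (the paper uses DTW instead).
    m = max(len(a), len(b))
    a = list(a) + [0.0] * (m - len(a))
    b = list(b) + [0.0] * (m - len(b))
    return float(np.abs(np.array(a) - np.array(b)).sum())

def layer_weights(adj, k_max=2):
    # Steps 2-3: start from all-zero weights w_0 and accumulate
    # w_k(v_i, v_j) = w_{k-1}(v_i, v_j) + d(R_k(v_i), R_k(v_j)),
    # giving one weight matrix per layer graph G_1 .. G_K.
    n = adj.shape[0]
    seqs = degree_sequences(adj, k_max)
    prev = np.zeros((n, n))
    layers = []
    for k in range(1, k_max + 1):
        wk = prev.copy()
        for i in range(n):
            for j in range(n):
                wk[i, j] += seq_distance(seqs[i][k], seqs[j][k])
        layers.append(wk)
        prev = wk
    return layers
```

Structurally equivalent nodes (for example, the leaves of a star graph) accumulate zero weight between them in every layer, which is what lets the subsequent random walks capture structural roles rather than proximity.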
<p>In the second alternative, namely, feature-based methods, two algorithms referred to as Graph2Img and AWE are considered.</p>
</sec>
<sec id="S2.SS1.SSS2">
<title>2.1.2. Graph2Img</title>
<p>The Graph2Img algorithm first transfers the original network into feature vectors and then uses clustering methods to group nodes. In other words, after embedding the graph nodes into a <italic>d</italic>-dimensional space, the representations of the nodes are gathered in a matrix of dimension <italic>N</italic>&#x00D7;<italic>d</italic>, where <italic>N</italic> = |<italic>V</italic>|, <italic>i.e.</italic>, the number of nodes in the graph. Next, we can decide whether all features are important or not and determine their priority. In fact, the principal component analysis (PCA) method is used to reduce the <italic>d</italic>-dimensional vectors to <italic>d</italic><sub><italic>PCA</italic></sub> dimensions. Then, we can use just the most important dimension, followed by the second, third, and fourth most important dimensions. Taking into account just the first four principal components, two matrices <italic>M</italic><sub><italic>12</italic></sub> and <italic>M</italic><sub><italic>34</italic></sub> are constructed (see <xref ref-type="supplementary-material" rid="DS1">Supplementary Appendix C</xref>), which seems to be enough to analyze the brain network.</p>
<p>As shown in <xref ref-type="fig" rid="F1">Figure 1</xref> (<xref ref-type="bibr" rid="B39">Meng and Xiang, 2018</xref>), these two matrices, which behave like images, can be applied as different channels of a deep CNN. The algorithm&#x2019;s pseudo-code is shown in <xref ref-type="supplementary-material" rid="DS1">Supplementary Algorithm A-3</xref> (<xref ref-type="bibr" rid="B39">Meng and Xiang, 2018</xref>).</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>Block diagram of the Graph2Img feature-based embedding algorithm (<xref ref-type="bibr" rid="B39">Meng and Xiang, 2018</xref>).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnsys-16-904770-g001.tif"/>
</fig>
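<p>A minimal sketch of the Graph2Img pipeline described above. The rasterization step here is our own reading (after PCA, each node's first two and next two principal-component coordinates are binned into two 2-D histogram "images" standing in for <italic>M</italic><sub><italic>12</italic></sub> and <italic>M</italic><sub><italic>34</italic></sub>); the exact construction is given in Supplementary Appendix C and in Meng and Xiang (2018).</p>

```python
import numpy as np

def pca_reduce(z, d_pca=4):
    # Project the N x d node-embedding matrix onto its first d_pca
    # principal components (via SVD of the centered matrix).
    zc = z - z.mean(axis=0)
    _, _, vt = np.linalg.svd(zc, full_matrices=False)
    return zc @ vt[:d_pca].T

def components_to_image(xy, res=32):
    # Rasterize nodes into a res x res image by 2-D histogramming their
    # two principal-component coordinates. This particular binning is an
    # assumption made for illustration.
    img, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=res)
    return img

def graph2img(z, res=32):
    # Two image "channels": components (1, 2) give M12, (3, 4) give M34.
    p = pca_reduce(z, d_pca=4)
    m12 = components_to_image(p[:, 0:2], res)
    m34 = components_to_image(p[:, 2:4], res)
    return np.stack([m12, m34])
```

The resulting two-channel array can be fed directly to an image classifier, which is the point of the method: the graph is reduced to something a standard CNN already knows how to consume.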
</sec>
<sec id="S2.SS1.SSS3">
<title>2.1.3. Anonymous walk embedding (AWE)</title>
<p>As another feature-based network embedding method, the Anonymous Walk Embedding (AWE) algorithm uses the distribution of anonymous walks. Anonymous walks are walks of length <italic>l</italic> that start from an initial node <italic>u</italic>, pass through random nodes, and terminate at a node <italic>v</italic>. There is a set of &#x03B7; such random walks <inline-formula><mml:math id="INEQ13"><mml:mrow><mml:mpadded width="+3.3pt"><mml:msubsup><mml:mi>A</mml:mi><mml:mi>l</mml:mi><mml:mi>u</mml:mi></mml:msubsup></mml:mpadded><mml:mo rspace="5.8pt">=</mml:mo><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mi>a</mml:mi><mml:mn>1</mml:mn><mml:mi>u</mml:mi></mml:msubsup><mml:mo rspace="7.5pt">,</mml:mo><mml:msubsup><mml:mi>a</mml:mi><mml:mn>2</mml:mn><mml:mi>u</mml:mi></mml:msubsup><mml:mo>,</mml:mo><mml:mi mathvariant="normal">&#x2026;</mml:mi><mml:mo rspace="7.5pt">,</mml:mo><mml:msubsup><mml:mi>a</mml:mi><mml:mi mathvariant="normal">&#x03B7;</mml:mi><mml:mi>u</mml:mi></mml:msubsup><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>. The number of all possible random walks of length <italic>l</italic> grows exponentially with <italic>l</italic>. These anonymous walks capture structural information about nodes because the labels of the nodes constituting a random walk are omitted. 
In fact, corresponding to the random walk <italic>w</italic> = (<italic>v</italic><sub>1</sub>,<italic>v</italic><sub>2</sub>,&#x2026;,<italic>v</italic><sub><italic>k</italic></sub>), we can define an anonymous walk as a sequence of integers <italic>a</italic> = (<italic>f</italic>(<italic>v</italic><sub>1</sub>),<italic>f</italic>(<italic>v</italic><sub>2</sub>),&#x2026;,<italic>f</italic>(<italic>v</italic><sub><italic>k</italic></sub>)), where <italic>f</italic>(<italic>v</italic>) is the position of the first occurrence of <italic>v</italic> in the walk <italic>w</italic> (<xref ref-type="bibr" rid="B25">Ivanov and Burnaev, 2018</xref>). However, due to the huge number of anonymous walks in a large graph, an efficient sampling approach is required to approximate this distribution (<xref ref-type="bibr" rid="B25">Ivanov and Burnaev, 2018</xref>). Defining the objective function over local neighborhoods of anonymous walks of similar nodes improves the structural awareness of the embedding method.</p>
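<p>The mapping <italic>f</italic> from a random walk to its anonymous walk, and the resulting empirical distribution over anonymous walk types, can be sketched as follows (a minimal illustration with our own function names).</p>

```python
from collections import Counter

def anonymize(walk):
    # Map a random walk (v1, v2, ...) to its anonymous walk: each node is
    # replaced by the index of its first occurrence in the walk, so
    # ('a', 'b', 'a', 'c') becomes (1, 2, 1, 3).
    first_seen = {}
    out = []
    for v in walk:
        if v not in first_seen:
            first_seen[v] = len(first_seen) + 1
        out.append(first_seen[v])
    return tuple(out)

def awe_distribution(walks):
    # Empirical distribution over anonymous walk types: the AWE feature
    # vector, up to a fixed ordering of the types. In practice this is
    # estimated by sampling, since the number of types grows exponentially
    # with walk length.
    counts = Counter(anonymize(w) for w in walks)
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}
```

Because node labels are discarded by <code>anonymize</code>, two walks through structurally equivalent neighborhoods map to the same anonymous type, which is what makes the distribution a structural signature of the graph.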
<p>These embedding algorithms (Node2vec, DeepWalk, AWE, and Graph2Img) extract the feature vectors of each node, describing the characteristics and structure of the graph. Thus, the next step of our research is the classification of the feature vectors obtained for healthy and ASD subjects.</p>
</sec>
</sec>
<sec id="S2.SS2">
<title>2.2. Classification</title>
<p>Graph classification is the task of predicting which of <italic>C</italic> predefined classes a whole graph belongs to. In other words, the task is to train a classifier based on <italic>N</italic> graphs {<italic>G</italic><sub><italic>i</italic></sub>},<italic>i</italic> = 1:<italic>N</italic> and their corresponding labels {<italic>L</italic><sub><italic>i</italic></sub>},<italic>i</italic> = 1:<italic>N</italic>, able to classify every new graph <italic>G</italic>&#x2192;<italic>L</italic>. The graph classification problem can be approached in two typical ways: (1) classification using CNNs extended to operate on raw graphs (<xref ref-type="bibr" rid="B43">Niepert et al., 2016</xref>) and (2) graph kernel methods (<xref ref-type="bibr" rid="B57">Shervashidze et al., 2011</xref>), in which graph embeddings <italic>f</italic>(<italic>G</italic><sub>1</sub>) are used in conjunction with kernel methods (<italic>K</italic>(<italic>f</italic>(<italic>G</italic><sub>1</sub>),<italic>f</italic>(<italic>G</italic><sub>2</sub>))) to classify new graphs, where <italic>K</italic>:(<italic>x</italic>,<italic>y</italic>)&#x2192;<italic>R</italic> is a kernel function quantifying the similarity of graphs.</p>
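As a minimal sketch of approach (2), assuming each graph has already been mapped to a fixed-length embedding <italic>f</italic>(<italic>G</italic>), a standard RBF kernel can serve as <italic>K</italic> (the embeddings below are hypothetical toy vectors, not from the paper):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """K(f(G1), f(G2)) = exp(-gamma * ||x - y||^2): a kernel on graph
    embeddings that decays as the embedded graphs move apart."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

f_g1 = [0.1, 0.4, 0.3]   # hypothetical embedding of graph G1
f_g2 = [0.1, 0.4, 0.3]   # an identically embedded graph
f_g3 = [0.9, 0.0, 0.7]   # a structurally dissimilar graph

print(rbf_kernel(f_g1, f_g2))  # 1.0 (identical embeddings)
print(round(rbf_kernel(f_g1, f_g3), 4))  # a value strictly between 0 and 1
```

A kernelized classifier such as an SVM then operates entirely on such pairwise kernel values rather than on the graphs themselves.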
<p>As mentioned earlier, the aim of this paper is a kernelized classification of healthy and autistic subjects based on functional connectivity matrices. The features extracted from these matrices (<italic>f</italic>(<italic>G</italic><sub>1</sub>)) are the embedded vectors obtained with the Node2vec, Struct2vec, AWE, and Graph2Img algorithms. For the classification itself, we used a DNN classifier; this choice is motivated by the size of the resultant feature vectors, whose classification requires many parameters to be trained. In addition, cross-validation is applied to validate the performance of the classification task.</p>
<p>Three types of deep networks have been considered in this study: LeNet, ResNet, and VGG16. Ultimately, we used LeNet because it showed the best performance for our problem; thus, we describe it here.</p>
<sec id="S2.SS2.SSS1">
<title>2.2.1. LeNet</title>
<p><italic>LeNet</italic>, one of the first CNNs published for computer vision tasks, was introduced by (and named for) Yann LeCun. <xref ref-type="bibr" rid="B35">LeCun et al. (1998)</xref> published the first study in which CNNs were successfully trained <italic>via</italic> backpropagation. The network was then applied at AT&#x0026;T Bell Labs for the purpose of recognizing handwritten digits in images (<xref ref-type="bibr" rid="B35">LeCun et al., 1998</xref>). LeNet achieved outstanding results comparable with those of support vector machines and thus became a dominant approach in supervised learning.</p>
<p>LeNet (LeNet-5) consists of two parts (<xref ref-type="bibr" rid="B66">Wang and Gong, 2019</xref>): <italic>(i)</italic> a convolutional encoder and <italic>(ii)</italic> a dense block. The former consists of two convolutional blocks, and the latter consists of three fully connected layers. The architecture is summarized in <xref ref-type="fig" rid="F2">Figure 2</xref>.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Data flow in LeNet. The input is an image, and the output is a probability over different possible outcomes (<xref ref-type="bibr" rid="B38">Loey et al., 2016</xref>).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnsys-16-904770-g002.tif"/>
</fig>
<p>Each convolutional block includes a convolutional layer, a sigmoid activation function, and a subsequent average pooling operation (ReLUs and max pooling were later found to work better, but those discoveries had not yet been made). In LeNet, each convolutional layer maps every 5 &#x00D7; 5 patch of its input to a scalar using a kernel and a sigmoid activation function. The first convolutional layer has 6 output channels, so its result is a 6@28&#x002A;28 tensor. In effect, the convolutional layers map the spatial features of the input to a number of two-dimensional feature maps, namely, channels. A pooling layer then downsamples the channels by a factor of 2, leading to a 6@14&#x002A;14 array. Next comes a second convolutional layer, again with a 5&#x002A;5 filter; whereas the first convolutional layer has 6 output channels, the second has 16 output channels of size 10&#x002A;10. After a second pooling layer, the output of the convolutional block is a 16@5&#x002A;5 tensor, which must be flattened before being passed to the dense block.</p>
<p>LeNet&#x2019;s dense block has three fully connected layers, with 120, 84, and 10 outputs, respectively; in the original network, the 10-dimensional output layer corresponds to the 10 possible output classes. Implementing LeNet models with modern deep learning frameworks is remarkably simple.</p>
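The shape bookkeeping above can be verified with a small pure-Python sketch (our helpers, not the authors' code), following LeNet's 5&#x00D7;5 valid convolutions and 2&#x00D7;2 non-overlapping pooling:

```python
def conv2d_out(size, kernel=5, stride=1, padding=0):
    """Output spatial size of a 'valid' convolution."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, window=2):
    """Output spatial size after non-overlapping pooling."""
    return size // window

s = 32                 # LeNet's 32x32 input image
s = conv2d_out(s)      # conv1, 6 channels: 6@28x28
assert s == 28
s = pool_out(s)        # pool1: 6@14x14
assert s == 14
s = conv2d_out(s)      # conv2, 16 channels: 16@10x10
assert s == 10
s = pool_out(s)        # pool2: 16@5x5
assert s == 5
flat = 16 * s * s      # flattened input to the dense block
print(flat)            # 400, feeding the 120-84-10 dense layers
```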
</sec>
</sec>
<sec id="S2.SS3">
<title>2.3. ABIDE dataset</title>
<p>The rs-fMRI data of ASD and healthy subjects are downloaded from the Autism Brain Imaging Data Exchange (ABIDE), a large multisite data repository<sup><xref ref-type="fn" rid="footnote1">1</xref></sup>. The Autism Brain Imaging Data Exchange I (ABIDE I) is a multisite platform gathering data from 17 international laboratories, which shared resting-state functional magnetic resonance imaging (rs-fMRI), anatomical, and phenotypic datasets. It includes 1112 datasets: 539 from individuals with ASD and 573 from typical controls (age 7&#x2013;64 years, median 14.7 years across groups). To date, these data have been used in many research studies, whose publications have shown the dataset&#x2019;s utility for capturing whole-brain and regional properties of the brain connectome in ASD. All data have been anonymized.</p>
<p>Accordingly, ABIDE II was established to further promote discovery science on the brain connectome in ASD. To date, ABIDE II involves 19 sites, overall donating 1114 datasets from 521 individuals with ASD and 593 controls (age range: 5&#x2013;64 years). All datasets are anonymous, with no protected health information included.</p>
<p>Some individuals present in the ABIDE database have no ASD/healthy label. After removing these cases, 871 individuals of ABIDE I and 910 individuals of ABIDE II remain for investigation in this study (<xref ref-type="bibr" rid="B69">Yang et al., 2019</xref>).</p>
</sec>
</sec>
<sec id="S3">
<title>3. Implementation and results</title>
<p>The proposed method includes preprocessing, extracting the connectivity matrix, graph representation methods, and the deep learning classification. These steps are schematically shown in <xref ref-type="fig" rid="F3">Figure 3</xref>.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>The steps of the proposed method, including the preprocessing, the 39 ROIs, the connectivity matrix, graph representation methods, and the deep learning classification.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnsys-16-904770-g003.tif"/>
</fig>
<sec id="S3.SS1">
<title>3.1. Preprocessing and connectivity matrix</title>
<p>The rs-fMRI data are slice-time corrected, motion corrected, registered, and normalized using FSL software. The preprocessing steps applied to the ABIDE I and ABIDE II databases are as follows: (1) AC-PC realignment, (2) gray matter and white matter tissue segmentation, (3) nonlinear registration to MNI152 space, (4) normalization, (5) resampling, (6) modulation, and (7) smoothing with FWHM = 4 mm. For brain parcellation, the ICA method is used (<xref ref-type="bibr" rid="B8">de Martino et al., 2007</xref>; <xref ref-type="bibr" rid="B61">Tohka et al., 2008</xref>; <xref ref-type="bibr" rid="B58">Smith et al., 2009</xref>; <xref ref-type="bibr" rid="B26">Joel et al., 2011</xref>). In other words, instead of averaging the time series (BOLD signal) of predefined regions, spatial maps output by ICA with specific functional and anatomical interpretations (the locations of brain tissue acting synchronously and with the same activity pattern) are taken into account. ICA is a data-driven model that uses no <italic>a priori</italic> information about the brain and has been a popular approach in the analysis of fMRI data (<xref ref-type="bibr" rid="B53">Salimi-Khorshidi et al., 2014</xref>). In this study, ICA decomposed the whole BOLD fMRI data into 39 regions according to the MNI atlas.</p>
<p>Afterward, the BOLD signals of these 39 ROIs are used to compute their connectivity, via statistical measures such as Pearson correlation, partial correlation (<xref ref-type="bibr" rid="B52">Saad et al., 2009</xref>), and tangent correlation (<xref ref-type="bibr" rid="B7">Dadi et al., 2019</xref>). The size of the connectivity matrix is 39&#x002A;39, according to the number of ROIs. The Pearson correlation coefficient ranges from &#x2212;1 to 1, where 1 indicates that two ROIs are highly correlated and &#x2212;1 indicates that they are anticorrelated. This step is done using the Nilearn toolbox, as well as the BrainIAK toolbox (<xref ref-type="bibr" rid="B34">Kumar et al., 2020</xref>). Nilearn is a Python toolbox for statistical learning on neuroimaging data. In this study, the connectivity matrix is obtained <italic>via</italic> tangent correlation (<xref ref-type="bibr" rid="B46">Pedregosa et al., 2011</xref>). See <xref ref-type="supplementary-material" rid="DS1">Supplementary Appendix A</xref> for more details. This method is less frequently used but has solid mathematical foundations, and a variety of groups have reported good decoding performances with it. Connectivity matrices built with the tangent space parametrization give an improvement compared to full or partial correlations.</p>
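The connectivity step can be illustrated with a Pearson-correlation sketch on synthetic ROI signals (the paper's actual matrices use Nilearn's tangent parametrization; this is only a minimal stand-in assuming random time series):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rois, n_timepoints = 39, 200
# Synthetic stand-in for the 39 ROI BOLD time series.
bold = rng.standard_normal((n_rois, n_timepoints))

# Pearson connectivity: one correlation coefficient per ROI pair.
conn = np.corrcoef(bold)

print(conn.shape)                      # (39, 39), one row/column per ROI
print(np.allclose(np.diag(conn), 1))   # True: each ROI correlates fully with itself
print(np.allclose(conn, conn.T))       # True: the connectivity matrix is symmetric
```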
</sec>
<sec id="S3.SS2">
<title>3.2. Classifying the graph embedding vectors</title>
<p>Based on the abovementioned embedding features, we used three scenarios to check whether ASD detection can be improved by graph embedding algorithms. In the first scenario, the features are the embedded vectors of the connectivity matrix obtained by each of the Node2Vec, Struc2Vec, and AWE methods; accordingly, a deep network with a one-channel input is used. This input channel is a <italic>d</italic>&#x00D7;<italic>N</italic> matrix holding the <italic>d</italic>-dimensional embedded vectors of all <italic>N</italic> = 39 nodes; the embedded vectors obtained by these methods have dimensions of <italic>d</italic> = 25, 64, and 128, respectively. In the second scenario, to take into account different properties of the Node2vec algorithm obtained with different <italic>p</italic> and <italic>q</italic> values (<italic>p</italic> = 1,<italic>q</italic> = 1; <italic>p</italic> = 1,<italic>q</italic> = 4; and <italic>p</italic> = 4,<italic>q</italic> = 1), a three-channel deep network is applied. In the third scenario, after applying PCA to the result of the Node2Vec algorithm, the two matrices of the Graph2Img algorithm are used as the input of a two-channel CNN. These three scenarios are schematically shown in <xref ref-type="fig" rid="F4">Figure 4</xref>.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>Three scenarios of applying embedding vectors to detect ASD <italic>via</italic> LeNet.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnsys-16-904770-g004.tif"/>
</fig>
<p>Indeed, we first tried to perform the classification with traditional kernel-based classifiers such as the support vector machine (SVM), but satisfactory results could not be obtained: the classifier showed no accuracy better than chance. The advantage of a CNN is that it contains an automatic feature extractor that extracts further features from the embedded vectors and is thus a trainable classifier.</p>
<p>In all three scenarios, we customize the LeNet structure for our problem. In the first scenario, there is one channel in the input layer, and the size of the embedded vector produced by the Node2vec, Struct2vec, or AWE method determines the dimension of the input; these sizes are, respectively, 25, 64, and 128.</p>
<p>Thus, in the first scenario, the input layer of LeNet is a 39&#x002A;<italic>d</italic>,<italic>d</italic> = 25,64,128 image. In the second scenario, the network has three channels, each consisting of 39&#x002A;<italic>d</italic> neurons. In the third scenario, there are two channels, each consisting of 10&#x002A;10 neurons (<italic>r</italic> = 10). Finally, in all three scenarios, there are two output neurons indicating a healthy or autistic brain.</p>
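The three input layouts can be sketched with synthetic embeddings (shapes only; the arrays below are random placeholders, not real embeddings):

```python
import numpy as np

n_nodes, d = 39, 25
rng = np.random.default_rng(1)

# Scenario 1: one channel holding a single 39 x d embedding matrix.
single = rng.standard_normal((1, n_nodes, d))

# Scenario 2: three Node2vec runs, (p,q) in {(1,1), (1,4), (4,1)},
# stacked as three input channels.
runs = [rng.standard_normal((n_nodes, d)) for _ in range(3)]
three_channel = np.stack(runs)

# Scenario 3: two r x r Graph2Img matrices (r = 10) as a two-channel image.
r = 10
two_channel = np.stack([rng.standard_normal((r, r)) for _ in range(2)])

print(single.shape, three_channel.shape, two_channel.shape)
# (1, 39, 25) (3, 39, 25) (2, 10, 10)
```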
<p>The default LeNet network was modified according to the abovementioned input/output dimensions. Furthermore, a dropout layer with a keep probability of 0.8 is employed for regularization at every hidden layer [33]. Another difference is that the activation functions we used in LeNet are ReLU functions, except for the final layer, which uses a softmax function so that a probability distribution over classes is obtained. For the convolution-pooling block, we employ 64 filters at the first level, and as the signal is halved through the (2,2) max pooling layer, the number of filters in the subsequent convolutional layer is increased to 96 to compensate for the loss in resolution (<xref ref-type="bibr" rid="B60">Tixier et al., 2019</xref>). The number of trainable weights in this deep neural network doubles in the third scenario and triples in the second.</p>
<p>The illustrated networks are used as the healthy/ASD classifier. Classification results are reported in the <xref ref-type="supplementary-material" rid="DS1">Supplementary Appendix</xref> to compare them with previous research in which a deep network was used to classify the raw connectivity matrices.</p>
</sec>
<sec id="S3.SS3">
<title>3.3. Evaluation</title>
<p>To assess the performance of our proposed ASD classifier, based on graph embedding techniques and deep learning methods, two kinds of cross-validation are used; they differ in how the training and test datasets are chosen. Given that the ABIDE database consists of different sites, three different partitionings are possible: (1) dividing the data of each site into <italic>N</italic> folds and reporting the classification accuracy for individual sites, (2) leave-one-site-out validation (separately for ABIDE I and II), and (3) dividing all data of ABIDE I and II into <italic>N</italic> folds to report a typical <italic>N</italic>-fold cross-validation. In all three approaches, the classification performance is assessed by accuracy, F-score, recall, and precision. To report the accuracy over all data in a statistically reliable manner, the second approach, i.e., leave-one-site-out validation, is the most appropriate. In this paper, validation types (2) and (3) are used in reporting the results.</p>
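Leave-one-site-out validation amounts to a grouped train/test split, which can be sketched in pure Python (the site labels below are hypothetical examples):

```python
def leave_one_site_out(site_labels):
    """Yield (test_site, train_idx, test_idx): each site is held out
    once for testing while all remaining sites form the training set."""
    for site in sorted(set(site_labels)):
        test_idx = [i for i, s in enumerate(site_labels) if s == site]
        train_idx = [i for i, s in enumerate(site_labels) if s != site]
        yield site, train_idx, test_idx

sites = ["NYU", "NYU", "KKI", "YALE", "KKI", "NYU"]
for site, train, test in leave_one_site_out(sites):
    print(site, train, test)
# KKI [0, 1, 3, 5] [2, 4]
# NYU [2, 3, 4] [0, 1, 5]
# YALE [0, 1, 2, 4, 5] [3]
```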
<p>With ASD detection as the goal of the classifier, a true positive (TP) is defined as the percentage of ASD subjects correctly classified as ASD. Likewise, the percentage of ASD subjects classified as healthy is referred to as false negatives (FN), and false positives (FP) are the percentage of healthy subjects classified as ASD. Accordingly, the F-score measure is defined as follows:</p>
<disp-formula id="S3.Ex2">
<mml:math id="M2">
<mml:mrow>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>e</mml:mi>
</mml:mpadded>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mn>2</mml:mn>
</mml:mfrac>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>It is important for the classifier to detect all ASD subjects, so <inline-formula><mml:math id="INEQ34"><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi mathvariant="normal">#</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>A</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>S</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mpadded width="+5pt"><mml:mi>D</mml:mi></mml:mpadded><mml:mo>&#x2062;</mml:mo><mml:mi>s</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>u</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>b</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>j</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>e</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>c</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>t</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>s</mml:mi></mml:mrow></mml:mfrac></mml:math></inline-formula> is referred to as recall. A classifier is also expected to produce trustworthy positive detections, in other words, to be precise; thus, precision is defined as <inline-formula><mml:math id="INEQ35"><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>P</mml:mi></mml:mrow><mml:mo>+</mml:mo><mml:mrow><mml:mi>F</mml:mi><mml:mo>&#x2062;</mml:mo><mml:mi>P</mml:mi></mml:mrow></mml:mrow></mml:mfrac></mml:math></inline-formula>. Because the two classes do not necessarily contain the same number of subjects, precision is a better measure of performance. Accordingly, another definition of the F-score is based on recall and precision:</p>
<disp-formula id="S3.Ex3">
<mml:math id="M3">
<mml:mrow>
<mml:mrow>
<mml:mi>F</mml:mi>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>e</mml:mi>
</mml:mpadded>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mn>2</mml:mn>
</mml:mpadded>
<mml:mo rspace="5.8pt">&#x00D7;</mml:mo>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>l</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">&#x00D7;</mml:mo>
<mml:mi>p</mml:mi>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>Finally, accuracy is a well-known measure of how many subjects are correctly labeled.</p>
<disp-formula id="S3.Ex4">
<mml:math id="M4">
<mml:mrow>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>u</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>r</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>y</mml:mi>
</mml:mpadded>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>P</mml:mi>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mi>T</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+5pt">
<mml:mi>l</mml:mi>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>u</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>e</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>t</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
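The measures defined above reduce to a few lines of code, assuming the confusion counts TP, FP, FN, and TN are given:

```python
def scores(tp, fp, fn, tn):
    """Accuracy, recall, precision, and F-score from confusion counts."""
    recall = tp / (tp + fn)             # fraction of ASD subjects detected
    precision = tp / (tp + fp)          # fraction of positive calls that are correct
    f_score = 2 * recall * precision / (recall + precision)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, recall, precision, f_score

# Hypothetical counts for illustration only.
acc, rec, prec, f1 = scores(tp=40, fp=10, fn=10, tn=40)
print(round(acc, 2), round(rec, 2), round(prec, 2), round(f1, 2))  # 0.8 0.8 0.8 0.8
```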
<p>On the other hand, the time cost of training the classifier is another measure of the method under evaluation.</p>
</sec>
</sec>
<sec id="S4" sec-type="results">
<title>4. Results</title>
<p>Results of the three scenarios for the ABIDE I and ABIDE II databases, using the LeNet classifier, are presented in <xref ref-type="table" rid="T1">Table 1</xref>. In these results, validation of type 3 is considered: all subjects of each database are taken into account, and 5-fold and 10-fold cross-validations are applied. The average accuracy over the folds is reported for each scenario. Scenario 2 achieved the best performance, with a mean classification accuracy of 64% (recall 0.77, precision 0.73) for ABIDE I and 66% (recall 0.80, precision 0.80) for ABIDE II (in 10-fold cross-validation). The accuracy in individual folds ranged between 52 and 69%. Based on the literature, this is not better than <xref ref-type="bibr" rid="B22">Heinsfeld et al. (2018)</xref>, <xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref>, and <xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref>, in which accuracies of 70.22, 70, and 76.4% are reported.</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>The 5- and 10-fold cross-validation results using different embedding methods and CNN classifier (LeNet).</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Method</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">ABIDE II (5-fold)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">ABIDE I (5-fold)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">ABIDE II (10-fold)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">ABIDE I (10-fold)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Scenario 1</td>
<td valign="top" align="left">Struct2vec</td>
<td valign="top" align="center">54%</td>
<td valign="top" align="center">56%</td>
<td valign="top" align="center">56%</td>
<td valign="top" align="center">58%</td>
</tr>
<tr>
<td valign="top" align="left">Scenario 1</td>
<td valign="top" align="left">DeepWalk</td>
<td valign="top" align="center">55%</td>
<td valign="top" align="center">55%</td>
<td valign="top" align="center">56%</td>
<td valign="top" align="center">59%</td>
</tr>
<tr>
<td valign="top" align="left">Scenario 1</td>
<td valign="top" align="left">Node2vec <italic>p</italic> = 1,<italic>q</italic> = 4</td>
<td valign="top" align="center">59%</td>
<td valign="top" align="center">57%</td>
<td valign="top" align="center">62%</td>
<td valign="top" align="center">62%</td>
</tr>
<tr>
<td valign="top" align="left">Scenario 1</td>
<td valign="top" align="left">AWE</td>
<td valign="top" align="center">56%</td>
<td valign="top" align="center">58%</td>
<td valign="top" align="center">63%</td>
<td valign="top" align="center">65%</td>
</tr>
<tr>
<td valign="top" align="left">Scenario 2</td>
<td valign="top" align="left">Node2vec</td>
<td valign="top" align="center">63%</td>
<td valign="top" align="center">64%</td>
<td valign="top" align="center">66%</td>
<td valign="top" align="center">64%</td>
</tr>
<tr>
<td valign="top" align="left">Scenario 3</td>
<td valign="top" align="left">Graph2Img</td>
<td valign="top" align="center">59%</td>
<td valign="top" align="center">61%</td>
<td valign="top" align="center">66%</td>
<td valign="top" align="center">64%</td>
</tr>
</tbody>
</table></table-wrap>
<p>The results of <xref ref-type="table" rid="T1">Table 1</xref> show that the type of embedded features affects classification. However, as mentioned before (and not shown here), the SVM results using embedded features are not better than those of <xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref>, in which the raw connectivity matrix was classified <italic>via</italic> SVM. In other words, it seems that any good separation between ASD and healthy subjects is due to the deep network classifier, not the embedding features. So, the question is whether the feature embedding method was effective in ASD/healthy discrimination or not.</p>
<p>To answer this question, the results of the leave-one-site-out cross-validation are reported in <xref ref-type="table" rid="T2">Tables 2</xref>&#x2013;<xref ref-type="table" rid="T5">5</xref> for scenario 1 using AWE, scenario 2 using Node2vec, and scenario 3 using Graph2Img. In this validation type, only AWE is applied for scenario 1, owing to its better performance in the k-fold cross-validation procedure compared with the other embedding techniques. For each site, the LeNet CNN classifier is trained on the data of the other sites in the same database and tested on the data of that site. The results for ABIDE I and ABIDE II are presented separately. The number of subjects at each site, the number of ASD subjects, and the accuracy and F-score of the proposed techniques, as well as those of <xref ref-type="bibr" rid="B22">Heinsfeld et al. (2018)</xref>, <xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref>, and <xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref> (for ABIDE I only), are reported in the tables.</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>Leave-site-out cross-validation results using scenario 1 just by AWE and CNN classifier.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Sites</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"># Subjects</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"># ASD subjects</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">F-score</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy (<xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">F-score (<xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy (<xref ref-type="bibr" rid="B56">Sherkatghanad et al., 2020</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">F-score (<xref ref-type="bibr" rid="B56">Sherkatghanad et al., 2020</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy (<xref ref-type="bibr" rid="B24">Huang et al., 2020</xref>)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">ABIDE I</td>
<td valign="top" align="left">UCLA-2</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">11</td>
<td valign="top" align="center">0.90</td>
<td valign="top" align="center">0.86</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.62</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">TRINITY</td>
<td valign="top" align="center">49</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">0.89</td>
<td valign="top" align="center">0.88</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.62</td>
<td valign="top" align="center">0.76</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">UM-2</td>
<td valign="top" align="center">35</td>
<td valign="top" align="center">13</td>
<td valign="top" align="center">0.79</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.77</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">KKI</td>
<td valign="top" align="center">33</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.92</td>
<td valign="top" align="center">0.91</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.79</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">YALE</td>
<td valign="top" align="center">41</td>
<td valign="top" align="center">22</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.81</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">PITT</td>
<td valign="top" align="center">50</td>
<td valign="top" align="center">24</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">OLIN</td>
<td valign="top" align="center">28</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.58</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.76</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">LEUVEN-2</td>
<td valign="top" align="center">28</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.79</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.81</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">STANFORD</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.62</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.48</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.79</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">NYU</td>
<td valign="top" align="center">172</td>
<td valign="top" align="center">74</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.74</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">UM-1</td>
<td valign="top" align="center">86</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.77</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">UCLA-1</td>
<td valign="top" align="center">64</td>
<td valign="top" align="center">37</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">OHSU</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.57</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.73</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">MAX-MUN</td>
<td valign="top" align="center">46</td>
<td valign="top" align="center">19</td>
<td valign="top" align="center">0.79</td>
<td valign="top" align="center">0.78</td>
<td valign="top" align="center">0.68</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.46</td>
<td valign="top" align="center">0.48</td>
<td valign="top" align="center">0.67</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">LEUVEN-1</td>
<td valign="top" align="center">28</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.81</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">USM</td>
<td valign="top" align="center">67</td>
<td valign="top" align="center">43</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.68</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.85</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">SBL</td>
<td valign="top" align="center">26</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.62</td>
<td valign="top" align="center">0.66</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">SDSU</td>
<td valign="top" align="center">36</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.86</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.75</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.80</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Mean</td>
<td/>
<td/>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td valign="top" align="left">ABIDE II</td>
<td valign="top" align="left">BNI</td>
<td valign="top" align="center">56</td>
<td valign="top" align="center">29</td>
<td valign="top" align="center">0.79</td>
<td valign="top" align="center">0.77</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">EMC</td>
<td valign="top" align="center">54</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">0.83</td>
<td valign="top" align="center">0.82</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">ETH</td>
<td valign="top" align="center">37</td>
<td valign="top" align="center">13</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.65</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">GU</td>
<td valign="top" align="center">106</td>
<td valign="top" align="center">51</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.71</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">IP</td>
<td valign="top" align="center">54</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center">0.80</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">IU</td>
<td valign="top" align="center">40</td>
<td valign="top" align="center">20</td>
<td valign="top" align="center">0.91</td>
<td valign="top" align="center">0.88</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">KKI</td>
<td valign="top" align="center">211</td>
<td valign="top" align="center">56</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">0.84</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">NYU</td>
<td valign="top" align="center">77</td>
<td valign="top" align="center">48</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.61</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">OHSU</td>
<td valign="top" align="center">93</td>
<td valign="top" align="center">37</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.69</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">ONRC</td>
<td valign="top" align="center">59</td>
<td valign="top" align="center">24</td>
<td valign="top" align="center">0.79</td>
<td valign="top" align="center">0.79</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">SDSU</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">33</td>
<td valign="top" align="center">0.79</td>
<td valign="top" align="center">0.73</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">SU</td>
<td valign="top" align="center">37</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">0.78</td>
<td valign="top" align="center">0.75</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">TCD</td>
<td valign="top" align="center">42</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.69</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">UCD</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">18</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">0.68</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">UCLA</td>
<td valign="top" align="center">32</td>
<td valign="top" align="center">16</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">0.82</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">USM</td>
<td valign="top" align="center">33</td>
<td valign="top" align="center">17</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.79</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">Mean</td>
<td/>
<td/>
<td valign="top" align="center">0.78</td>
<td valign="top" align="center">0.75</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap position="float" id="T3">
<label>TABLE 3</label>
<caption><p>Leave-site-out cross-validation results using scenario 2 (Node2vec) and CNN classifier.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Sites</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"># Subjects</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"># ASD subjects</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">F-score</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy (<xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">F-score (<xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy (<xref ref-type="bibr" rid="B56">Sherkatghanad et al., 2020</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">F-score (<xref ref-type="bibr" rid="B56">Sherkatghanad et al., 2020</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy (<xref ref-type="bibr" rid="B24">Huang et al., 2020</xref>)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">ABIDE I</td>
<td valign="top" align="left">UCLA-2</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">11</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.68</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.62</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">TRINITY</td>
<td valign="top" align="center">44</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.62</td>
<td valign="top" align="center">0.76</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">UM-2</td>
<td valign="top" align="center">34</td>
<td valign="top" align="center">13</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.77</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">KKI</td>
<td valign="top" align="center">33</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.79</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">YALE</td>
<td valign="top" align="center">41</td>
<td valign="top" align="center">22</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.81</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">PITT</td>
<td valign="top" align="center">50</td>
<td valign="top" align="center">24</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">OLIN</td>
<td valign="top" align="center">28</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.87</td>
<td valign="top" align="center">0.83</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.58</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.76</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">LEUVEN-2</td>
<td valign="top" align="center">28</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.78</td>
<td valign="top" align="center">0.70</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.81</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">STANFORD</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.48</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.79</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">NYU</td>
<td valign="top" align="center">172</td>
<td valign="top" align="center">74</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.58</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.74</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">UM-1</td>
<td valign="top" align="center">86</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.77</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">UCLA-1</td>
<td valign="top" align="center">64</td>
<td valign="top" align="center">37</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.54</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">OHSU</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.57</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.73</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">MAX-MUN</td>
<td valign="top" align="center">46</td>
<td valign="top" align="center">19</td>
<td valign="top" align="center">0.60</td>
<td valign="top" align="center">0.53</td>
<td valign="top" align="center">0.68</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.46</td>
<td valign="top" align="center">0.48</td>
<td valign="top" align="center">0.67</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">LEUVEN-1</td>
<td valign="top" align="center">28</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.81</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">USM</td>
<td valign="top" align="center">67</td>
<td valign="top" align="center">43</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.85</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">SBL</td>
<td valign="top" align="center">26</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.70</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.62</td>
<td valign="top" align="center">0.66</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">SDSU</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.75</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.80</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Mean</td>
<td/>
<td/>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.68</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td valign="top" align="left">ABIDE II</td>
<td valign="top" align="left">BNI</td>
<td valign="top" align="center">56</td>
<td valign="top" align="center">29</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.60</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">EMC</td>
<td valign="top" align="center">54</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.77</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">ETH</td>
<td valign="top" align="center">34</td>
<td valign="top" align="center">13</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">0.65</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">GU</td>
<td valign="top" align="center">106</td>
<td valign="top" align="center">51</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.59</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">IP</td>
<td valign="top" align="center">54</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">0.69</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">IU</td>
<td valign="top" align="center">34</td>
<td valign="top" align="center">20</td>
<td valign="top" align="center">0.75</td>
<td valign="top" align="center">0.67</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">KKI</td>
<td valign="top" align="center">211</td>
<td valign="top" align="center">56</td>
<td valign="top" align="center">0.79</td>
<td valign="top" align="center">0.70</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">NYU</td>
<td valign="top" align="center">77</td>
<td valign="top" align="center">48</td>
<td valign="top" align="center">0.68</td>
<td valign="top" align="center">0.61</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">OHSU</td>
<td valign="top" align="center">93</td>
<td valign="top" align="center">37</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.56</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">ONRC</td>
<td valign="top" align="center">49</td>
<td valign="top" align="center">24</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.75</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">SDSU</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">33</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.58</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">SU</td>
<td valign="top" align="center">54</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">0.70</td>
<td valign="top" align="center">0.64</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">TCD</td>
<td valign="top" align="center">39</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.74</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">UCD</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">18</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.72</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">UCLA</td>
<td valign="top" align="center">32</td>
<td valign="top" align="center">16</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.76</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">USM</td>
<td valign="top" align="center">33</td>
<td valign="top" align="center">17</td>
<td valign="top" align="center">0.70</td>
<td valign="top" align="center">0.64</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">Mean</td>
<td/>
<td/>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.66</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap position="float" id="T4">
<label>TABLE 4</label>
<caption><p>Leave-site-out cross-validation results using scenario 3 (Graph2Img) and CNN classifier.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Sites</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"># Subjects</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;"># ASD subjects</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">F-score</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy (<xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">F-score (<xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy (<xref ref-type="bibr" rid="B56">Sherkatghanad et al., 2020</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">F-score (<xref ref-type="bibr" rid="B56">Sherkatghanad et al., 2020</xref>)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy (<xref ref-type="bibr" rid="B24">Huang et al., 2020</xref>)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">ABIDE I</td>
<td valign="top" align="left">UCLA-2</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">11</td>
<td valign="top" align="center">0.86</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.62</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">TRINITY</td>
<td valign="top" align="center">44</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">0.79</td>
<td valign="top" align="center">0.75</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.62</td>
<td valign="top" align="center">0.76</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">UM-2</td>
<td valign="top" align="center">34</td>
<td valign="top" align="center">13</td>
<td valign="top" align="center">0.78</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.77</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">KKI</td>
<td valign="top" align="center">33</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.79</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">YALE</td>
<td valign="top" align="center">41</td>
<td valign="top" align="center">22</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.81</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">PITT</td>
<td valign="top" align="center">50</td>
<td valign="top" align="center">24</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">OLIN</td>
<td valign="top" align="center">28</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.85</td>
<td valign="top" align="center">0.81</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.58</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.76</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">LEUVEN-2</td>
<td valign="top" align="center">28</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.78</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.81</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">STANFORD</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.94</td>
<td valign="top" align="center">0.92</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.48</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.79</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">NYU</td>
<td valign="top" align="center">172</td>
<td valign="top" align="center">74</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.58</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.74</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">UM-1</td>
<td valign="top" align="center">86</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.77</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">UCLA-1</td>
<td valign="top" align="center">64</td>
<td valign="top" align="center">37</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">OHSU</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.92</td>
<td valign="top" align="center">0.90</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.57</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.73</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">MAX-MUN</td>
<td valign="top" align="center">46</td>
<td valign="top" align="center">19</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.68</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.46</td>
<td valign="top" align="center">0.48</td>
<td valign="top" align="center">0.67</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">LEUVEN-1</td>
<td valign="top" align="center">28</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.82</td>
<td valign="top" align="center">0.78</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.81</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">USM</td>
<td valign="top" align="center">67</td>
<td valign="top" align="center">43</td>
<td valign="top" align="center">0.78</td>
<td valign="top" align="center">0.73</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.69</td>
<td valign="top" align="center">0.85</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">SBL</td>
<td valign="top" align="center">26</td>
<td valign="top" align="center">12</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.71</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.56</td>
<td valign="top" align="center">0.62</td>
<td valign="top" align="center">0.66</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">SDSU</td>
<td valign="top" align="center">27</td>
<td valign="top" align="center">14</td>
<td valign="top" align="center">0.86</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.75</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.80</td>
</tr>
<tr>
<td/>
<td valign="top" align="left">Mean</td>
<td/>
<td/>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.65</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.61</td>
<td valign="top" align="center">0.78</td>
</tr>
<tr>
<td valign="top" align="left">EMCABIDE II</td>
<td valign="top" align="left">BNI</td>
<td valign="top" align="center">56</td>
<td valign="top" align="center">29</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.60</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">EMC</td>
<td valign="top" align="center">54</td>
<td valign="top" align="center">25</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.77</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">ETH</td>
<td valign="top" align="center">34</td>
<td valign="top" align="center">13</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">0.65</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">GU</td>
<td valign="top" align="center">106</td>
<td valign="top" align="center">51</td>
<td valign="top" align="center">0.64</td>
<td valign="top" align="center">0.59</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">IP</td>
<td valign="top" align="center">54</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">0.74</td>
<td valign="top" align="center">0.69</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">IU</td>
<td valign="top" align="center">34</td>
<td valign="top" align="center">20</td>
<td valign="top" align="center">0.75</td>
<td valign="top" align="center">0.67</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">KKI</td>
<td valign="top" align="center">34</td>
<td valign="top" align="center">56</td>
<td valign="top" align="center">0.79</td>
<td valign="top" align="center">0.70</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">NYU</td>
<td valign="top" align="center">77</td>
<td valign="top" align="center">48</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.61</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">OHSU</td>
<td valign="top" align="center">93</td>
<td valign="top" align="center">37</td>
<td valign="top" align="center">0.63</td>
<td valign="top" align="center">0.56</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">ONRC</td>
<td valign="top" align="center">49</td>
<td valign="top" align="center">24</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.75</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">SDSU</td>
<td valign="top" align="center">58</td>
<td valign="top" align="center">33</td>
<td valign="top" align="center">0.67</td>
<td valign="top" align="center">0.58</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">SU</td>
<td valign="top" align="center">54</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">0.70</td>
<td valign="top" align="center">0.64</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">TCD</td>
<td valign="top" align="center">39</td>
<td valign="top" align="center">21</td>
<td valign="top" align="center">0.76</td>
<td valign="top" align="center">0.74</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">UCD</td>
<td valign="top" align="center">31</td>
<td valign="top" align="center">18</td>
<td valign="top" align="center">0.77</td>
<td valign="top" align="center">0.72</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">UCLA</td>
<td valign="top" align="center">32</td>
<td valign="top" align="center">16</td>
<td valign="top" align="center">0.80</td>
<td valign="top" align="center">0.76</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">USM</td>
<td valign="top" align="center">33</td>
<td valign="top" align="center">17</td>
<td valign="top" align="center">0.70</td>
<td valign="top" align="center">0.64</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="left">Mean</td>
<td/>
<td/>
<td valign="top" align="center">0.72</td>
<td valign="top" align="center">0.66</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table></table-wrap>
<table-wrap position="float" id="T5">
<label>TABLE 5</label>
<caption><p>Summary of best performance values and computational time for ABIDE I, in comparison to literature.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;"></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Mean accuracy (10-fold cross-validation)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Mean accuracy (leave-one-site-out)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Computation time</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><xref ref-type="bibr" rid="B22">Heinsfeld et al. (2018)</xref></td>
<td valign="top" align="center">70%</td>
<td valign="top" align="center">65%</td>
<td valign="top" align="center">Over 32 h</td>
</tr>
<tr>
<td valign="top" align="left"><xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref></td>
<td valign="top" align="center">70.22%</td>
<td valign="top" align="center">63%</td>
<td valign="top" align="center">12 h 30 min</td>
</tr>
<tr>
<td valign="top" align="left"><xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref></td>
<td valign="top" align="center">76.4%</td>
<td valign="top" align="center">78.2%</td>
<td valign="top" align="center">96 s</td>
</tr>
<tr>
<td valign="top" align="left"><xref ref-type="bibr" rid="B6">Chen et al. (2022)</xref></td>
<td valign="top" align="center">74.7%</td>
<td valign="top" align="center">69.42%</td>
<td valign="top" align="center">&#x2212;</td>
</tr>
<tr>
<td valign="top" align="left">Proposed method</td>
<td valign="top" align="center">66%</td>
<td valign="top" align="center">80%</td>
<td valign="top" align="center">2 h 47 min 20 s</td>
</tr>
</tbody>
</table></table-wrap>
<p>As shown in <xref ref-type="fig" rid="F5">Figure 5</xref>, compared to <xref ref-type="bibr" rid="B22">Heinsfeld et al. (2018)</xref>, <xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref>, and <xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref>, which achieved the best results in the literature so far, the results depict that the Graph2Img-based CNN can outperform the other supervised methods. From this point of view, these results are in favor of embedding features, not just the deep network.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>Box plot of leave-one-site-out accuracy to compare different embedding scenarios and <xref ref-type="bibr" rid="B22">Heinsfeld et al. (2018)</xref>; <xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref>; <xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref> <italic>vs</italic>. sites, for the ABIDE I dataset.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnsys-16-904770-g005.tif"/>
</fig>
<p>Therefore, several points are worth considering in the results of these two validation methods:</p>
<list list-type="simple">
<list-item>
<label>1.</label>
<p>Embedding features could not improve the results of the k-fold cross-validation, but they do improve those of the leave-one-site-out validation.</p>
</list-item>
<list-item>
<label>2.</label>
<p>The accuracy of the CNN in classifying ASD subjects differs across sites when using graph embedding methods. On average, in the leave-one-site-out validation, all embedding scenarios improved the results compared to using the raw connectivity matrices (<xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>; <xref ref-type="bibr" rid="B56">Sherkatghanad et al., 2020</xref>). The best embedding technique appears to be Graph2Img, which raises the 65% (<xref ref-type="bibr" rid="B22">Heinsfeld et al., 2018</xref>) and 63% (<xref ref-type="bibr" rid="B56">Sherkatghanad et al., 2020</xref>) results to 80%. In our studies, the belief network of <xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref>, with 78.2% mean accuracy, is the main rival of Graph2Img from the leave-one-site-out point of view; it also works with embedding features, as well as a graph-based feature selection method.</p>
</list-item>
<list-item>
<label>3.</label>
<p>Each graph embedding scenario has significantly improved the results of some sites, but not all of the sites.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>The AWE technique is not successful on the fMRI data of the University of Utah School of Medicine (USM), for which <xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref> and <xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref> perform well. For Yale University (YALE), the University of Leuven (LEUVEN), Stanford University (STANFORD), the University of Michigan (UM-1), and the University of California, Los Angeles (UCLA-1), <xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref> reached better results than AWE.</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Likewise, <xref ref-type="bibr" rid="B22">Heinsfeld et al. (2018)</xref> and/or <xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref> outperform embedding scenario 2 (three-channel node2vec with three values of <italic>p</italic> and <italic>q</italic>) on the Kennedy Krieger Institute, Baltimore (KKI), New York University Langone Medical Center (NYU), Ludwig Maximilian University Munich (MAX-MUN), USM, and San Diego State University (SDSU) data. At almost all sites of ABIDE I, scenario 2 reached lower accuracy than <xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref>, except Olin, Institute of Living, Hartford Hospital (OLIN), STANFORD, Oregon Health and Science University (OHSU), LEUVEN, and Social Brain Lab BCN NIC UMC Groningen and Netherlands Institute for Neurosciences (SBL).</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Even for embedding scenario 3 (i.e., Graph2Img), there is a site for which the accuracy of ASD classification is lower than that of <xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref>. The situation is worse against the method of Huang, which works better than Graph2Img for the University of Pittsburgh School of Medicine (PITT), LEUVEN, NYU, UM-1, UCLA-1, and USM. However, the average leave-one-site-out accuracy of Graph2Img (80%) exceeds that of <xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref> (78.2%).</p>
</list-item>
<list-item>
<label>&#x2022;</label>
<p>Scenarios 1 and 3 appear consistent with each other, whereas scenario 2 behaves differently: at the sites where scenarios 1 and 3 obtain good results, scenario 2 does not succeed. These methods may capture different features of the graph, so it is expected that a combination of them, exploiting their seemingly complementary advantages, would reach good classification performance.</p>
</list-item>
</list>
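The combination suggested above can be sketched as simple feature-level fusion: concatenating the per-subject vectors produced by each embedding scenario before classification. This is only an illustrative sketch; the array names and dimensions below are assumptions, not the values used in this study.

```python
import numpy as np

def fuse_embeddings(*scenario_feats):
    """Feature-level fusion: concatenate the per-subject embedding
    vectors produced by different graph-embedding scenarios."""
    return np.concatenate(scenario_feats, axis=1)

# Hypothetical per-subject features from the three scenarios
# (AWE, multi-parameter node2vec, Graph2Img), for 10 subjects.
awe   = np.zeros((10, 8))
n2v   = np.zeros((10, 24))
g2img = np.zeros((10, 32))

fused = fuse_embeddings(awe, n2v, g2img)
print(fused.shape)  # (10, 64): one wider feature vector per subject
```

The fused matrix can then be fed to any downstream classifier in place of a single scenario's features.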
<list list-type="simple">
<list-item>
<label>4.</label>
<p>For the ABIDE II database, scenario 1 (the AWE method) reached the best mean accuracy. The best individual-site result also belongs to the AWE method, on the KKI data.</p>
</list-item>
</list>
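For intuition, the anonymous-walk idea underlying AWE can be sketched as follows: random walks are anonymized by replacing each node with the index of its first occurrence, and the empirical distribution over anonymous patterns serves as a graph-level feature vector. This is a minimal sampling-based sketch of the general idea, not the implementation used in this study; the toy ring graph is an assumption for illustration.

```python
import numpy as np
from collections import Counter

def anonymize(walk):
    """Map a walk to its anonymous pattern, e.g. (a, b, a, c) -> (0, 1, 0, 2)."""
    seen = {}
    return tuple(seen.setdefault(v, len(seen)) for v in walk)

def awe_features(adj, walk_len=4, n_walks=2000, seed=0):
    """Empirical distribution over anonymous walks of fixed length,
    estimated by uniform random walks on a binary adjacency matrix."""
    rng = np.random.default_rng(seed)
    counts = Counter()
    for _ in range(n_walks):
        walk = [rng.integers(adj.shape[0])]          # random start node
        for _ in range(walk_len - 1):
            walk.append(rng.choice(np.flatnonzero(adj[walk[-1]])))
        counts[anonymize(walk)] += 1
    patterns = sorted(counts)  # deterministic order of observed patterns
    return patterns, np.array([counts[p] / n_walks for p in patterns])

# Toy graph: a 4-node ring (every node has degree 2).
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
pats, vec = awe_features(ring)
print(pats)       # anonymous walk patterns observed on the ring
print(vec.sum())  # empirical probabilities sum to 1.0
```

Because anonymization discards node identities, two graphs with similar local structure yield similar pattern distributions, which is what makes such vectors comparable across subjects.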
<p>However, the most prominent advantage of the proposed algorithm is its training time. We used a system with two Intel Xeon E5-2620 processors (24 cores at 2 GHz) and 48 GB of RAM, together with one Tesla K40 GPU (2880 CUDA cores, 12 GB of RAM) to accelerate training. With this setup, the entire training took about 200 min. In <xref ref-type="table" rid="T5">Table 5</xref>, the training time of <xref ref-type="bibr" rid="B22">Heinsfeld et al. (2018)</xref>, <xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref>, and our proposed method is compared. This speed-up is due to the dimension reduction property of the graph embedding methods, which decreases the dimensionality of the CNN input.</p>
<p>The results show that the proposed algorithm, using embedded vectors of connectivity graph, and the CNN classifier, outperforms the previous studies in the identification of autism spectrum disorder, from both speed and accuracy points of view.</p>
<p>Since the functioning of the brain involves interactions and connections between different functional areas, healthy and autistic behaviors could be discriminated by assessing brain network dynamics (<xref ref-type="bibr" rid="B30">Kim et al., 2017</xref>). Indeed, cognitive disorders emerge because of the alteration of dynamic relationships between pairs of specific brain regions. We therefore argue that a powerful learning method that considers the coupling, similarity, causality, and synchronization intensity between specific brain regions should be able to detect cognitive impairment.</p>
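As a minimal illustration of the coupling and synchrony measures mentioned above, a functional connectivity matrix can be sketched as the Pearson correlation between ROI time series. The synthetic signals and the `connectivity_matrix` helper below are illustrative assumptions, not the preprocessing pipeline used in this study.

```python
import numpy as np

def connectivity_matrix(roi_signals):
    """Pearson-correlation functional connectivity.

    roi_signals: array of shape (n_rois, n_timepoints), one BOLD
    time series per brain region (ROI).
    Returns an (n_rois, n_rois) symmetric correlation matrix.
    """
    return np.corrcoef(roi_signals)

# Toy example: 4 ROIs, 100 time points of synthetic signal.
rng = np.random.default_rng(0)
signals = rng.standard_normal((4, 100))
signals[1] = signals[0] + 0.1 * rng.standard_normal(100)  # coupled pair

C = connectivity_matrix(signals)
print(C.shape)        # (4, 4)
print(C[0, 1] > 0.9)  # strongly coupled ROIs correlate highly
```

Such a matrix can then be treated as the weighted adjacency matrix of a brain graph, which is the input to the embedding scenarios discussed above.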
<p>For a complete comparison, we considered the literature in which a functional connectivity matrix is used to discriminate between healthy and autistic subjects based on the ABIDE database, whether using conventional classifiers, deep networks, or statistical tests. The result of this comparison is shown in <xref ref-type="fig" rid="F5">Figure 5</xref>, illustrating that embedded features achieve better results than other feature extraction methods and that deep neural networks hold much greater promise than conventional classifiers. In other words, since the AWE (scenario 1), Graph2Img (scenario 3), and multi-parameter node2vec (scenario 2) algorithms achieve better classification results with the CNN classifier (in leave-one-site-out validation), we argue that embedded features that capture the structure of the brain&#x2019;s functional connectivity are better suited to ASD detection. The k-fold cross-validation results, although not high enough for clinical usage (66 and 64% for ABIDE I and II, respectively), show that strong alterations of brain connections occur in autism disorder.</p>
<p>Our best average 10-fold cross-validation result is 66%, compared to about 70% for <xref ref-type="bibr" rid="B22">Heinsfeld et al. (2018)</xref>. Instead, as shown in <xref ref-type="table" rid="T2">Tables 2</xref>&#x2013;<xref ref-type="table" rid="T4">4</xref> and <xref ref-type="fig" rid="F5">Figures 5</xref>, <xref ref-type="fig" rid="F6">6</xref>, for leave-one-site-out validation, both in mean accuracy over sites and in most individual site accuracies, our proposed method is clearly much better than <xref ref-type="bibr" rid="B22">Heinsfeld et al. (2018)</xref> and <xref ref-type="bibr" rid="B56">Sherkatghanad et al. (2020)</xref>, and slightly better than <xref ref-type="bibr" rid="B24">Huang et al. (2020)</xref>. These results show that, as the sample size decreases (5-fold and 10-fold cross-validation vs. leave-one-site-out), the gap between the performance of the embedding vectors and the raw connectivity matrix increases. This implies that using embedding vectors is an effective idea, but more investigation is still needed to find a more suitable graph representation method, given the intrinsic complexity of brain function.</p>
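The leave-one-site-out protocol behind these comparisons can be sketched as follows, with a simple nearest-centroid classifier standing in for the CNN. All names and the synthetic data are illustrative assumptions, not the study's actual model or subjects.

```python
import numpy as np

def leave_one_site_out(features, labels, sites, fit, predict):
    """Per-site accuracies under leave-one-site-out validation.

    Each unique value in `sites` is held out once as the test set,
    while all remaining sites form the training set.
    """
    accs = {}
    for site in np.unique(sites):
        test = sites == site
        model = fit(features[~test], labels[~test])
        accs[site] = np.mean(predict(model, features[test]) == labels[test])
    return accs

# Nearest-centroid stand-in for the CNN classifier (illustrative only).
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = np.array(list(model))
    d = np.stack([np.linalg.norm(X - m, axis=1) for m in model.values()])
    return classes[d.argmin(axis=0)]

# Synthetic, well-separated two-class data spread over three sites.
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 16)) + np.repeat([0, 2], 30)[:, None]
y = np.repeat([0, 1], 30)
sites = np.tile(["NYU", "USM", "KKI"], 20)
accs = leave_one_site_out(X, y, sites, fit, predict)
print(accs)  # one accuracy per held-out site
```

Averaging the per-site accuracies gives the "mean accuracy (leave-one-site-out)" figure reported in Table 5.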
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption><p>Box plot of leave-one-site-out accuracy to compare different embedding scenarios <italic>vs</italic>. sites, for the ABIDE II dataset.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fnsys-16-904770-g006.tif"/>
</fig>
</sec>
<sec id="S5" sec-type="conclusion">
<title>5. Conclusion and future work</title>
<p>There are two messages in the obtained results. First, the intrinsic phenotypical properties of subjects within each site lead to a specific structure in their connectivity graph, in addition to the distinct ASD/healthy indicator; different embedding techniques capture some of these properties. Second, a suitable combination of graph embedding techniques is an alternative approach to capture all graph similarities in the ASD group regardless of the phenotypes.</p>
<p>The better mean accuracy of the leave-one-site-out validation compared to that of k-fold cross-validation again points to the variance of graph structures between sites due to within-site phenotypes. In a random group, finding the common structures relevant only to ASD would be too difficult for an embedding technique; this is the main reason that prevents the embedding techniques from capturing better results than the raw connectivity matrices.</p>
<p>Another interesting result is that the various techniques differ in the sites at which they can successfully detect ASD/normal status. This supports a combinational technique, gathering all of their characteristics, to obtain a biomarker of ASD.</p>
<p>In this article, we showed that by using structural graph representation algorithms, it is possible to classify subject groups based on the connectivity fingerprints of brain regions. Therefore, our idea of using the information of node structures as a new, low-dimensional source might increase classification performance. However, such dimension reduction may lead to more ambiguity about the place of alteration in the connectivity matrix. In other words, we did not analyze the results to learn what kind these alterations were, or where they occur. This is the drawback of our proposed algorithm: we could not identify the ROIs whose connectivity strength values are altered. In fact, the main requirement for a suitable embedding algorithm for brain networks is that the representations that emerge be neurobiologically plausible and meaningful. From this point of view, one could predict the mechanism and cause underlying an impaired brain network during mental disorders. This is our future concern.</p>
</sec>
<sec id="S6" sec-type="data-availability">
<title>Data availability statement</title>
<p>Publicly available datasets were analyzed in this study. This data can be found here: <ext-link ext-link-type="uri" xlink:href="http://fcon_1000.projects.nitrc.org/indi/abide/">http://fcon_1000.projects.nitrc.org/indi/abide/</ext-link>.</p>
</sec>
<sec id="S7" sec-type="ethics-statement">
<title>Ethics statement</title>
<p>Written informed consent was obtained from the individual(s), and minor(s)&#x2019; legal guardian/next of kin, for the publication of any potentially identifiable images or data included in this article.</p>
</sec>
<sec id="S8" sec-type="author-contributions">
<title>Author contributions</title>
<p>ZM and FS conceived of the presented idea. AY performed the computations. FS verified the analytical methods and wrote the manuscript. ZM and FS encouraged AY to investigate and supervised the findings of this work. All authors discussed the results and contributed to the final manuscript.</p>
</sec>
</body>
<back>
<sec id="S9" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="S10" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec id="S11" sec-type="supplementary-material">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fnsys.2022.904770/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fnsys.2022.904770/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.docx" id="DS1" mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<fn-group>
<fn id="footnote1">
<label>1</label>
<p><ext-link ext-link-type="uri" xlink:href="http://fcon_1000.projects.nitrc.org/indi/abide/">http://fcon_1000.projects.nitrc.org/indi/abide/</ext-link></p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Al-Hiyali</surname> <given-names>M. I.</given-names></name> <name><surname>Yahya</surname> <given-names>N.</given-names></name> <name><surname>Faye</surname> <given-names>I.</given-names></name> <name><surname>Hussein</surname> <given-names>A. F.</given-names></name></person-group> (<year>2021</year>). <article-title>Identification of autism subtypes based on wavelet coherence of BOLD FMRI signals using convolutional neural network.</article-title> <source><italic>Sensors</italic></source> <volume>21</volume>:<issue>5256</issue>. <pub-id pub-id-type="doi">10.3390/s21165256</pub-id> <pub-id pub-id-type="pmid">34450699</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Almuqhim</surname> <given-names>F.</given-names></name> <name><surname>Saeed</surname> <given-names>F.</given-names></name></person-group> (<year>2021</year>). <article-title>ASD-SAENet: A sparse autoencoder, and deep-neural network model for detecting autism spectrum disorder (ASD) using fMRI data.</article-title> <source><italic>Front. Comput. Neurosci.</italic></source> <volume>15</volume>:<issue>654315</issue>. <pub-id pub-id-type="doi">10.3389/fncom.2021.654315</pub-id> <pub-id pub-id-type="pmid">33897398</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Banka</surname> <given-names>A.</given-names></name> <name><surname>Rekik</surname> <given-names>I.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Adversarial connectome embedding for mild cognitive impairment identification using cortical morphological networks</article-title>,&#x201D; in <source><italic>Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics)</italic></source>, (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer Science+Business Media</publisher-name>). <pub-id pub-id-type="doi">10.1007/978-3-030-32391-2_8</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bessadok</surname> <given-names>A.</given-names></name> <name><surname>Mahjoub</surname> <given-names>M. A.</given-names></name> <name><surname>Rekik</surname> <given-names>I.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Symmetric dual adversarial connectomic domain alignment for predicting isomorphic brain graph from a baseline graph</article-title>,&#x201D; in <source><italic>Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics)</italic></source>, (<publisher-name>MICCAI</publisher-name>: <publisher-loc>Shenzhen</publisher-loc>). <pub-id pub-id-type="doi">10.1007/978-3-030-32251-9_51</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brier</surname> <given-names>M. R.</given-names></name> <name><surname>Mitra</surname> <given-names>A.</given-names></name> <name><surname>McCarthy</surname> <given-names>J. E.</given-names></name> <name><surname>Ances</surname> <given-names>B. M.</given-names></name> <name><surname>Snyder</surname> <given-names>A. Z.</given-names></name></person-group> (<year>2015</year>). <article-title>Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization.</article-title> <source><italic>Neuroimage</italic></source> <volume>121</volume> <fpage>29</fpage>&#x2013;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2015.07.039</pub-id> <pub-id pub-id-type="pmid">26208872</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Yan</surname> <given-names>J.</given-names></name> <name><surname>Jiang</surname> <given-names>M.</given-names></name> <name><surname>Zhang</surname> <given-names>T.</given-names></name> <name><surname>Zhao</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>W.</given-names></name><etal/></person-group> (<year>2022</year>). <article-title>Adversarial learning based node-edge graph attention networks for autism spectrum disorder identification.</article-title> <source><italic>IEEE Trans. Neural Netw. Learn. Syst.</italic></source> <volume>33</volume> <fpage>1</fpage>&#x2013;<lpage>22</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2022.3154755</pub-id> <pub-id pub-id-type="pmid">35286265</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dadi</surname> <given-names>K.</given-names></name> <name><surname>Rahim</surname> <given-names>M.</given-names></name> <name><surname>Abraham</surname> <given-names>A.</given-names></name> <name><surname>Chyzhyk</surname> <given-names>D.</given-names></name> <name><surname>Milham</surname> <given-names>M.</given-names></name> <name><surname>Thirion</surname> <given-names>B.</given-names></name><etal/></person-group> (<year>2019</year>). <article-title>Benchmarking functional connectome-based predictive models for resting-state fMRI.</article-title> <source><italic>Neuroimage</italic></source> <volume>192</volume> <fpage>115</fpage>&#x2013;<lpage>134</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2019.02.062</pub-id> <pub-id pub-id-type="pmid">30836146</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>de Martino</surname> <given-names>F.</given-names></name> <name><surname>Gentile</surname> <given-names>F.</given-names></name> <name><surname>Esposito</surname> <given-names>F.</given-names></name> <name><surname>Balsi</surname> <given-names>M.</given-names></name> <name><surname>di Salle</surname> <given-names>F.</given-names></name> <name><surname>Goebel</surname> <given-names>R.</given-names></name><etal/></person-group> (<year>2007</year>). <article-title>Classification of fMRI independent components using IC-fingerprints and support vector machine classifiers.</article-title> <source><italic>Neuroimage</italic></source> <volume>34</volume> <fpage>177</fpage>&#x2013;<lpage>194</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2006.08.041</pub-id> <pub-id pub-id-type="pmid">17070708</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>di Martino</surname> <given-names>A.</given-names></name> <name><surname>Yan</surname> <given-names>C. G.</given-names></name> <name><surname>Li</surname> <given-names>Q.</given-names></name> <name><surname>Denio</surname> <given-names>E.</given-names></name> <name><surname>Castellanos</surname> <given-names>F. X.</given-names></name> <name><surname>Alaerts</surname> <given-names>K.</given-names></name><etal/></person-group> (<year>2014</year>). <article-title>The autism brain imaging data exchange: Towards a large-scale evaluation of the intrinsic brain architecture in autism.</article-title> <source><italic>Mol. Psychiatry</italic></source> <volume>19</volume> <fpage>659</fpage>&#x2013;<lpage>667</lpage>. <pub-id pub-id-type="doi">10.1038/mp.2013.78</pub-id> <pub-id pub-id-type="pmid">23774715</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Donnat</surname> <given-names>C.</given-names></name> <name><surname>Zitnik</surname> <given-names>M.</given-names></name> <name><surname>Hallac</surname> <given-names>D.</given-names></name> <name><surname>Leskovec</surname> <given-names>J.</given-names></name></person-group> (<year>2018</year>). &#x201C;<article-title>Learning structural node embeddings via diffusion wavelets</article-title>,&#x201D; in <source><italic>Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining</italic></source>, (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>). <pub-id pub-id-type="doi">10.1145/3219819.3220025</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>El Gazzar</surname> <given-names>A.</given-names></name> <name><surname>Cerliani</surname> <given-names>L.</given-names></name> <name><surname>van Wingen</surname> <given-names>G.</given-names></name> <name><surname>Thomas</surname> <given-names>R. M.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Simple 1-D convolutional networks for resting-state fMRI based classification in autism</article-title>,&#x201D; in <source><italic>Proceeding of the 2019 international joint conference on neural networks (IJCNN)</italic></source>, (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>6</lpage>.</citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Elder</surname> <given-names>J. H.</given-names></name> <name><surname>Kreider</surname> <given-names>C. M.</given-names></name> <name><surname>Brasher</surname> <given-names>S. N.</given-names></name> <name><surname>Ansell</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Clinical impact of early diagnosis of autism on the prognosis and parent-child relationships.</article-title> <source><italic>Psychol. Res. Behav. Manag.</italic></source> <volume>10</volume> <fpage>283</fpage>&#x2013;<lpage>292</lpage>. <pub-id pub-id-type="doi">10.2147/PRBM.S117499</pub-id> <pub-id pub-id-type="pmid">28883746</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eslami</surname> <given-names>T.</given-names></name> <name><surname>Saeed</surname> <given-names>F.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Auto-AsD-Network: A technique based on deep learning and support vector machines for diagnosing autism spectrum disorder using fMRI data</article-title>,&#x201D; in <source><italic>ACM-BCB 2019 - Proceedings of the 10th ACM international conference on bioinformatics, computational biology and health informatics</italic></source>, (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Association for Computing Machinery</publisher-name>). <pub-id pub-id-type="doi">10.1145/3307339.3343482</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feng</surname> <given-names>Y.</given-names></name> <name><surname>You</surname> <given-names>H.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Ji</surname> <given-names>R.</given-names></name> <name><surname>Gao</surname> <given-names>Y.</given-names></name></person-group> (<year>2019</year>). <article-title>Hypergraph neural networks.</article-title> <source><italic>Proc. Conf. AAAI Artif. Intell.</italic></source> <volume>33</volume> <fpage>3558</fpage>&#x2013;<lpage>3565</lpage>.</citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fletcher</surname> <given-names>P. T.</given-names></name> <name><surname>Joshi</surname> <given-names>S.</given-names></name></person-group> (<year>2007</year>). <article-title>Riemannian geometry for the statistical analysis of diffusion tensor data.</article-title> <source><italic>Signal Proc.</italic></source> <volume>87</volume> <fpage>250</fpage>&#x2013;<lpage>262</lpage>. <pub-id pub-id-type="doi">10.1016/j.sigpro.2005.12.018</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Frye</surname> <given-names>R. E.</given-names></name> <name><surname>Vassall</surname> <given-names>S.</given-names></name> <name><surname>Kaur</surname> <given-names>G.</given-names></name> <name><surname>Lewis</surname> <given-names>C.</given-names></name> <name><surname>Karim</surname> <given-names>M.</given-names></name> <name><surname>Rossignol</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Emerging biomarkers in autism spectrum disorder: A systematic review.</article-title> <source><italic>Ann. Trans. Med.</italic></source> <volume>7</volume>:<issue>792</issue>.</citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>H.</given-names></name> <name><surname>Ji</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Graph U-nets</article-title>,&#x201D; in <source><italic>proceeding of the 36th international conference on machine learning, ICML 2019</italic></source>, (<publisher-loc>Long Beach, CA</publisher-loc>: <publisher-name>Long Beach Convention Center</publisher-name>).</citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grover</surname> <given-names>A.</given-names></name> <name><surname>Leskovec</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). &#x201C;<article-title>Node2vec: Scalable feature learning for networks</article-title>,&#x201D; in <source><italic>Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining</italic></source>, (<publisher-loc>San Francisco</publisher-loc>), <fpage>855</fpage>&#x2013;<lpage>864</lpage>. <pub-id pub-id-type="doi">10.1145/2939672.2939754</pub-id> <pub-id pub-id-type="pmid">27853626</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guo</surname> <given-names>Y.</given-names></name> <name><surname>Ding</surname> <given-names>G.</given-names></name> <name><surname>Liu</surname> <given-names>L.</given-names></name> <name><surname>Han</surname> <given-names>J.</given-names></name> <name><surname>Shao</surname> <given-names>L.</given-names></name></person-group> (<year>2017</year>). <article-title>Learning to hash with optimized anchor embedding for scalable retrieval.</article-title> <source><italic>IEEE Trans. Image Proc.</italic></source> <volume>26</volume> <fpage>1344</fpage>&#x2013;<lpage>1354</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2017.2652730</pub-id> <pub-id pub-id-type="pmid">28092559</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hamilton William</surname> <given-names>L.</given-names></name> <name><surname>Ying</surname> <given-names>R.</given-names></name> <name><surname>Leskovec</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <article-title>Representation learning on graphs: Methods and applications.</article-title> <source><italic>Arxiv.Org</italic></source> [<comment>Preprint</comment>].</citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>Z.</given-names></name> <name><surname>Du</surname> <given-names>L.</given-names></name> <name><surname>Huang</surname> <given-names>Y.</given-names></name> <name><surname>Jiang</surname> <given-names>X.</given-names></name> <name><surname>Lv</surname> <given-names>J.</given-names></name> <name><surname>Guo</surname> <given-names>L.</given-names></name><etal/></person-group> (<year>2022</year>). <article-title>Gyral hinges account for the highest cost and the highest communication capacity in a corticocortical network.</article-title> <source><italic>Cereb. Cortex</italic></source> <volume>32</volume> <fpage>3359</fpage>&#x2013;<lpage>3376</lpage>.</citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Heinsfeld</surname> <given-names>A. S.</given-names></name> <name><surname>Franco</surname> <given-names>A. R.</given-names></name> <name><surname>Craddock</surname> <given-names>R. C.</given-names></name> <name><surname>Buchweitz</surname> <given-names>A.</given-names></name> <name><surname>Meneguzzi</surname> <given-names>F.</given-names></name></person-group> (<year>2018</year>). <article-title>Identification of autism spectrum disorder using deep learning and the ABIDE dataset.</article-title> <source><italic>Neuroimage Clin.</italic></source> <volume>17</volume> <fpage>16</fpage>&#x2013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1016/j.nicl.2017.08.017</pub-id> <pub-id pub-id-type="pmid">29034163</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>H.</given-names></name> <name><surname>Hu</surname> <given-names>X.</given-names></name> <name><surname>Han</surname> <given-names>J.</given-names></name> <name><surname>Lv</surname> <given-names>J.</given-names></name> <name><surname>Liu</surname> <given-names>N.</given-names></name> <name><surname>Guo</surname> <given-names>L.</given-names></name><etal/></person-group> (<year>2016</year>). &#x201C;<article-title>Latent source mining in FMRI data via deep neural network</article-title>,&#x201D; in <source><italic>Proceedings - international symposium on biomedical imaging</italic></source>, (<publisher-loc>Prague</publisher-loc>: <publisher-name>IEEE</publisher-name>). <pub-id pub-id-type="doi">10.1109/ISBI.2016.7493348</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>Z. A.</given-names></name> <name><surname>Zhu</surname> <given-names>Z.</given-names></name> <name><surname>Yau</surname> <given-names>C. H.</given-names></name> <name><surname>Tan</surname> <given-names>K. C.</given-names></name></person-group> (<year>2020</year>). <article-title>Identifying autism spectrum disorder from resting-state fMRI using deep belief network.</article-title> <source><italic>IEEE Trans. Neural Netw. Learn. Syst.</italic></source> <volume>32</volume> <fpage>2847</fpage>&#x2013;<lpage>2861</lpage>. <pub-id pub-id-type="doi">10.1109/TNNLS.2020.3007943</pub-id> <pub-id pub-id-type="pmid">32692687</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ivanov</surname> <given-names>S.</given-names></name> <name><surname>Burnaev</surname> <given-names>E.</given-names></name></person-group> (<year>2018</year>). <article-title>Anonymous walk embeddings. 35th international conference on machine learning.</article-title> <source><italic>ICML</italic></source> <volume>2018</volume>:<issue>5</issue>.</citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Joel</surname> <given-names>S. E.</given-names></name> <name><surname>Caffo</surname> <given-names>B. S.</given-names></name> <name><surname>van Zijl</surname> <given-names>P. C. M.</given-names></name> <name><surname>Pekar</surname> <given-names>J. J.</given-names></name></person-group> (<year>2011</year>). <article-title>On the relationship between seed-based and ICA-based measures of functional connectivity.</article-title> <source><italic>Magnetic Res. Med.</italic></source> <volume>66</volume> <fpage>644</fpage>&#x2013;<lpage>657</lpage>. <pub-id pub-id-type="doi">10.1002/mrm.22818</pub-id> <pub-id pub-id-type="pmid">21394769</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kazeminejad</surname> <given-names>A.</given-names></name> <name><surname>Sotero</surname> <given-names>R. C.</given-names></name></person-group> (<year>2020</year>). <article-title>The Importance of Anti-correlations in Graph Theory Based Classification of Autism Spectrum Disorder.</article-title> <source><italic>Front. Neurosci.</italic></source> <volume>14</volume>:<issue>676</issue>. <pub-id pub-id-type="doi">10.3389/fnins.2020.00676</pub-id> <pub-id pub-id-type="pmid">32848533</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khosla</surname> <given-names>M.</given-names></name> <name><surname>Setty</surname> <given-names>V.</given-names></name> <name><surname>Anand</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>A comparative study for unsupervised network representation learning.</article-title> <source><italic>IEEE Trans. Knowl. Data Eng.</italic></source> <volume>33</volume> <fpage>1807</fpage>&#x2013;<lpage>1818</lpage>. <pub-id pub-id-type="doi">10.1109/TKDE.2019.2951398</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>J.</given-names></name> <name><surname>Calhoun</surname> <given-names>V. D.</given-names></name> <name><surname>Shim</surname> <given-names>E.</given-names></name> <name><surname>Lee</surname> <given-names>J. H.</given-names></name></person-group> (<year>2016</year>). <article-title>Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia.</article-title> <source><italic>Neuroimage</italic></source> <volume>124</volume> <fpage>127</fpage>&#x2013;<lpage>146</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2015.05.018</pub-id> <pub-id pub-id-type="pmid">25987366</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>J.</given-names></name> <name><surname>Criaud</surname> <given-names>M.</given-names></name> <name><surname>Cho</surname> <given-names>S. S.</given-names></name> <name><surname>D&#x00ED;ez-Cirarda</surname> <given-names>M.</given-names></name> <name><surname>Mihaescu</surname> <given-names>A.</given-names></name> <name><surname>Coakeley</surname> <given-names>S.</given-names></name><etal/></person-group> (<year>2017</year>). <article-title>Abnormal intrinsic brain functional network dynamics in Parkinson&#x2019;s disease.</article-title> <source><italic>Brain</italic></source> <volume>140</volume> <fpage>2955</fpage>&#x2013;<lpage>2967</lpage>. <pub-id pub-id-type="doi">10.1093/brain/awx233</pub-id> <pub-id pub-id-type="pmid">29053835</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kipf</surname> <given-names>T. N.</given-names></name> <name><surname>Welling</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). &#x201C;<article-title>Semi-supervised classification with graph convolutional networks</article-title>,&#x201D; in <source><italic>proceeding of the 5th international conference on learning representations, ICLR 2017 - conference track proceedings</italic></source>, (<publisher-loc>Toulon</publisher-loc>).</citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ktena</surname> <given-names>S. I.</given-names></name> <name><surname>Parisot</surname> <given-names>S.</given-names></name> <name><surname>Ferrante</surname> <given-names>E.</given-names></name> <name><surname>Rajchl</surname> <given-names>M.</given-names></name> <name><surname>Lee</surname> <given-names>M.</given-names></name> <name><surname>Glocker</surname> <given-names>B.</given-names></name><etal/></person-group> (<year>2017</year>). &#x201C;<article-title>Distance metric learning using graph convolutional networks: Application to functional brain networks</article-title>,&#x201D; in <source><italic>Lecture notes in computer science (Including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), 10433 LNCS</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Descoteaux</surname> <given-names>M.</given-names></name> <name><surname>Maier-Hein</surname> <given-names>L.</given-names></name> <name><surname>Franz</surname> <given-names>A.</given-names></name> <name><surname>Jannin</surname> <given-names>P.</given-names></name> <name><surname>Collins</surname> <given-names>D.</given-names></name> <name><surname>Duchesne</surname> <given-names>S.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>). <pub-id pub-id-type="doi">10.1007/978-3-319-66182-7_54</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuang</surname> <given-names>D.</given-names></name> <name><surname>Guo</surname> <given-names>X.</given-names></name> <name><surname>An</surname> <given-names>X.</given-names></name> <name><surname>Zhao</surname> <given-names>Y.</given-names></name> <name><surname>He</surname> <given-names>L.</given-names></name></person-group> (<year>2014</year>). &#x201C;<article-title>Discrimination of ADHD based on fMRI data with deep belief network</article-title>,&#x201D; in <source><italic>Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), 8590 LNBI</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Huang</surname> <given-names>D.</given-names></name> <name><surname>Han</surname> <given-names>K.</given-names></name> <name><surname>Gromiha</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>). <pub-id pub-id-type="doi">10.1007/978-3-319-09330-7_27</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kumar</surname> <given-names>M.</given-names></name> <name><surname>Ellis</surname> <given-names>C. T.</given-names></name> <name><surname>Lu</surname> <given-names>Q.</given-names></name> <name><surname>Zhang</surname> <given-names>H.</given-names></name> <name><surname>Capot&#x00E3;</surname> <given-names>M.</given-names></name> <name><surname>Willke</surname> <given-names>T. L.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>BrainIAK tutorials: User-friendly learning materials for advanced fMRI analysis.</article-title> <source><italic>PLoS Comput. Biol.</italic></source> <volume>16</volume>:<issue>e1007549</issue>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1007549</pub-id> <pub-id pub-id-type="pmid">31940340</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>LeCun</surname> <given-names>Y.</given-names></name> <name><surname>Bottou</surname> <given-names>L.</given-names></name> <name><surname>Bengio</surname> <given-names>Y.</given-names></name> <name><surname>Haffner</surname> <given-names>P.</given-names></name></person-group> (<year>1998</year>). <article-title>Gradient-based learning applied to document recognition.</article-title> <source><italic>Proc. IEEE</italic></source> <volume>86</volume> <fpage>2278</fpage>&#x2013;<lpage>2324</lpage>. <pub-id pub-id-type="doi">10.1109/5.726791</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ledoit</surname> <given-names>O.</given-names></name> <name><surname>Wolf</surname> <given-names>M.</given-names></name></person-group> (<year>2004</year>). <article-title>A well-conditioned estimator for large-dimensional covariance matrices.</article-title> <source><italic>J. Mult. Anal.</italic></source> <volume>88</volume> <fpage>365</fpage>&#x2013;<lpage>411</lpage>. <pub-id pub-id-type="doi">10.1016/S0047-259X(03)00096-4</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>H.</given-names></name> <name><surname>Parikh</surname> <given-names>N. A.</given-names></name> <name><surname>He</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>A novel transfer learning approach to enhance deep neural network classification of brain functional connectomes.</article-title> <source><italic>Front. Neurosci.</italic></source> <volume>12</volume>:<issue>491</issue>. <pub-id pub-id-type="doi">10.3389/fnins.2018.00491</pub-id> <pub-id pub-id-type="pmid">30087587</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Loey</surname> <given-names>M.</given-names></name> <name><surname>El-Bakry</surname> <given-names>H.</given-names></name> <name><surname>El-Sawy</surname> <given-names>A.</given-names></name></person-group> (<year>2016</year>). &#x201C;<article-title>CNN for handwritten arabic digits recognition based on LeNet-5</article-title>,&#x201D; in <source><italic>Proceedings of the international conference on advanced intelligent systems and informatics</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Hassanien</surname> <given-names>A.</given-names></name> <name><surname>Shaalan</surname> <given-names>K.</given-names></name> <name><surname>Gaber</surname> <given-names>T.</given-names></name> <name><surname>Azar</surname> <given-names>A.</given-names></name> <name><surname>Tolba</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>1</fpage>.</citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meng</surname> <given-names>L.</given-names></name> <name><surname>Xiang</surname> <given-names>J.</given-names></name></person-group> (<year>2018</year>). <article-title>Brain network analysis and classification based on convolutional neural network.</article-title> <source><italic>Front. Comput. Neurosci.</italic></source> <volume>12</volume>:<issue>95</issue>. <pub-id pub-id-type="doi">10.3389/fncom.2018.00095</pub-id> <pub-id pub-id-type="pmid">30618690</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meszl&#x00E9;nyi</surname> <given-names>R. J.</given-names></name> <name><surname>Buza</surname> <given-names>K.</given-names></name> <name><surname>Vidny&#x00E1;nszky</surname> <given-names>Z.</given-names></name></person-group> (<year>2017</year>). <article-title>Resting state fMRI functional connectivity-based classification using a convolutional neural network architecture.</article-title> <source><italic>Front. Neuroinform.</italic></source> <volume>11</volume>:<issue>16</issue>. <pub-id pub-id-type="doi">10.3389/fninf.2017.00061</pub-id> <pub-id pub-id-type="pmid">29089883</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nickel</surname> <given-names>R. E.</given-names></name> <name><surname>Huang-Storms</surname> <given-names>L.</given-names></name></person-group> (<year>2017</year>). <article-title>Early Identification of Young Children with Autism Spectrum Disorder.</article-title> <source><italic>Indian J. Pediatr.</italic></source> <volume>84</volume> <fpage>53</fpage>&#x2013;<lpage>60</lpage>. <pub-id pub-id-type="doi">10.1007/s12098-015-1894-0</pub-id> <pub-id pub-id-type="pmid">26411730</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nie</surname> <given-names>D.</given-names></name> <name><surname>Zhang</surname> <given-names>H.</given-names></name> <name><surname>Adeli</surname> <given-names>E.</given-names></name> <name><surname>Liu</surname> <given-names>L.</given-names></name> <name><surname>Shen</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). &#x201C;<article-title>3D deep learning for multi-modal imaging-guided survival time prediction of brain tumor patients</article-title>,&#x201D; in <source><italic>Lecture notes in computer science (Including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics)</italic></source>, (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>). <pub-id pub-id-type="doi">10.1007/978-3-319-46723-8_25</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Niepert</surname> <given-names>M.</given-names></name> <name><surname>Ahmad</surname> <given-names>M.</given-names></name> <name><surname>Kutzkov</surname> <given-names>K.</given-names></name></person-group> (<year>2016</year>). <article-title>Learning convolutional neural networks for graphs. 33rd international conference on machine learning.</article-title> <source><italic>ICML</italic></source> <volume>2016</volume>:<issue>4</issue>.</citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Parisot</surname> <given-names>S.</given-names></name> <name><surname>Ktena</surname> <given-names>S. I.</given-names></name> <name><surname>Ferrante</surname> <given-names>E.</given-names></name> <name><surname>Lee</surname> <given-names>M.</given-names></name> <name><surname>Guerrero</surname> <given-names>R.</given-names></name> <name><surname>Glocker</surname> <given-names>B.</given-names></name><etal/></person-group> (<year>2018</year>). <article-title>Disease prediction using graph convolutional networks: Application to autism spectrum disorder and Alzheimer&#x2019;s disease.</article-title> <source><italic>Med. Image Anal.</italic></source> <volume>48</volume> <fpage>117</fpage>&#x2013;<lpage>130</lpage>. <pub-id pub-id-type="doi">10.1016/j.media.2018.06.001</pub-id> <pub-id pub-id-type="pmid">29890408</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Patel</surname> <given-names>P.</given-names></name> <name><surname>Aggarwal</surname> <given-names>P.</given-names></name> <name><surname>Gupta</surname> <given-names>A.</given-names></name></person-group> (<year>2016</year>). <article-title>Classification of schizophrenia versus normal subjects using deep learning.</article-title> <source><italic>ACM Int. Confer. Proc. Ser.</italic></source> <volume>212</volume> <fpage>186</fpage>&#x2013;<lpage>195</lpage>. <pub-id pub-id-type="doi">10.1145/3009977.3010050</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pedregosa</surname> <given-names>F.</given-names></name> <name><surname>Varoquaux</surname> <given-names>G.</given-names></name> <name><surname>Gramfort</surname> <given-names>A.</given-names></name> <name><surname>Michel</surname> <given-names>V.</given-names></name> <name><surname>Thirion</surname> <given-names>B.</given-names></name> <name><surname>Grisel</surname> <given-names>O.</given-names></name><etal/></person-group> (<year>2011</year>). <article-title>Scikit-learn: Machine learning in Python.</article-title> <source><italic>J. Mach. Learn. Res.</italic></source> <volume>12</volume> <fpage>2825</fpage>&#x2013;<lpage>2830</lpage>.</citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pennec</surname> <given-names>X.</given-names></name> <name><surname>Fillard</surname> <given-names>P.</given-names></name> <name><surname>Ayache</surname> <given-names>N.</given-names></name></person-group> (<year>2006</year>). <article-title>A riemannian framework for tensor computing.</article-title> <source><italic>Int. J. of Comp. Vis.</italic></source> <volume>66</volume> <fpage>41</fpage>&#x2013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1007/s11263-005-3222-z</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pereira</surname> <given-names>F.</given-names></name> <name><surname>Mitchell</surname> <given-names>T.</given-names></name> <name><surname>Botvinick</surname> <given-names>M.</given-names></name></person-group> (<year>2009</year>). <article-title>Machine learning classifiers and fMRI: A tutorial overview.</article-title> <source><italic>Neuroimage</italic></source> <volume>45(1 Suppl)</volume> <fpage>S199</fpage>&#x2013;<lpage>S209</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2008.11.007</pub-id> <pub-id pub-id-type="pmid">19070668</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perozzi</surname> <given-names>B.</given-names></name> <name><surname>Al-Rfou</surname> <given-names>R.</given-names></name> <name><surname>Skiena</surname> <given-names>S.</given-names></name></person-group> (<year>2014</year>). &#x201C;<article-title>Deepwalk: Online learning of social representations</article-title>,&#x201D; in <source><italic>Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining</italic></source>, (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>701</fpage>&#x2013;<lpage>710</lpage>.</citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ribeiro</surname> <given-names>L. F.</given-names></name> <name><surname>Saverese</surname> <given-names>P. H.</given-names></name> <name><surname>Figueiredo</surname> <given-names>D. R.</given-names></name></person-group> (<year>2017</year>). &#x201C;<article-title>Struc2vec: Learning node representations from structural identity</article-title>,&#x201D; in <source><italic>Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining</italic></source>, <publisher-loc>Halifax, NS</publisher-loc>, <fpage>385</fpage>&#x2013;<lpage>394</lpage>.</citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rosenthal</surname> <given-names>G.</given-names></name> <name><surname>V&#x00E1;&#x0161;a</surname> <given-names>F.</given-names></name> <name><surname>Griffa</surname> <given-names>A.</given-names></name> <name><surname>Hagmann</surname> <given-names>P.</given-names></name> <name><surname>Amico</surname> <given-names>E.</given-names></name> <name><surname>Go&#x00F1;i</surname> <given-names>J.</given-names></name><etal/></person-group> (<year>2018</year>). <article-title>Mapping higher-order relations between brain structure and function with embedded vector representations of connectomes.</article-title> <source><italic>Nature Commun.</italic></source> <volume>9</volume>:<issue>2178</issue>. <pub-id pub-id-type="doi">10.1038/s41467-018-04614-w</pub-id> <pub-id pub-id-type="pmid">29872218</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Saad</surname> <given-names>Z. S.</given-names></name> <name><surname>Glen</surname> <given-names>D. R.</given-names></name> <name><surname>Chen</surname> <given-names>G.</given-names></name> <name><surname>Beauchamp</surname> <given-names>M. S.</given-names></name> <name><surname>Desai</surname> <given-names>R.</given-names></name> <name><surname>Cox</surname> <given-names>R. W.</given-names></name></person-group> (<year>2009</year>). <article-title>A new method for improving functional-to-structural MRI alignment using local Pearson correlation.</article-title> <source><italic>Neuroimage</italic></source> <volume>44</volume> <fpage>839</fpage>&#x2013;<lpage>848</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2008.09.037</pub-id> <pub-id pub-id-type="pmid">18976717</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salimi-Khorshidi</surname> <given-names>G.</given-names></name> <name><surname>Douaud</surname> <given-names>G.</given-names></name> <name><surname>Beckmann</surname> <given-names>C. F.</given-names></name> <name><surname>Glasser</surname> <given-names>M. F.</given-names></name> <name><surname>Griffanti</surname> <given-names>L.</given-names></name> <name><surname>Smith</surname> <given-names>S. M.</given-names></name></person-group> (<year>2014</year>). <article-title>Automatic denoising of functional MRI data: Combining independent component analysis and hierarchical fusion of classifiers.</article-title> <source><italic>Neuroimage</italic></source> <volume>90</volume> <fpage>449</fpage>&#x2013;<lpage>468</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.11.046</pub-id> <pub-id pub-id-type="pmid">24389422</pub-id></citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sarraf</surname> <given-names>S.</given-names></name> <name><surname>DeSouza</surname> <given-names>D.</given-names></name> <name><surname>Anderson</surname> <given-names>J.</given-names></name> <name><surname>Tofighi</surname> <given-names>G.</given-names></name></person-group> (<year>2016</year>). <article-title>DeepAD: Alzheimer&#x2019;s disease classification via deep convolutional neural networks using MRI and fMRI.</article-title> <source><italic>bioRxiv</italic></source> [<comment>Preprint</comment>]. <pub-id pub-id-type="doi">10.1101/070441</pub-id></citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sharif</surname> <given-names>H.</given-names></name> <name><surname>Khan</surname> <given-names>R. A.</given-names></name></person-group> (<year>2021</year>). <article-title>A novel machine learning based framework for detection of autism spectrum disorder (ASD).</article-title> <source><italic>Appl. Artific. Intell.</italic></source> <volume>33</volume> <fpage>732</fpage>&#x2013;<lpage>746</lpage>. <pub-id pub-id-type="doi">10.1080/08839514.2021.2004655</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sherkatghanad</surname> <given-names>Z.</given-names></name> <name><surname>Akhondzadeh</surname> <given-names>M.</given-names></name> <name><surname>Salari</surname> <given-names>S.</given-names></name> <name><surname>Zomorodi-Moghadam</surname> <given-names>M.</given-names></name> <name><surname>Abdar</surname> <given-names>M.</given-names></name> <name><surname>Acharya</surname> <given-names>U. R.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>Automated detection of autism spectrum disorder using a convolutional neural network.</article-title> <source><italic>Front. Neurosci.</italic></source> <volume>13</volume>:<issue>1325</issue>. <pub-id pub-id-type="doi">10.3389/fnins.2019.01325</pub-id> <pub-id pub-id-type="pmid">32009868</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shervashidze</surname> <given-names>N.</given-names></name> <name><surname>Schweitzer</surname> <given-names>P.</given-names></name> <name><surname>van Leeuwen</surname> <given-names>E. J.</given-names></name> <name><surname>Mehlhorn</surname> <given-names>K.</given-names></name> <name><surname>Borgwardt</surname> <given-names>K. M.</given-names></name></person-group> (<year>2011</year>). <article-title>Weisfeiler-Lehman graph kernels.</article-title> <source><italic>J. Mach. Learn. Res.</italic></source> <volume>12</volume> <fpage>2539</fpage>&#x2013;<lpage>2561</lpage>.</citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Smith</surname> <given-names>S. M.</given-names></name> <name><surname>Fox</surname> <given-names>P. T.</given-names></name> <name><surname>Miller</surname> <given-names>K. L.</given-names></name> <name><surname>Glahn</surname> <given-names>D. C.</given-names></name> <name><surname>Fox</surname> <given-names>P. M.</given-names></name> <name><surname>Mackay</surname> <given-names>C. E.</given-names></name><etal/></person-group> (<year>2009</year>). <article-title>Correspondence of the brain&#x2019;s functional architecture during activation and rest.</article-title> <source><italic>Proc. Natl. Acad. Sci. U.S.A.</italic></source> <volume>106</volume> <fpage>13040</fpage>&#x2013;<lpage>13045</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.0905267106</pub-id> <pub-id pub-id-type="pmid">19620724</pub-id></citation></ref>
<ref id="B59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Suk</surname> <given-names>H.</given-names></name> <name><surname>Wee</surname> <given-names>C. Y.</given-names></name> <name><surname>Lee</surname> <given-names>S. W.</given-names></name> <name><surname>Shen</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>State-space model with deep learning for functional dynamics estimation in resting-state fMRI.</article-title> <source><italic>Neuroimage</italic></source> <volume>129</volume> <fpage>292</fpage>&#x2013;<lpage>307</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2016.01.005</pub-id> <pub-id pub-id-type="pmid">26774612</pub-id></citation></ref>
<ref id="B60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tixier</surname> <given-names>A. J.-P.</given-names></name> <name><surname>Nikolentzos</surname> <given-names>G.</given-names></name> <name><surname>Meladianos</surname> <given-names>P.</given-names></name> <name><surname>Vazirgiannis</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Graph classification with 2D convolutional neural networks</article-title>,&#x201D; in <source><italic>International conference on artificial neural networks</italic></source>, (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>578</fpage>&#x2013;<lpage>593</lpage>.</citation></ref>
<ref id="B61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tohka</surname> <given-names>J.</given-names></name> <name><surname>Foerde</surname> <given-names>K.</given-names></name> <name><surname>Aron</surname> <given-names>A. R.</given-names></name> <name><surname>Tom</surname> <given-names>S. M.</given-names></name> <name><surname>Toga</surname> <given-names>A. W.</given-names></name> <name><surname>Poldrack</surname> <given-names>R. A.</given-names></name></person-group> (<year>2008</year>). <article-title>Automatic independent component labeling for artifact removal in fMRI.</article-title> <source><italic>Neuroimage</italic></source> <volume>39</volume> <fpage>1227</fpage>&#x2013;<lpage>1245</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2007.10.013</pub-id> <pub-id pub-id-type="pmid">18042495</pub-id></citation></ref>
<ref id="B62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Varoquaux</surname> <given-names>G.</given-names></name> <name><surname>Craddock</surname> <given-names>R. C.</given-names></name></person-group> (<year>2013</year>). <article-title>Learning and comparing functional connectomes across subjects.</article-title> <source><italic>Neuroimage</italic></source> <volume>80</volume> <fpage>405</fpage>&#x2013;<lpage>415</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2013.04.007</pub-id> <pub-id pub-id-type="pmid">23583357</pub-id></citation></ref>
<ref id="B63"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Varoquaux</surname> <given-names>G.</given-names></name> <name><surname>Baronnet</surname> <given-names>F.</given-names></name> <name><surname>Kleinschmidt</surname> <given-names>A.</given-names></name> <name><surname>Fillard</surname> <given-names>P.</given-names></name> <name><surname>Thirion</surname> <given-names>B.</given-names></name></person-group> (<year>2010</year>). &#x201C;<article-title>Detection of brain functional-connectivity difference in post-stroke patients using group-level covariance modeling</article-title>,&#x201D; in <source><italic>Lecture notes in computer science (Including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics)</italic></source>, (<publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer Science+Business Media</publisher-name>). <pub-id pub-id-type="doi">10.1007/978-3-642-15705-9_25</pub-id></citation></ref>
<ref id="B64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vigneshwaran</surname> <given-names>S.</given-names></name> <name><surname>Mahanand</surname> <given-names>B. S.</given-names></name> <name><surname>Suresh</surname> <given-names>S.</given-names></name> <name><surname>Sundararajan</surname> <given-names>N.</given-names></name></person-group> (<year>2015</year>). &#x201C;<article-title>Using regional homogeneity from functional MRI for diagnosis of ASD among males</article-title>,&#x201D; in <source><italic>Proceedings of the 2015 international joint conference on neural networks (IJCNN)</italic></source>, (<publisher-loc>Killarney</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>8</lpage>.</citation></ref>
<ref id="B65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>D.</given-names></name> <name><surname>Cui</surname> <given-names>P.</given-names></name> <name><surname>Zhu</surname> <given-names>W.</given-names></name></person-group> (<year>2016</year>). &#x201C;<article-title>Structural deep network embedding</article-title>,&#x201D; in <source><italic>Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining</italic></source>, (<publisher-loc>San Francisco, CA</publisher-loc>: <publisher-name>ACM Press</publisher-name>). <pub-id pub-id-type="doi">10.1145/2939672.2939753</pub-id></citation></ref>
<ref id="B66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>G.</given-names></name> <name><surname>Gong</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Facial expression recognition based on improved LeNet-5 CNN</article-title>,&#x201D; in <source><italic>Proceedings of the 31st Chinese control and decision conference, CCDC 2019</italic></source>, (<publisher-loc>Nanchang</publisher-loc>). <pub-id pub-id-type="doi">10.1109/CCDC.2019.8832535</pub-id></citation></ref>
<ref id="B67"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wilson</surname> <given-names>J. D.</given-names></name> <name><surname>Baybay</surname> <given-names>M.</given-names></name> <name><surname>Sankar</surname> <given-names>R.</given-names></name> <name><surname>Stillman</surname> <given-names>P.</given-names></name></person-group> (<year>2018</year>). <article-title>Fast embedding of multilayer networks: An algorithm and application to group fMRI.</article-title> <source><italic>arXiv</italic></source> [<comment>Preprint</comment>].</citation></ref>
<ref id="B68"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xing</surname> <given-names>X.</given-names></name> <name><surname>Ji</surname> <given-names>J.</given-names></name> <name><surname>Yao</surname> <given-names>Y.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Convolutional neural network with element-wise filters to extract hierarchical topological features for brain networks</article-title>,&#x201D; in <source><italic>Proceedings of the 2018 IEEE international conference on bioinformatics and biomedicine, BIBM 2018</italic></source>, (<publisher-loc>Madrid</publisher-loc>: <publisher-name>IEEE</publisher-name>). <pub-id pub-id-type="doi">10.1109/BIBM.2018.8621472</pub-id></citation></ref>
<ref id="B69"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>X.</given-names></name> <name><surname>Islam</surname> <given-names>M. S.</given-names></name> <name><surname>Khaled</surname> <given-names>A. A.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Functional connectivity magnetic resonance imaging classification of autism spectrum disorder using the multisite ABIDE dataset</article-title>,&#x201D; in <source><italic>Proceedings of the 2019 IEEE EMBS International conference on biomedical and health informatics (BHI)</italic></source>, (<publisher-loc>Chicago, IL</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x2013;<lpage>4</lpage>.</citation></ref>
<ref id="B70"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>Z.</given-names></name> <name><surname>Ding</surname> <given-names>M.</given-names></name> <name><surname>Zhou</surname> <given-names>C.</given-names></name> <name><surname>Yang</surname> <given-names>H.</given-names></name> <name><surname>Zhou</surname> <given-names>J.</given-names></name> <name><surname>Tang</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). &#x201C;<article-title>Understanding negative sampling in graph representation learning</article-title>,&#x201D; in <source><italic>Proceedings of the ACM SIGKDD international conference on knowledge discovery and data mining.</italic></source> <pub-id pub-id-type="doi">10.1145/3394486.3403218</pub-id></citation></ref>
<ref id="B71"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zafar</surname> <given-names>R.</given-names></name> <name><surname>Kamel</surname> <given-names>N.</given-names></name> <name><surname>Naufal</surname> <given-names>M.</given-names></name> <name><surname>Malik</surname> <given-names>A. S.</given-names></name> <name><surname>Dass</surname> <given-names>S. C.</given-names></name> <name><surname>Ahmad</surname> <given-names>R. F.</given-names></name><etal/></person-group> (<year>2017</year>). <article-title>Decoding of visual activity patterns from fMRI responses using multivariate pattern analyses and convolutional neural network.</article-title> <source><italic>J. Integr. Neurosci.</italic></source> <volume>16</volume> <fpage>275</fpage>&#x2013;<lpage>289</lpage>. <pub-id pub-id-type="doi">10.3233/JIN-170016</pub-id> <pub-id pub-id-type="pmid">28891512</pub-id></citation></ref>
</ref-list>
</back>
</article>