<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Archiving and Interchange DTD v2.3 20070202//EN" "archivearticle.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="methods-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neuroinform.</journal-id>
<journal-title>Frontiers in Neuroinformatics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neuroinform.</abbrev-journal-title>
<issn pub-type="epub">1662-5196</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fninf.2020.563669</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Methods</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>NeuroRA: A Python Toolbox of Representational Analysis From Multi-Modal Neural Data</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Lu</surname> <given-names>Zitong</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/984036/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Ku</surname> <given-names>Yixuan</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/243154/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University</institution>, <addr-line>Guangzhou</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Peng Cheng Laboratory</institution>, <addr-line>Shenzhen</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>Shanghai Key Laboratory of Brain Functional Genomics, Shanghai Changning-East China Normal University (ECNU) Mental Health Center, School of Psychology and Cognitive Science, East China Normal University</institution>, <addr-line>Shanghai</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Mike Hawrylycz, Allen Institute for Brain Science, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Dan Zhang, Tsinghua University, China; Michael Denker, J&#x000FC;lich Research Centre, Germany; Cristiano K&#x000F6;hler, J&#x000FC;lich Research Centre, Germany</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Yixuan Ku <email>kuyixuan&#x00040;mail.sysu.edu.cn</email>; Researcher ID: D-4063-2018 <ext-link ext-link-type="uri" xlink:href="https://orcid.org/0000-0003-2804-5123">orcid.org/0000-0003-2804-5123</ext-link></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>23</day>
<month>12</month>
<year>2020</year>
</pub-date>
<pub-date pub-type="collection">
<year>2020</year>
</pub-date>
<volume>14</volume>
<elocation-id>563669</elocation-id>
<history>
<date date-type="received">
<day>19</day>
<month>05</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>03</day>
<month>12</month>
<year>2020</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2020 Lu and Ku.</copyright-statement>
<copyright-year>2020</copyright-year>
<copyright-holder>Lu and Ku</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license></permissions>
<abstract><p>In studies of cognitive neuroscience, multivariate pattern analysis (MVPA) is widely used as it offers richer information than traditional univariate analysis. Representational similarity analysis (RSA), as one method of MVPA, has become an effective decoding method based on neural data by calculating the similarity between different representations in the brain under different conditions. Moreover, RSA is suitable for researchers to compare data from different modalities and even bridge data from different species. However, previous toolboxes have been made to fit specific datasets. Here, we develop NeuroRA, a novel and easy-to-use toolbox for representational analysis. Our toolbox aims at conducting cross-modal data analysis from multi-modal neural data (e.g., EEG, MEG, fNIRS, fMRI, and other sources of neuroelectrophysiological data), behavioral data, and computer-simulated data. Compared with previous software packages, our toolbox is more comprehensive and powerful. Using NeuroRA, users can not only calculate the representational dissimilarity matrix (RDM), which reflects the representational similarity among different task conditions, but also conduct a representational analysis among different RDMs to achieve a cross-modal comparison. In addition, users can calculate neural pattern similarity (NPS), spatiotemporal pattern similarity (STPS), and inter-subject correlation (ISC) with this toolbox. NeuroRA also provides users with functions performing statistical analysis, storage, and visualization of results. We introduce the structure, modules, features, and algorithms of NeuroRA in this paper, as well as examples of applying the toolbox to published datasets.</p></abstract>
<kwd-group>
<kwd>representational similarity analysis (RSA)</kwd>
<kwd>multivariate pattern analysis</kwd>
<kwd>multi-modal</kwd>
<kwd>Python</kwd>
<kwd>correlation analysis</kwd>
</kwd-group>
<counts>
<fig-count count="8"/>
<table-count count="6"/>
<equation-count count="3"/>
<ref-count count="57"/>
<page-count count="15"/>
<word-count count="9319"/>
</counts>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Significance Statement</title>
<p>For the last two decades, neuroscience research has witnessed the prevalence of multivariate pattern analysis, in which representational similarity analysis (RSA) is one of the core methods. As representation bridges computation and implementation in David Marr&#x00027;s model, RSA bridges data from different modalities, including behavior, EEG, MEG, and fMRI, and even different species. Our toolbox, NeuroRA, is developed in Python and can be applied to multi-modal neural data, as well as behavioral and simulated data. By calculating the representational dissimilarity matrix, neural pattern similarity, spatiotemporal pattern similarity, and inter-subject correlation with NeuroRA, we can assess representational similarities across datasets, subjects, space, and time. Statistical results can be thresholded by a user-defined value and output to a data format that can be opened in other toolboxes.</p></sec>
<sec sec-type="intro" id="s2">
<title>Introduction</title>
<p>In recent years, research on brain science based on neural data has shifted from univariate analysis toward multivariate pattern analysis (MVPA) (Norman et al., <xref ref-type="bibr" rid="B39">2006</xref>). In contrast to the former, the latter accounts for the population coding of neurons. The decoding of neural activity can help scientists better understand the encoding process of neurons. As in David Marr&#x00027;s model, representation bridges the gap between a computational goal and implementation machinery (Marr, <xref ref-type="bibr" rid="B36">1982</xref>). Representational similarity analysis (RSA) (Kriegeskorte et al., <xref ref-type="bibr" rid="B31">2008</xref>) is an effective MVPA method that can successfully describe the relationship between representations of different data modalities, bridging gaps between humans and animals. Therefore, RSA has been rapidly applied in investigating various cognitive functions, including perception (Evans and Davis, <xref ref-type="bibr" rid="B13">2015</xref>; Henriksson et al., <xref ref-type="bibr" rid="B23">2019</xref>), memory (Xue et al., <xref ref-type="bibr" rid="B54">2010</xref>), language (Chen et al., <xref ref-type="bibr" rid="B6">2016</xref>), and decision-making (Yan et al., <xref ref-type="bibr" rid="B56">2016</xref>).</p>
<p>With technological development in brain science, various neural recording methods have emerged rapidly. Noninvasive methods that investigate brain activity such as electroencephalography (EEG), magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRS) have been widely used for basic research. Meanwhile, invasive techniques such as electrocorticography (ECoG), stereo-electro-encephalography (sEEG), and some other neuroelectrophysiological methods have been applied to humans, non-human primates, and other animal species. Interpreting results across these different recording modalities, however, is difficult. The RSA method uses a representational dissimilarity matrix (RDM) to bridge data from different modalities. For example, studies have attempted to combine fMRI results with electrophysiological results (Kriegeskorte et al., <xref ref-type="bibr" rid="B31">2008</xref>; Muukkonen et al., <xref ref-type="bibr" rid="B37">2020</xref>), MEG results with electrophysiological results (Cichy et al., <xref ref-type="bibr" rid="B8">2014</xref>), or behavioral results with fMRI results (Wang et al., <xref ref-type="bibr" rid="B52">2018</xref>). 
Furthermore, with the rapid development of artificial intelligence (AI) (Jordan and Mitchell, <xref ref-type="bibr" rid="B26">2015</xref>; Kriegeskorte and Golan, <xref ref-type="bibr" rid="B30">2019</xref>), RSA can be used to compare representations in artificial neural networks (ANNs) with brain activities (Khaligh-Razavi and Kriegeskorte, <xref ref-type="bibr" rid="B27">2014</xref>; Yamins et al., <xref ref-type="bibr" rid="B55">2014</xref>; G&#x000FC;&#x000E7;l&#x000FC; and van Gerven, <xref ref-type="bibr" rid="B17">2015</xref>; Eickenberg et al., <xref ref-type="bibr" rid="B11">2017</xref>; Bonner and Epstein, <xref ref-type="bibr" rid="B3">2018</xref>; Greene and Hansen, <xref ref-type="bibr" rid="B16">2018</xref>; Kuzovkin et al., <xref ref-type="bibr" rid="B32">2018</xref>; Urgen et al., <xref ref-type="bibr" rid="B49">2019</xref>). In summary, RSA is a useful tool to combine behavioral results with multi-modal neural data, leading to a better understanding of the brain. Even further, it can help us establish a clearer link between the brain and artificial intelligence.</p>
<p>Other existing tools for RSA include a module in PyMVPA (Hanke et al., <xref ref-type="bibr" rid="B19">2009</xref>), a toolbox for RSA by Kriegeskorte (Nili et al., <xref ref-type="bibr" rid="B38">2014</xref>), and an example in MNE-Python (Gramfort et al., <xref ref-type="bibr" rid="B15">2013</xref>). However, they all have some shortcomings. MNE-Python performs RSA for MEG and EEG data in only a single example. PyMVPA supports only basic functions, such as calculating the correlation coefficient and data conversion. Kriegeskorte&#x00027;s toolbox attached to their paper is designed mainly for fMRI data, and users need to be proficient in MATLAB (Nili et al., <xref ref-type="bibr" rid="B38">2014</xref>), which makes it difficult to apply to other datasets from EEG, MEG, or other data sources. We therefore set out to develop a comprehensive and universal toolbox for RSA, and Python was chosen as a suitable programming language. Python is a rapidly rising programming language with significant advantages for scientific computing (Sanner, <xref ref-type="bibr" rid="B46">1999</xref>; Koepke, <xref ref-type="bibr" rid="B28">2011</xref>). Because of its strong extensibility, Python is convenient for implementing a toolbox for representational analysis. NumPy (van der Walt et al., <xref ref-type="bibr" rid="B50">2011</xref>), Scikit-learn (Pedregosa et al., <xref ref-type="bibr" rid="B40">2011</xref>), and other extensions can execute and simplify various basic computing functions. Thus, researchers have selected Python to develop toolkits in psychology and neuroscience, such as PsychoPy (Peirce, <xref ref-type="bibr" rid="B41">2007</xref>) for designing psychological experiments, MNE-Python for EEG/MEG data analysis, and PyMVPA for applying MVPA to data from different modalities.</p>
<p>We have developed a novel and easy-to-use Python toolbox, NeuroRA (neural representational analysis), for comprehensive representational analysis. NeuroRA aims to use the powerful computational resources of Python to conduct cross-modal data analyses for various types of neural data (e.g., EEG, MEG, fNIRS, fMRI, and other sources of neuroelectrophysiological data), as well as behavioral data and computer simulation data. In addition to the traditional functions of RSA, NeuroRA also includes specialized forms of representational analysis published by different research groups. These include neural pattern similarity (NPS) (Haxby, <xref ref-type="bibr" rid="B21">2001</xref>; Cavanagh et al., <xref ref-type="bibr" rid="B5">2018</xref>), spatiotemporal pattern similarity (STPS) (Xue et al., <xref ref-type="bibr" rid="B54">2010</xref>; Lu et al., <xref ref-type="bibr" rid="B34">2015</xref>), and inter-subject correlation (ISC) (Hasson et al., <xref ref-type="bibr" rid="B20">2004</xref>). In the following sections, we detail the structure and functions of NeuroRA and apply it to several open datasets to guide users in using NeuroRA.</p></sec>
<sec id="s3">
<title>Overview of Neurora</title>
<p>NeuroRA is an easy-to-use Python toolbox of representational analysis from multi-modal neural data. Users can apply NeuroRA to track the representation and compare representational similarity among different task conditions and modalities.</p>
<p>The structure and features of NeuroRA are illustrated in <xref ref-type="fig" rid="F1">Figure 1</xref>. It can analyze all types of neural (including EEG, MEG, fNIRS, fMRI, and other sources of neuroelectrophysiological data) and behavioral data. By utilizing the powerful computational toolbox in Python, NeuroRA gives users the ability to mine neural data thoroughly and efficiently.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Overview of NeuroRA. NeuroRA is a Python-based toolbox and requires some extension packages, including NumPy, SciPy, Matplotlib, Nilearn, and MNE-Python. It contains several main parts: calculating neural pattern similarity (NPS), spatiotemporal pattern similarity (STPS), inter-subject correlation (ISC), and representation dissimilarity matrix (RDM), comparing representations among different modalities using RDMs, statistical analysis, saving results as a NIfTI file for fMRI data, and plotting the results. Each calculation part corresponds to one to two modules. The blue arrows indicate the feasible data flow.</p></caption>
<graphic xlink:href="fninf-14-563669-g0001.tif"/>
</fig>
<p>NeuroRA provides abundant functions. First, the NPS module reflects the correlation between brain activities induced under two different conditions. Second, the STPS module reflects the representational similarity across spatial locations and time points. Third, the ISC module reflects the similarity of brain activities among multiple subjects under the same condition. Fourth, the RDM module reflects the representational similarity between different conditions or stimuli based on neural data from a given modality. Fifth, NeuroRA performs a correlation analysis between RDMs from different modalities to compare representations across modalities. This procedure can be configured with different parameters; for example, the calculation can be applied for each subject, each channel, each time point, or a combination of all of them.</p>
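<p>As an illustration of the first quantity above, the following NumPy sketch computes a neural pattern similarity as the Pearson correlation between the activity patterns evoked by two conditions. The shapes and variable names are assumptions for the example, not the NeuroRA API:</p>

```python
import numpy as np

# Illustrative sketch of what the NPS module measures: the Pearson
# correlation between two conditions' activity patterns.
# (Hypothetical shapes; not NeuroRA source code.)
rng = np.random.default_rng(0)
pattern_a = rng.random(64)  # e.g., 64 channels under condition A
pattern_b = rng.random(64)  # the same 64 channels under condition B

# A single NPS value in [-1, 1]
nps = np.corrcoef(pattern_a, pattern_b)[0, 1]
```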
<p>In addition to calculating the above values, NeuroRA provides a statistical module to perform statistical analysis on those values and a visualization module to plot the results, such as RDMs, representational similarities over time, and RSA results for fMRI. NeuroRA also provides a unique approach to save representational analysis results back to the widely used fMRI format NIfTI, generating a file thresholded at a user-defined value.</p>
<p>The required packages for NeuroRA include NumPy, SciPy (Virtanen et al., <xref ref-type="bibr" rid="B51">2020</xref>), Matplotlib (Hunter, <xref ref-type="bibr" rid="B24">2007</xref>), NiBabel (Brett et al., <xref ref-type="bibr" rid="B4">2016</xref>), Nilearn, and MNE-Python, which are checked and automatically installed along with NeuroRA. NumPy assists with matrix-based computation. SciPy helps with fundamental statistical analysis. Matplotlib and Nilearn are employed for the plotting functions. NiBabel is used to read and generate NIfTI files. MNE-Python is applied to load example MEG data in the demo. Users can install NeuroRA with a single command: <italic>pip install neurora</italic>. The website for our toolbox is <ext-link ext-link-type="uri" xlink:href="https://neurora.github.io/NeuroRA/">https://neurora.github.io/NeuroRA/</ext-link>, and the website for online API documentation is <ext-link ext-link-type="uri" xlink:href="https://neurora.github.io/documentation/">https://neurora.github.io/documentation/</ext-link>. Additionally, the GitHub URL for its source code is <ext-link ext-link-type="uri" xlink:href="https://github.com/neurora/NeuroRA">https://github.com/neurora/NeuroRA</ext-link>.</p></sec>
<sec id="s4">
<title>Data Structures in Neurora</title>
<p>The calculations in NeuroRA are all based on multidimensional matrices, including deformation, transposition, decomposition, standardization, addition, and subtraction. The data type in NeuroRA is <italic>ndarray</italic>, an N-dimensional array class of NumPy. Therefore, users first convert their neural data into a matrix (<italic>ndarray</italic> type) as the input of NeuroRA, with information on the different dimensions of the matrix, such as the number of subjects, number of conditions, number of channels, and size of the image (see instructions in the software for details). Here, we give users some feasible methods for data conversion for different kinds of neural data in <xref ref-type="supplementary-material" rid="SM4">Supplemental Instructions for Data Conversion</xref>. The outputs of the functions in NeuroRA are square matrices with the same dimensions as the input matrix. The input and output data structures of key functions for calculation and statistical analysis in NeuroRA are shown in <xref ref-type="supplementary-material" rid="SM1">Supplementary Tables 1</xref>, <xref ref-type="supplementary-material" rid="SM2">2</xref>.</p></sec>
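<p>A minimal sketch of preparing such input follows: neural data are arranged in a NumPy <italic>ndarray</italic> whose dimensions carry the experimental structure. The dimension order below is an illustrative assumption; the exact order each NeuroRA function expects is given in its documentation:</p>

```python
import numpy as np

# Hedged sketch of data conversion for NeuroRA: the input is an ndarray
# whose axes encode the experimental structure. This particular axis
# order is an assumption for illustration only.
n_cons, n_subs, n_trials, n_chls, n_ts = 8, 10, 40, 32, 100

# Simulated EEG-like data: [conditions, subjects, trials, channels, time]
data = np.random.rand(n_cons, n_subs, n_trials, n_chls, n_ts)

# A typical pre-step: average across trials before computing similarities
data_avg = data.mean(axis=2)  # -> [conditions, subjects, channels, time]
```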
<sec id="s5">
<title>Neurora&#x00027;s Modules and Features</title>
<p>NeuroRA provides various functions to perform representational analysis. Such data usually must be processed in multiple steps, and this toolkit highly integrates these intermediate processes, making them easy to implement. In NeuroRA, only a single function call is required to complete an analysis. Users can obtain the required results after a necessary conversion of the data format.</p>
<p>Meanwhile, we attempt to add some adjustable parameters to meet the calculation requirements for different experiments and different modalities of data. Users can flexibly change the input parameters in the function to match their data format and computing goals.</p>
<p>NeuroRA includes the following core modules, and more modules could be added in the future or as requested.</p>
<p><italic>nps_cal</italic>: A module to calculate the neural pattern similarity based on neural data.</p>
<p><italic>stps_cal</italic>: A module to calculate the spatiotemporal pattern similarity based on neural data.</p>
<p><italic>isc_cal</italic>: A module to calculate the inter-subject correlation based on neural data.</p>
<p><italic>rdm_cal</italic>: A module to calculate RDMs based on multi-modal neural data.</p>
<p><italic>rdm_corr</italic>: A module to calculate the correlation coefficient between two RDMs, based on different algorithms, including Pearson correlation, Spearman correlation, Kendall&#x00027;s tau correlation, cosine similarity, and Euclidean distance.</p>
<p><italic>corr_cal_by_rdm</italic>: A module to calculate the representational similarities among the RDMs under different modes.</p>
<p><italic>corr_cal</italic>: A module to conduct a one-step RSA between two different modes of data.</p>
<p><italic>nii_save</italic>: A module to save the representational analysis results in a .nii file for fMRI.</p>
<p><italic>stats_cal</italic>: A module to calculate the statistical results.</p>
<p><italic>rsa_plot</italic>: A module to plot the results from the representational analysis. It contains functions for plotting RDMs, plotting graphs or heatmaps of representational analysis results over time based on EEG or EEG-like (such as MEG) data, and plotting the results of fMRI representational analysis (montage images and surface images).</p></sec>
<sec id="s6">
<title>Representational Similarity Analysis Using Neurora</title>
<p>RSA has gradually become a popular method to explore information coding in the brain and in computational models. By comparing the dissimilarities among all task conditions in an RDM, RSA provides an effective approach to track multidimensional representations across task conditions. On the one hand, researchers can construct hypothesis-based RDMs for different conditions and then compare these theoretical models with RDMs from real neural activities to quantify their similarity (Alfred et al., <xref ref-type="bibr" rid="B1">2018</xref>; Feng et al., <xref ref-type="bibr" rid="B14">2018</xref>; Hall-McMaster et al., <xref ref-type="bibr" rid="B18">2019</xref>; Yokoi and Diedrichsen, <xref ref-type="bibr" rid="B57">2019</xref>; Etzel et al., <xref ref-type="bibr" rid="B12">2020</xref>). As a result, they can infer how information is coded in the brain. On the other hand, researchers can compare differences and similarities among multi-modal data by computing the distance or correlation among RDMs derived from different data sources (Kriegeskorte et al., <xref ref-type="bibr" rid="B31">2008</xref>; Cichy et al., <xref ref-type="bibr" rid="B8">2014</xref>; Stolier and Freeman, <xref ref-type="bibr" rid="B47">2016</xref>; Muukkonen et al., <xref ref-type="bibr" rid="B37">2020</xref>). This cross-modal calculation has been increasingly used to compare brain activities with deep neural networks during object processing (Khaligh-Razavi and Kriegeskorte, <xref ref-type="bibr" rid="B27">2014</xref>; Yamins et al., <xref ref-type="bibr" rid="B55">2014</xref>; G&#x000FC;&#x000E7;l&#x000FC; and van Gerven, <xref ref-type="bibr" rid="B17">2015</xref>; Eickenberg et al., <xref ref-type="bibr" rid="B11">2017</xref>; Bonner and Epstein, <xref ref-type="bibr" rid="B3">2018</xref>; Greene and Hansen, <xref ref-type="bibr" rid="B16">2018</xref>; Kuzovkin et al., <xref ref-type="bibr" rid="B32">2018</xref>; Urgen et al., <xref ref-type="bibr" rid="B49">2019</xref>).</p>
<sec>
<title>Calculate One RDM or Multiple RDMs</title>
<p>Constructing an RDM is a typical approach for comparing representations in neural data. By extracting data from two different conditions and calculating the correlation between them, we obtain the similarity between the two representations. Subtracting this similarity index from 1 gives the dissimilarity index in the RDM (<xref ref-type="fig" rid="F2">Figure 2</xref>). In <xref ref-type="fig" rid="F2">Figure 2</xref>, different grating stimuli produce different neural responses, and each value in the RDM represents the dissimilarity of neural activities between two different stimuli. As shown in the figure, the closer the two grating orientations are, the lower the dissimilarity. In a typical object recognition experiment, humans and monkeys viewed the same sets of visual stimuli (Kriegeskorte, <xref ref-type="bibr" rid="B29">2008</xref>). Researchers calculated the humans&#x00027; RDM from fMRI data and the monkeys&#x00027; RDM from electrophysiological data. The results indicated that the neural patterns in the RDMs were similar when humans and monkeys observed stimuli belonging to the same category (animate or inanimate).</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Schematic diagram for calculating the RDM. Different data can be obtained under different conditions. The value of the point [<italic>i, j</italic>] in RDM is obtained by calculating the dissimilarity (1-correlation coefficient <italic>r</italic>) between the data under condition <italic>i</italic> and that under condition <italic>j</italic>.</p></caption>
<graphic xlink:href="fninf-14-563669-g0002.tif"/>
</fig>
<p>Assuming that the measured data under a total of <italic>n</italic> experimental conditions are denoted as <italic>d</italic><sub>1</sub>, <italic>d</italic><sub>2</sub>, &#x02026;, <italic>d</italic><sub><italic>n</italic></sub>, the following <italic>n</italic>&#x000D7;<italic>n</italic> RDM can be calculated by the corresponding function in the rdm_cal module of our toolkit:</p>
<disp-formula id="E1"><mml:math id="M1"><mml:mi>R</mml:mi><mml:mi>D</mml:mi><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mo stretchy="true">(</mml:mo><mml:mtable><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>11</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>12</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mo>&#x022EF;</mml:mo></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mn>1</mml:mn><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mo>&#x022EE;</mml:mo></mml:mtd><mml:mtd><mml:mo>&#x022EE;</mml:mo></mml:mtd><mml:mtd><mml:mo>&#x022F1;</mml:mo></mml:mtd><mml:mtd><mml:mo>&#x022EE;</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:mtd><mml:mtd><mml:mo>&#x022EF;</mml:mo></mml:mtd><mml:mtd><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mi>n</mml:mi><mml:mi>n</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable><mml:mo stretchy="true">)</mml:mo></mml:mrow></mml:math></disp-formula>
<p>where <italic>D</italic><sub><italic>ij</italic></sub> denotes the dissimilarity between the data under condition <italic>i</italic> and that under condition <italic>j</italic>. The dissimilarity (<italic>D</italic><sub><italic>ij</italic></sub>) is calculated as follows:</p>
<disp-formula id="E2"><mml:math id="M2"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:msub><mml:mrow><mml:mi>D</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mi>j</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>-</mml:mo><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mrow><mml:mi>d</mml:mi></mml:mrow><mml:mrow><mml:mi>j</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>When computing the RDM, multiple measures are provided in NeuroRA, including correlation distance (based on Pearson correlation), Euclidean distance, and Mahalanobis distance. All functions in the <italic>neurora.rdm_cal</italic> module have a parameter named <italic>method</italic>, which can be set to select the desired measure (the default is correlation distance). The application of RDM calculation is not restricted to a single modality: NeuroRA can perform computations on multi-modal data, from behavioral data to brain imaging data (<xref ref-type="fig" rid="F3">Figure 3</xref>).</p>
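<p>The computation described by the equations above can be sketched in NumPy as follows. This is an illustrative re-implementation of <italic>D</italic><sub><italic>ij</italic></sub> = 1 &#x02212; similarity(<italic>d</italic><sub><italic>i</italic></sub>, <italic>d</italic><sub><italic>j</italic></sub>), not the NeuroRA source; the function name and shapes are assumptions:</p>

```python
import numpy as np

def compute_rdm(data, method="correlation"):
    """Illustrative sketch of what the rdm_cal module computes:
    an n_conditions x n_conditions RDM with D_ij = 1 - similarity(d_i, d_j).

    data: ndarray [n_conditions, n_features], one pattern per condition
    (features could be channels, voxels, or time points).
    """
    n = data.shape[0]
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if method == "correlation":
                # correlation distance: 1 - Pearson r (the default measure)
                rdm[i, j] = 1 - np.corrcoef(data[i], data[j])[0, 1]
            elif method == "euclidean":
                rdm[i, j] = np.linalg.norm(data[i] - data[j])
    return rdm

patterns = np.random.rand(5, 100)  # 5 conditions x 100 features
rdm = compute_rdm(patterns)        # 5 x 5, symmetric, ~0 on the diagonal
```

The resulting matrix is symmetric with a near-zero diagonal, since each condition is perfectly similar to itself.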
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>RDM calculations implemented in NeuroRA. NeuroRA is capable of calculating an RDM using data from different modes, including behavior, fMRI, EEG, MEG, fNIRS, and other sources of neuroelectrophysiological data. The red bold lines denote the ability to perform calculations between two modes. The pink arrow denotes the alternative calculation methods of the corresponding mode.</p></caption>
<graphic xlink:href="fninf-14-563669-g0003.tif"/>
</fig>
<p>In certain cases, researchers must calculate an RDM separately for each participant, or calculate RDMs independently for each channel or each time point (Hall-McMaster et al., <xref ref-type="bibr" rid="B18">2019</xref>; Henriksson et al., <xref ref-type="bibr" rid="B23">2019</xref>). NeuroRA resolves these cases by providing several input parameters that allow users to obtain one RDM or multiple RDMs by subject, channel, time-window, searchlight (for fMRI), or specific ROI (for fMRI) (<xref ref-type="fig" rid="F3">Figure 3</xref>). Users can change the calculation parameters according to their requirements, and the corresponding output formats change accordingly. Detailed instructions on the shape of the input, the parameter settings, the corresponding shape of the output, and recommended next steps can be found in <xref ref-type="supplementary-material" rid="SM1">Supplementary Table 1</xref>.</p></sec>
<sec>
<title>Representational Analysis Among Different RDMs</title>
<sec>
<title>Analysis Between Two RDMs</title>
<p>NeuroRA provides a convenient way to calculate cross-modal similarity by computing the similarities between two RDMs from different modalities. We offer several solutions to calculate the similarity (or correlation coefficient), including Pearson correlation, Spearman correlation, Kendall&#x00027;s tau correlation, cosine similarity, and Euclidean distance. Users can freely change parameters to select different computing methods.</p>
<p>For the calculations, we first reshape the square matrices into vectors and then calculate similarities (<xref ref-type="fig" rid="F4">Figure 4</xref>). Previous studies that calculated the correlation coefficient between two RDMs including the diagonal values obtained deceptively high results (Ritchie et al., <xref ref-type="bibr" rid="B43">2017</xref>). We avoid this by removing the diagonal values and including only half of the matrix to reduce duplication, as the upper and lower triangles of the RDM are identical (<xref ref-type="fig" rid="F4">Figure 4</xref>).</p>
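Steps 2&#x02013;4 above can be sketched in a few lines: keep only the cells strictly above the diagonal of each RDM, spread them as vectors, and correlate the two vectors (here with Pearson correlation; the RDMs are random toy data, and this is not NeuroRA's implementation):

```python
import numpy as np

# Build two symmetric toy RDMs with zero diagonals
rng = np.random.default_rng(1)
n = 6
rdm_a = rng.random((n, n))
rdm_a = (rdm_a + rdm_a.T) / 2          # symmetrize like a real RDM
np.fill_diagonal(rdm_a, 0)
rdm_b = rdm_a + 0.3 * rng.random((n, n))
rdm_b = (rdm_b + rdm_b.T) / 2
np.fill_diagonal(rdm_b, 0)

# Indices strictly above the diagonal: excludes the deceptive diagonal
# and avoids duplicating the identical lower triangle
iu = np.triu_indices(n, k=1)
vec_a, vec_b = rdm_a[iu], rdm_b[iu]    # n*(n-1)/2 = 15 unique values each

r = np.corrcoef(vec_a, vec_b)[0, 1]
```

For an n-condition RDM this yields vectors of length n(n-1)/2, on which any of the similarity measures listed above can be computed.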
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Schematic diagram for calculation between two RDMs. Step 1: Obtain two RDMs from different modal data. Step 2: Extract the points of the upper diagonal (within the gray line). Step 3: Spread them as vectors. Step 4: Calculate the correlation coefficient or conduct a permutation test between two vectors. The former returns the correlation coefficient and the <italic>p</italic>-value, and the latter returns only the <italic>p</italic>-value.</p></caption>
<graphic xlink:href="fninf-14-563669-g0004.tif"/>
</fig>
<p>Furthermore, NeuroRA provides a permutation test to determine whether the two RDMs are related. The permutation test is based on random shuffling of the data and is suitable for data with a small sample size (Efron and Tibshirani, <xref ref-type="bibr" rid="B10">1994</xref>). We first shuffle the values in the two RDMs and re-calculate the similarity between them. Repeating this procedure 5,000 times (the number of iterations can be defined by users) yields a permutation distribution, from which we obtain the final <italic>p</italic>-value.</p></sec>
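The permutation procedure just described can be sketched as follows (a hedged illustration with made-up vectors and names, not NeuroRA's exact code): shuffle one RDM vector, recompute the similarity, and compare the observed value against the resulting null distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
vec_a = rng.random(15)                            # one vectorized RDM
vec_b = vec_a + 0.1 * rng.standard_normal(15)     # a correlated second RDM

def perm_test(x, y, n_iter=5000, rng=rng):
    # observed similarity (Pearson r) between the two RDM vectors
    observed = np.corrcoef(x, y)[0, 1]
    null = np.empty(n_iter)
    for i in range(n_iter):
        # shuffle one vector and re-calculate the similarity
        null[i] = np.corrcoef(x, rng.permutation(y))[0, 1]
    # p-value: proportion of shuffled similarities at least as large
    return observed, np.mean(null >= observed)

r_obs, p_value = perm_test(vec_a, vec_b, n_iter=1000)
```

The number of iterations (5,000 by default in the text) trades precision of the p-value against computation time.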
<sec>
<title>Analysis Among Multiple RDMs</title>
<p>NeuroRA can also perform calculations based on multiple RDMs, rather than only two RDMs corresponding to two modalities. Consequently, the analysis can be extended to conditions with multiple RDMs from different modalities. For instance, when you obtain a behavioral RDM from behavioral data and wish to compare it with other modal data, a problem may arise, as more than one RDM can be obtained from the other modal data, such as EEG or fMRI. Our toolbox provides &#x0201C;searchlight&#x0201D; computation to perform correlation analysis between the RDM from one mode (behavioral or any neural data) and the RDMs from another mode (each brain region, time interval, or others) one by one (<xref ref-type="fig" rid="F3">Figure 3</xref>). For example, calculations based on EEG data can yield one RDM per channel, per time interval, or both (<xref ref-type="fig" rid="F5">Figure 5</xref>). <xref ref-type="table" rid="T1">Table 1</xref> is a script example of using NeuroRA to calculate the similarities between a behavioral RDM and per-channel EEG RDMs over the time sequence. Another simple example: when users want to see which brain regions are highly correlated with behavioral performance or a specific coding model, they can obtain one behavioral or model RDM based on response time or accuracy, along with many fMRI RDMs from different regions. Users can pass these two kinds of RDMs (the behavioral or model RDM and the fMRI RDMs) into our function and obtain the regions that are highly correlated with the behavioral or model patterns, based on thresholds of significance (<italic>p</italic>-value) or correlation values set by users (<bold>Table 3</bold>; more details on the fMRI calculation are described in the next section). These convenient traversal functions cover the vast majority of cross-modal research needs.</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Schematic diagram for calculating similarities between RDMs from different modes across time and channels for EEG and EEG-like (such as MEG or sEEG) data. NeuroRA calculates the similarities between RDMs for mode A (EEG and EEG-like data) and one RDM for mode B (such as behavior). Such calculation can be performed for each time-window and each channel. Each value in the time-channel result image (bottom right) corresponds to a similarity index (for example, the Pearson correlation) between RDMs from the two modes.</p></caption>
<graphic xlink:href="fninf-14-563669-g0005.tif"/>
</fig>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Scripts of representational analysis between behavioral data and EEG data for each channel in NeuroRA.</p></caption>
<table frame="hsides" rules="groups">
<tbody><tr>
<td valign="top" align="left">Scheme 1</td>
<td valign="top" align="center">1</td>
<td valign="top" align="left">from neurora.rdm_cal import bhvRDM, eegRDM</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">2</td>
<td valign="top" align="left">from neurora.corr_cal_by_rdm import rdms_corr</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">3</td>
<td valign="top" align="left">import numpy as np</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">4</td>
<td valign="top" align="left">&#x00023; calculate the behavioral RDM for each subject</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">5</td>
<td valign="top" align="left">&#x00023; the shape of bhv_data should be [n_conditions, n_subjects, n_trials]</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">6</td>
<td valign="top" align="left">&#x00023; the shape of bhv_rdms will be [n_subjects, n_conditions, n_conditions]</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">7</td>
<td valign="top" align="left">bhv_rdms = bhvRDM(bhv_data, sub_opt=1)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">8</td>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="center">9</td>
<td valign="top" align="left">&#x00023; calculate the eeg RDMs for each channel &#x00026; each subject</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">10</td>
<td valign="top" align="left">&#x00023; the shape of eeg_data should be [n_conditions, n_subjects, n_trials, n_channels, n_times]</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">11</td>
<td valign="top" align="left">&#x00023; the shape of eeg_rdms will be [n_subjects, n_channels, n_conditions, n_conditions]</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">12</td>
<td valign="top" align="left">eeg_rdms = eegRDM(eeg_data, sub_opt=1, chl_opt=1)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">13</td>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="center">14</td>
<td valign="top" align="left">&#x00023; initialize the correlation coefficients</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">15</td>
<td valign="top" align="left">corrs = np.zeros([n_subjects, n_channels, 2], dtype=float)</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">16</td>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="center">17</td>
<td valign="top" align="left">&#x00023; calculate the correlation coefficients between behavioral RDM and eeg RDMs</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">18</td>
<td valign="top" align="left">&#x00023; the shape of corrs is [n_subjects, n_channels, 2], 2 represents a <italic>r</italic>-value &#x00026; a <italic>p</italic>-value</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">19</td>
<td valign="top" align="left">for sub in range(n_subjects):</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">20</td>
<td valign="top" align="left">&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;corrs[sub] = rdms_corr(bhv_rdms[sub], eeg_rdms[sub])</td>
</tr>
<tr>
<td valign="top" align="left">Scheme 2</td>
<td valign="top" align="center">21</td>
<td valign="top" align="left">from neurora.corr_cal import bhvANDeeg_corr</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">22</td>
<td/>
</tr>
<tr>
<td/>
<td valign="top" align="center">23</td>
<td valign="top" align="left">&#x00023; calculate the correlation coefficients between behavioral RDM and eeg RDMs</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">24</td>
<td valign="top" align="left">corrs = bhvANDeeg_corr(bhv_data, eeg_data, sub_opt=1, chl_opt=1)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Users can input data from different modes and obtain the correlation between the results of the two modes. If users want to conduct the calculation for each time-window, they can set the parameters time_opt, time_win &#x00026; time_step in the functions eegRDM and bhvANDeeg_corr</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>To simplify the users&#x00027; experience, our toolbox offers a one-step option between different modes (<xref ref-type="table" rid="T1">Table 1</xref> Scheme 2 is a one-step example for calculating a similarity index between behavior and EEG). Users can input data from two modalities, and the toolbox will directly return the final results of the representational analysis. This is convenient and efficient when users do not need the RDMs from the intermediate stages. Thus, users can use two modules, <italic>corr_cal</italic> and <italic>corr_cal_by_rdm</italic>, to calculate the representational similarity between two different modalities. The former provides the calculation based on raw data from the two modalities; the latter provides the calculation based on RDMs already computed from the two modalities&#x00027; data. In both modules for calculating cross-modal similarity, users can set different parameters to meet the requirements under different conditions (calculating for each channel, etc.). More detailed instructions on the shape of the input, the parameter settings with calculation implementation, the corresponding shape of the output, and recommended next steps for these modules are shown in <xref ref-type="supplementary-material" rid="SM1">Supplementary Table 1</xref>.</p>
</sec></sec>
<sec>
<title>Representational Analysis for fMRI</title>
<p>fMRI is a widely used method in cognitive neuroscience. In the RSA of fMRI data (Johnson et al., <xref ref-type="bibr" rid="B25">2005</xref>; Poldrack, <xref ref-type="bibr" rid="B42">2012</xref>; Rosen and Savoy, <xref ref-type="bibr" rid="B44">2012</xref>; Lawrence et al., <xref ref-type="bibr" rid="B33">2019</xref>), researchers typically wish to calculate RDMs for different brain regions. In NeuroRA, users can conduct representational analysis using ROIs or a searchlight across the whole brain (<xref ref-type="fig" rid="F6">Figure 6</xref>).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Schematic diagram for representational analysis of fMRI data using NeuroRA. <bold>(A)</bold> The calculation process for ROI-based analysis. For each ROI, users can calculate the RDM based on the voxels in the ROI and obtain the similarity between the ROI RDM and the RDM for other modes. <bold>(B)</bold> The calculation process for searchlight-based analysis. For each searchlight step, users define the size and strides of the calculation unit. After computations between the RDMs within the searchlight blob for fMRI and the RDM for other modes (e.g., behavioral data, computer-simulated data), a NIfTI file can be obtained. At the bottom right is a demo of the resulting NIfTI file drawn with NeuroELF (<ext-link ext-link-type="uri" xlink:href="http://neuroelf.net">http://neuroelf.net</ext-link>), and color-coded regions indicate the strength of representational similarity between the two modes. The green text indicates which function to use for the corresponding step.</p></caption>
<graphic xlink:href="fninf-14-563669-g0006.tif"/>
</fig>
<sec>
<title>ROI-Based Computation</title>
<p>For ROI-based computation, users are required to input both the fMRI data and a 3-D mask matrix whose size is consistent with that of the fMRI image corresponding to the fMRI data. The valid voxels belonging to the ROI are extracted, and the activities of these voxels under the different conditions are spread out as vectors. Then the ROI-based RDM is calculated by computing the dissimilarities among these vectors. Finally, we can calculate the similarity between this ROI-based RDM and the RDM for another modality. Steps for ROI-based computation with the corresponding functions in NeuroRA are shown in <xref ref-type="fig" rid="F6">Figure 6A</xref>.</p></sec>
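The ROI steps above (mask the volume, spread ROI voxels into per-condition vectors, compute dissimilarities) can be sketched with toy data as follows; the shapes and names are illustrative assumptions, not NeuroRA's implementation:

```python
import numpy as np

# Toy data: 4 conditions, each a 5x5x5 fMRI volume
rng = np.random.default_rng(3)
n_conditions, nx, ny, nz = 4, 5, 5, 5
fmri_data = rng.standard_normal((n_conditions, nx, ny, nz))

# A 3-D boolean mask of the same spatial size as the volume: a 27-voxel ROI
mask = np.zeros((nx, ny, nz), dtype=bool)
mask[1:4, 1:4, 1:4] = True

# Extract the valid ROI voxels under each condition as vectors
vectors = fmri_data[:, mask]               # shape: [n_conditions, n_roi_voxels]

# ROI-based RDM via correlation distance among the condition vectors
roi_rdm = np.zeros((n_conditions, n_conditions))
for i in range(n_conditions):
    for j in range(n_conditions):
        roi_rdm[i, j] = 1 - np.corrcoef(vectors[i], vectors[j])[0, 1]
```

The resulting ROI RDM can then be compared to an RDM from another modality exactly as in the two-RDM analysis above.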
<sec>
<title>Searchlight-Based Computation</title>
<p>Searchlight-related functions in NeuroRA provide rich parameters for user customization. For each searchlight step, users can customize the size of the basic calculation unit [<italic>k</italic><sub><italic>x</italic></sub>, <italic>k</italic><sub><italic>y</italic></sub>, <italic>k</italic><sub><italic>z</italic></sub>]. Each <italic>k</italic> indicates the number of voxels along the corresponding axis. The strides between different calculation units must be specified as [<italic>s</italic><sub><italic>x</italic></sub>, <italic>s</italic><sub><italic>y</italic></sub>, <italic>s</italic><sub><italic>z</italic></sub>]. The strides refer to how far the calculation unit is moved before another computation is made. Each <italic>s</italic> indicates how many voxels lie between two adjacent calculation units along the corresponding axis. For fMRI data of size [<italic>X, Y, Z</italic>], the kernel size is usually set to more than one voxel, so that each voxel can exist in multiple kernels (calculation units). Therefore, <italic>N</italic> computations are required:</p>
<disp-formula id="E3"><mml:math id="M3"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mi>N</mml:mi><mml:mo>=</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mo>&#x0230A;</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>X</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>k</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mi>s</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>&#x0230B;</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo><mml:mo>&#x000D7;</mml:mo><mml:mo stretchy='false'>(</mml:mo><mml:mrow><mml:mo>&#x0230A;</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>Y</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>k</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mi>s</mml:mi><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>&#x0230B;</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mtext>&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;</mml:mtext><mml:mo>&#x000D7;</mml:mo><mml:mo 
stretchy='false'>(</mml:mo><mml:mrow><mml:mo>&#x0230A;</mml:mo><mml:mrow><mml:mfrac><mml:mrow><mml:mrow><mml:mo>(</mml:mo><mml:mrow><mml:mi>Z</mml:mi><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>k</mml:mi><mml:mi>z</mml:mi></mml:msub></mml:mrow><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mi>s</mml:mi><mml:mi>z</mml:mi></mml:msub></mml:mrow></mml:mfrac></mml:mrow><mml:mo>&#x0230B;</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy='false'>)</mml:mo></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<p>This implies that <italic>N</italic> RDMs must be calculated, which are each related to the corresponding calculation unit. After obtaining searchlight RDMs, users can calculate the similarities between fMRI and other modes. In NeuroRA, the final correlation coefficient of one voxel is the mean value of the correlation coefficients calculated by all kernels that contain this voxel.</p>
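The count of calculation units given by the formula above is straightforward to verify; the sketch below evaluates it for the settings used in the script demo of Table 2 (a [60, 60, 60] volume, kernel [3, 3, 3], strides [1, 1, 1]):

```python
# Number of searchlight calculation units along one axis:
# floor((dim - k) / s) + 1, per the formula in the text
def n_units(dim, k, s):
    return (dim - k) // s + 1

X, Y, Z = 60, 60, 60          # fMRI volume size
kx = ky = kz = 3              # kernel (calculation unit) size
sx = sy = sz = 1              # strides

# Total number of RDMs the searchlight must compute
N = n_units(X, kx, sx) * n_units(Y, ky, sy) * n_units(Z, kz, sz)
```

With these settings each axis yields 58 positions, so N = 58&#x000D7;58&#x000D7;58 RDMs, which is why searchlight analyses are computationally heavy.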
<p><xref ref-type="fig" rid="F6">Figure 6B</xref> shows the steps for searchlight-based computation with the corresponding functions in NeuroRA. <xref ref-type="table" rid="T2">Table 2</xref> is a script demo showing how to conduct a searchlight-based analysis for fMRI data. We first calculate the fMRI RDMs within each searchlight blob and then obtain the similarities, across the whole brain, between the fMRI RDMs and a behavioral RDM or a coding-model RDM constructed from a hypothesis. In a hypothesis-based RDM, values corresponding to the same condition have the highest similarity, and values corresponding to different conditions have low similarity.</p>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Script of searchlight representational analysis between fMRI data and a coding model in NeuroRA.</p></caption>
<table frame="hsides" rules="groups">
<tbody><tr>
<td valign="top" align="center">1</td>
<td valign="top" align="left">from neurora.rdm_cal import fmriRDM</td>
</tr>
<tr>
<td valign="top" align="center">2</td>
<td valign="top" align="left">from neurora.corr_cal_by_rdm import fmrirdms_corr</td>
</tr>
<tr>
<td valign="top" align="center">3</td>
<td valign="top" align="left">import numpy as np</td>
</tr>
<tr>
<td valign="top" align="center">4</td>
<td/>
</tr>
<tr>
<td valign="top" align="center">5</td>
<td valign="top" align="left">&#x00023; calculate the searchlight fMRI RDMs for each subject</td>
</tr>
<tr>
<td valign="top" align="center">6</td>
<td valign="top" align="left">&#x00023; the shape of fmri_data should be [n_conditions, n_subjects, nx, ny, nz]</td>
</tr>
<tr>
<td valign="top" align="center">7</td>
<td valign="top" align="left">&#x00023; nx, ny, nz represent the size of fMRI-img</td>
</tr>
<tr>
<td valign="top" align="center">8</td>
<td valign="top" align="left">&#x00023; here, the size of calculation unit is [3, 3, 3] and the strides for calculating is [1, 1, 1]</td>
</tr>
<tr>
<td valign="top" align="center">9</td>
<td valign="top" align="left">&#x00023; the shape of fmri_rdms will be [n_subjects, n_x, n_y, n_z]</td>
</tr>
<tr>
<td valign="top" align="center">10</td>
<td valign="top" align="left">&#x00023; n_x, n_y, n_z represent the number of calculation units for searchlight along the x, y, z axis.</td>
</tr>
<tr>
<td valign="top" align="center">11</td>
<td valign="top" align="left">fmri_rdms = fmriRDM(fmri_data, ksize=[3, 3, 3], strides=[1, 1, 1], sub_opt=1)</td>
</tr>
<tr>
<td valign="top" align="center">12</td>
<td/>
</tr>
<tr>
<td valign="top" align="center">13</td>
<td valign="top" align="left">&#x00023; initialize the correlation coefficients</td>
</tr>
<tr>
<td valign="top" align="center">14</td>
<td valign="top" align="left">corrs = np.zeros([n_subjects, n_x, n_y, n_z, 2], dtype=float)</td>
</tr>
<tr>
<td valign="top" align="center">15</td>
<td/>
</tr>
<tr>
<td valign="top" align="center">16</td>
<td valign="top" align="left">&#x00023; calculate the correlation coefficients between searchlight fMRI RDMs and a model RDM</td>
</tr>
<tr>
<td valign="top" align="center">17</td>
<td valign="top" align="left">&#x00023; the shape of model_rdm should be [n_conditions, n_conditions]</td>
</tr>
<tr>
<td valign="top" align="center">18</td>
<td valign="top" align="left">&#x00023; the shape of corrs will be [n_subjetcs, n_x, n_y, n_z, 2], 2 represents a <italic>r</italic>-value &#x00026; a <italic>p</italic>-value</td>
</tr>
<tr>
<td valign="top" align="center">19</td>
<td valign="top" align="left">for sub in range(n_subjects):</td>
</tr>
<tr>
<td valign="top" align="center">20</td>
<td valign="top" align="left">&#x000A0;&#x000A0;&#x000A0;&#x000A0;&#x000A0;corrs[sub] = fmrirdms_corr(model_rdm, fmri_rdms[sub])</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>The calculation parameters of fMRI data are ksize=[3, 3, 3] and strides=[1, 1, 1]. Users can just input different data and obtain the correlation results between two modes</italic>.</p>
</table-wrap-foot>
</table-wrap></sec>
<sec>
<title>Save Results as a NIfTI File</title>
<p>NeuroRA provides two functions in the <italic>nii_save</italic> module, <italic>corr_save_nii</italic>() and <italic>stats_save_nii</italic>(), to save the similarity result or the statistical result as a NIfTI file, with thresholding parameters as well. These two functions are used similarly. The former saves the <italic>r</italic>-values obtained after calculating the similarities between the fMRI mode and another mode; the latter saves the <italic>t</italic>-values obtained after statistical analysis. <xref ref-type="table" rid="T3">Table 3</xref> is a script to help users understand how to use <italic>corr_save_nii</italic>() to save the similarity results as a NIfTI file. Users can set thresholds for <italic>p</italic>-values, <italic>r</italic>-values (only in the <italic>corr_save_nii</italic>() function) or <italic>t</italic>-values (only in the <italic>stats_save_nii</italic>() function). Users can also select Family-Wise Error (FWE) or False Discovery Rate (FDR) correction methods to control for multiple comparisons across the whole brain. Furthermore, users can choose whether to smooth the results, whether to plot automatically, etc. For example, if the threshold for the <italic>p</italic>-value is set to 0.05, the final NIfTI file will be filtered with <italic>p</italic> &#x0003C; 0.05, and all voxels with <italic>p</italic> &#x0003E;= 0.05 will be set to 0.</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Script of saving the calculation results as a NIfTI file for fMRI data.</p></caption>
<table frame="hsides" rules="groups">
<tbody><tr>
<td valign="top" align="center">1</td>
<td valign="top" align="left">from neurora.nii_save import corr_save_nii</td>
</tr>
<tr>
<td valign="top" align="center">2</td>
<td/>
</tr>
<tr>
<td valign="top" align="center">3</td>
<td valign="top" align="left">&#x00023; corrs represents the similarities (correlation coefficients) between fMRI and other mode</td>
</tr>
<tr>
<td valign="top" align="center">4</td>
<td valign="top" align="left">&#x00023; the shape of corrs should be [n_x, n_y, n_z, 2]</td>
</tr>
<tr>
<td valign="top" align="center">5</td>
<td valign="top" align="left">&#x00023; filename represents the filename of the result.nii file</td>
</tr>
<tr>
<td valign="top" align="center">6</td>
<td valign="top" align="left">&#x00023; affine represents the information of the fMRI-image array data in a reference space</td>
</tr>
<tr>
<td valign="top" align="center">7</td>
<td valign="top" align="left">&#x00023; here, the size of fMRI-image is [60, 60, 60], the size of calculation unit is [3, 3, 3] and the</td>
</tr>
<tr>
<td valign="top" align="center">8</td>
<td valign="top" align="left">&#x00023; strides for calculating is [1, 1, 1]</td>
</tr>
<tr>
<td valign="top" align="center">9</td>
<td valign="top" align="left">filename = &#x0201C;demo_result.nii&#x0201D;</td>
</tr>
<tr>
<td valign="top" align="center">10</td>
<td valign="top" align="left">corr_save_nii(corrs, filename, affine, size=[60, 60, 60], ksize=[3, 3, 3],</td>
</tr>
<tr>
<td valign="top" align="center">11</td>
<td valign="top" align="left">strides=[1, 1, 1], p=0.05, correct_method=&#x00027;FDR&#x00027;)</td>
</tr>
<tr>
<td valign="top" align="center">12</td>
<td/>
</tr>
<tr>
<td valign="top" align="center">13</td>
<td valign="top" align="left">&#x00023; The output is an [60, 60, 60] NumPy-array</td>
</tr>
<tr>
<td valign="top" align="center">14</td>
<td valign="top" align="left">&#x00023; And a.nii file named &#x00027;demo-results.nii&#x00027; has been generated</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>Users can get the correlation results based on the script in <xref ref-type="table" rid="T2">Table 2</xref>. The NIfTI file can be obtained by entering some necessary parameters</italic>.</p>
</table-wrap-foot>
</table-wrap></sec></sec>
<sec>
<title>Other Representational Analysis</title>
<p>In addition to RSA, users can conduct NPS, STPS, and ISC analyses with NeuroRA. The detailed implementation of these analysis methods can be found in <xref ref-type="supplementary-material" rid="SM5">Supplementary Information for the Implementation of Analysis Methods</xref>. Our toolbox has separate modules to conduct these calculations (<xref ref-type="table" rid="T4">Table 4</xref>). Just like RSA from multiple modalities, the calculations for these other representational analysis methods support EEG-like data as well as fMRI data. Users can calculate the results for each channel or region, each time-window of a time series, or each ROI or searchlight blob (for fMRI), by selecting different functions and setting specific parameters. These calculations are used in a similar way to the RDM and RSA calculations described in the sections above. In detail, <xref ref-type="supplementary-material" rid="SM1">Supplementary Table 1</xref> shows the shape of the input, the parameter settings with calculation implementation, the corresponding shape of the output, and recommended next steps in the analysis. Additionally, users can use <italic>help</italic>() (a built-in function in Python) to see a detailed description of the purpose of a specific function or module.</p>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p>Modules and functions for NPS, STPS, ISC in NeuroRA.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Analysis Method</bold></th>
<th valign="top" align="left"><bold>Module for Computing</bold></th>
<th valign="top" align="left"><bold>Functions for Statistical Analysis</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">NPS</td>
<td valign="top" align="left"><italic>neurora.nps_cal</italic> module</td>
<td valign="top" align="left">For EEG-like:<break/><italic>neurora.stats_cal.stats</italic>()<break/>For fMRI:<break/><italic>neurora.stats_cal.stats_fmri</italic>()</td>
</tr>
<tr>
<td valign="top" align="left">STPS</td>
<td valign="top" align="left"><italic>neurora.stps_cal</italic> module</td>
<td valign="top" align="left">For EEG-like:<break/><italic>neurora.stats_cal.stats_stps</italic>()<break/>For fMRI:<break/><italic>neurora.stats_cal.stats_stpsfmri</italic>()</td>
</tr>
<tr>
<td valign="top" align="left">ISC</td>
<td valign="top" align="left"><italic>neurora.isc_cal</italic> module</td>
<td valign="top" align="left">For EEG-like:<break/><italic>neurora.stats_cal.stats</italic>()<break/>For fMRI:<break/><italic>neurora.stats_cal.stats_iscfmri</italic>()</td>
</tr>
</tbody>
</table>
</table-wrap></sec>
<sec>
<title>Statistical Analysis</title>
<p>NeuroRA provides functions for statistical analysis based on the representational analysis results. The inputs are the similarity maps for each subject, which can be obtained by functions in calculation modules (<italic>corr_cal, corr_cal_by_rdm, nps_cal, stps_cal</italic>, and <italic>isc_cal</italic> modules), and the output will be the statistical results (a <italic>t</italic>-value &#x00026; <italic>p</italic>-value map) (<xref ref-type="table" rid="T5">Table 5</xref>). The output from the functions of calculation modules always includes an <italic>r</italic>-value map and a <italic>p</italic>-value map. Although only the <italic>r</italic>-value map is used for subject-level statistical analysis, users can directly input the output of functions in calculation modules as the input of functions in <italic>stats_cal</italic> module for convenience.</p>
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p>Example of statistical analysis for channel-time based EEG RSA calculation and searchlight fMRI RSA calculation.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Type of Calculation</bold></th>
<th valign="top" align="left"><bold>Example Script</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><italic>channel-time based EEG-like calculation</italic></td>
<td valign="top" align="left">from neurora.corr_cal import bhvANDeeg_corr<break/>from neurora.stats_cal import stats<break/>&#x00023; calculate the correlation coefficients between behavioral data and EEG data<break/>corrs = bhvANDeeg_corr(bhv_data, eeg_data, sub_opt=1, chl_opt=1, time_opt=1)<break/>&#x00023; the shape of corrs should be [n_subs, n_chls, n_ts, 2]<break/>stats(corrs, permutation=True, iter=1000)<break/>&#x00023; The output is an [n_chls, n_ts, 2] NumPy-array<break/>&#x00023; 2 represents a <italic>t</italic>-value and a <italic>p</italic>-value</td>
</tr>
<tr>
<td valign="top" align="left"><italic>searchlight fMRI calculation</italic></td>
<td valign="top" align="left">from neurora.corr_cal import bhvANDfmri_corr<break/>from neurora.stats_cal import stats_fmri<break/>&#x00023; calculate the correlation coefficients between behavioral data and fMRI data<break/>corrs = bhvANDfmri_corr(bhv_data, fmri_data, ksize=[3, 3, 3], strides=[1, 1, 1])<break/>&#x00023; the shape of corrs should be [n_subs, n_x, n_y, n_z, 2]<break/>stats_fmri(corrs, permutation=True, iter=10000)<break/>&#x00023; The output is an [n_x, n_y, n_z, 2] NumPy-array</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In this part, the correlation coefficients calculated by the calculation modules are tested against zero for significance. In addition, we add a permutation test to all processes of statistical analysis. This means that statistical significance can be assessed by randomly shuffling the data and calculating the results over many iterations (for example, 5,000) to draw a distribution. Real data exceeding 95% of the distribution are regarded as significant. <xref ref-type="table" rid="T5">Table 5</xref> is a script showing how to use the <italic>stats_cal</italic> module to conduct statistical analysis for RSA results from different modes. Users can independently choose whether to use the permutation test and change the iteration number by setting parameters in the related functions.</p></sec>
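As a hedged sketch of such a group-level test against zero: one common shuffling scheme (an illustrative assumption here, not necessarily NeuroRA's exact scheme) is to randomly flip the sign of each subject's similarity value to build a null distribution of mean similarities.

```python
import numpy as np

# Toy input: one r-value per subject (20 subjects, true effect around 0.2)
rng = np.random.default_rng(4)
subject_corrs = 0.2 + 0.1 * rng.standard_normal(20)

observed = subject_corrs.mean()

# Build the null distribution by random sign-flipping (H0: mean is zero)
n_iter = 5000
null = np.empty(n_iter)
for i in range(n_iter):
    signs = rng.choice([-1, 1], size=subject_corrs.size)
    null[i] = (signs * subject_corrs).mean()

# Observed data exceeding 95% of the null distribution count as significant
p_value = np.mean(null >= observed)
significant = p_value < 0.05
```

In NeuroRA this logic runs per channel, time-window, or voxel, producing the t-value and p-value maps described above.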
<sec>
<title>Visualization of Results</title>
<p>NeuroRA provides several functions to visualize the results in <italic>rsa_plot</italic> module. Some typical features are shown in <xref ref-type="fig" rid="F7">Figure 7</xref>.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Examples of visualizations implemented in NeuroRA. Top-left: Plot the RDM with the functions <italic>plot_rdm</italic>() and <italic>plot_rdm_withvalue</italic>(). Top-right: Plot the results by time sequence with the functions <italic>plot_corrs_by_time</italic>() and <italic>plot_corrs_hotmap</italic>(). Bottom-left: Plot fMRI results as 2-D slices with the function <italic>plot_brainrsa_montage</italic>(). Bottom-right: Plot fMRI results as a surface in 3-D space with the function <italic>plot_brainrsa_surface</italic>().</p></caption>
<graphic xlink:href="fninf-14-563669-g0007.tif"/>
</fig>
<p>The basic option is to visualize RDMs with the function <italic>plot_rdm</italic>() or <italic>plot_rdm_withvalue</italic>(). A more advanced option for EEG-like data is to visualize the results across different time points. On the one hand, users can use the functions <italic>plot_corrs_by_time</italic>() and <italic>plot_tbytsim_withstats</italic>() to plot the curve. On the other hand, users can use the functions <italic>plot_corrs_hotmap</italic>(), <italic>plot_corrs_hotmap_stats</italic>() (for <italic>r</italic>-values), <italic>plot_stats_hotmap</italic>() (for <italic>t</italic>-values) and <italic>plot_nps_hotmap</italic>() (for NPS) to plot the hotmap. NeuroRA also has options for plotting fMRI results on a brain. Users can use functions such as <italic>plot_brainrsa_glass</italic>(), <italic>plot_brainrsa_montage</italic>() and <italic>plot_brainrsa_regions</italic>() to plot fMRI results as 2-D slices, and <italic>plot_brainrsa_surface</italic>() to plot results as surfaces in 3-D space. The features and applicability of the functions in the <italic>rsa_plot</italic> module are shown in <xref ref-type="table" rid="T6">Table 6</xref>. The implementation of visualization requires the Pyplot module in Matplotlib and the nilearn package.</p>
<table-wrap position="float" id="T6">
<label>Table 6</label>
<caption><p>Features and applicability of the functions for plotting results in NeuroRA.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th/>
<th valign="top" align="left"><bold>Function</bold></th>
<th valign="top" align="left"><bold>Feature and Applicability</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">for RDM</td>
<td valign="top" align="left"><italic>plot_rdm</italic>()</td>
<td valign="top" align="left">Plot the RDM<break/> - The input should be an RDM (N_conditions&#x000D7;N_conditions).</td>
</tr>
<tr>
<td/>
<td valign="top" align="left"><italic>plot_rdm_withvalue</italic>()</td>
<td valign="top" align="left">Plot the RDM with values<break/> - The input should be an RDM (N_conditions&#x000D7;N_conditions).</td>
</tr>
<tr>
<td valign="top" align="left">for EEG-like</td>
<td valign="top" align="left"><italic>plot_corrs_by_time</italic>()</td>
<td valign="top" align="left">Plot the correlation coefficients for different conditions by time sequence<break/> - The input should be a matrix (N_conditions&#x000D7;N_time-points) of correlation coefficients.</td>
</tr>
<tr>
<td/>
<td valign="top" align="left"><italic>plot_tbytsim_withstats</italic>()</td>
<td valign="top" align="left">Plot the similarity averaged across all subjects by time sequence, with statistical results<break/> - The input should be a matrix (N_subs&#x000D7;N_time-points) of similarities.</td>
</tr>
<tr>
<td/>
<td valign="top" align="left"><italic>plot_corrs_hotmap</italic>()</td>
<td valign="top" align="left">Plot the hotmap of correlation coefficients for channels/regions by time sequence<break/> - The input should be a matrix (N_channels&#x000D7;N_time-points) of correlation coefficients.</td>
</tr>
<tr>
<td/>
<td valign="top" align="left"><italic>plot_corrs_hotmap_stats</italic>()</td>
<td valign="top" align="left">Plot the hotmap of correlation coefficients for channels/regions by time sequence, with significant time-windows outlined<break/> - The inputs should be a matrix (N_channels&#x000D7;N_time-points) of correlation coefficients and a matrix (N_channels&#x000D7;N_time-points&#x000D7;2) of <italic>t</italic>-values and <italic>p</italic>-values.</td>
</tr>
<tr>
<td/>
<td valign="top" align="left"><italic>plot_stats_hotmap</italic>()</td>
<td valign="top" align="left">Plot the hotmap of statistical results for channels/regions by time sequence<break/> - The input should be a matrix (N_channels&#x000D7;N_time-points&#x000D7;2) of <italic>t</italic>-values and <italic>p</italic>-values.</td>
</tr>
<tr>
<td/>
<td valign="top" align="left"><italic>plot_nps_hotmap</italic>()</td>
<td valign="top" align="left">Plot the hotmap of NPS for channels/regions by time sequence<break/> - The input should be a matrix (N_channels&#x000D7;N_time-points) of similarities.</td>
</tr>
<tr>
<td valign="top" align="left">for fMRI</td>
<td valign="top" align="left"><italic>plot_brainrsa_glass</italic>()</td>
<td valign="top" align="left">Plot the 2-D projection of the RSA-results<break/> - The input should be the .nii file generated by functions in the <italic>neurora.nii_save</italic> module.</td>
</tr>
<tr>
<td/>
<td valign="top" align="left"><italic>plot_brainrsa_montage</italic>()</td>
<td valign="top" align="left">Plot the RSA-results by different cuts<break/> - The input should be the .nii file generated by functions in the <italic>neurora.nii_save</italic> module.</td>
</tr>
<tr>
<td/>
<td valign="top" align="left"><italic>plot_brainrsa_regions</italic>()</td>
<td valign="top" align="left">Plot the high-correlation regions of RSA-results by three cuts (frontal, axial, and lateral)<break/> - The input should be the .nii file generated by functions in the <italic>neurora.nii_save</italic> module.</td>
</tr>
<tr>
<td/>
<td valign="top" align="left"><italic>plot_brainrsa_surface()</italic></td>
<td valign="top" align="left">Plot the RSA-results into a brain surface<break/> - The input should be the.nii file generated by functions in <italic>neurora.nii_save</italic> module</td>
</tr>
</tbody>
</table>
</table-wrap>
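<p>Several rows of Table 6 refer to a stacked (N_channels&#x000D7;N_time-points&#x000D7;2) array of <italic>t</italic>-values and <italic>p</italic>-values. A minimal sketch of how such an array might be assembled from per-subject similarity values, using SciPy directly rather than NeuroRA (the sizes and data are hypothetical):</p>

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subs, n_channels, n_tp = 12, 32, 50   # hypothetical sizes

# Per-subject similarity values for each channel and time point.
corrs = rng.normal(0.05, 0.2, size=(n_subs, n_channels, n_tp))

# One-sample t-test against 0 across subjects, per channel/time point.
t_vals, p_vals = stats.ttest_1samp(corrs, popmean=0, axis=0)

# Stack into the (N_channels x N_time-points x 2) layout described in Table 6.
stats_arr = np.stack([t_vals, p_vals], axis=-1)
```

<p>An array with this layout could then serve as the statistics input that functions such as <italic>plot_stats_hotmap</italic>() expect, per the table above.</p>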
<p>We also provide several code demos for NeuroRA based on publicly available datasets. The first demo is based on the visual-92-categories-task MEG dataset (Cichy et al., <xref ref-type="bibr" rid="B8">2014</xref>), from which we extracted the first three subjects&#x00027; data. <xref ref-type="fig" rid="F8">Figure 8A</xref> shows the correlation-based RDMs at three different time points computed with NeuroRA [SVM-based RDMs from Cichy et al. (<xref ref-type="bibr" rid="B8">2014</xref>) for the first three subjects can be seen in <xref ref-type="supplementary-material" rid="SM7">Supplementary Figure 1A</xref>] and the temporal similarity results obtained by comparing the neural representations at 200 and 800 ms with those at all time points. Neural patterns were more similar when participants were viewing human faces (the small blue squares in the RDMs), and representations at nearby time points were more similar. The second demo is based on subject 2&#x00027;s data in the Haxby fMRI dataset (Haxby, <xref ref-type="bibr" rid="B21">2001</xref>). <xref ref-type="fig" rid="F8">Figure 8B</xref> shows the searchlight-based RSA results between an &#x0201C;animate-inanimate&#x0201D; coding model RDM and searchlight RDMs from the fMRI data. The results indicated that the temporal cortex was primarily involved in coding whether objects were animate or inanimate. The third demo is based on EEG data from a visual working memory task (Bae and Luck, <xref ref-type="bibr" rid="B2">2019</xref>), from which we extracted the first five subjects&#x00027; event-related potential (ERP) data. <xref ref-type="fig" rid="F8">Figure 8C</xref> shows the RSA-based decoding results obtained by comparing a coding model RDM with temporal RDMs from the EEG data (the temporal SVM-based decoding results for these five subjects can be seen in <xref ref-type="supplementary-material" rid="SM7">Supplementary Figure 1</xref>). Both orientation and position could be successfully decoded from ERP data in the visual working memory task.
In these demos, users can learn how to use NeuroRA to perform representational analysis and plot the main results, including calculating RDMs at different time points (<xref ref-type="fig" rid="F8">Figure 8A</xref>), computing correlations over the time series (<xref ref-type="fig" rid="F8">Figure 8A</xref>), running a searchlight analysis between brain activity and an &#x0201C;animate-inanimate&#x0201D; coding model (<xref ref-type="fig" rid="F8">Figure 8B</xref>), and fitting a hypothesis-based RDM to RDMs from neural activity across the time sequence (<xref ref-type="fig" rid="F8">Figure 8C</xref>), among others (see more: <ext-link ext-link-type="uri" xlink:href="https://github.com/neurora/NeuroRA/tree/master/demo">https://github.com/neurora/NeuroRA/tree/master/demo</ext-link>). These demos contain several critical sections: loading and preprocessing data, calculating RDMs, calculating the neural similarities or similarity matrix, and plotting. Users can download the tutorial from the NeuroRA website for further details.</p>
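<p>The core of the EEG demo workflow, comparing a coding model RDM with time-resolved neural RDMs, can be sketched as follows. This is an illustrative stand-in using NumPy and SciPy on simulated data, not the NeuroRA demo code itself; all sizes, the simulated EEG array, and the two-category model RDM are hypothetical:</p>

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_conditions, n_channels, n_tp = 16, 32, 100   # hypothetical sizes
eeg = rng.standard_normal((n_conditions, n_channels, n_tp))

# Hypothetical coding model RDM: two categories, dissimilarity 0 within
# a category and 1 between categories (cf. the "animate-inanimate" model).
model = np.ones((n_conditions, n_conditions))
half = n_conditions // 2
model[:half, :half] = 0.0
model[half:, half:] = 0.0
np.fill_diagonal(model, 0.0)

# Correlate the model RDM with the neural RDM at each time point,
# using only the upper triangle (RDMs are symmetric).
triu = np.triu_indices(n_conditions, k=1)
rs = np.empty(n_tp)
for t in range(n_tp):
    neural_rdm = 1 - np.corrcoef(eeg[:, :, t])   # neural RDM at time t
    rs[t], _ = spearmanr(neural_rdm[triu], model[triu])
```

<p>The resulting similarity curve <code>rs</code> over time points is the kind of result that <italic>plot_corrs_by_time</italic>() or <italic>plot_tbytsim_withstats</italic>() would then visualize.</p>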
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>Demo results. <bold>(A)</bold> Left: the RDMs at 0, 100, 200, and 300 ms based on all 302 channels&#x00027; MEG data for the first three subjects [data from Cichy et al. (<xref ref-type="bibr" rid="B8">2014</xref>)]. Right: the neural representations at 200 and 800 ms were used to calculate the similarities with the neural representations at all time points. <bold>(B)</bold> The searchlight results between an &#x0201C;animate-inanimate&#x0201D; coding model RDM and searchlight RDMs based on subject 2&#x00027;s data [data from Haxby (<xref ref-type="bibr" rid="B21">2001</xref>)]. In this coding model RDM, we assume consistent representations among animate objects and among inanimate objects. <bold>(C)</bold> The RSA-based decoding results for orientation and position, obtained by calculating the correlation coefficients between a coding model RDM and EEG RDMs by time sequence based on the first five subjects&#x00027; data in experiment 2 [data from Bae and Luck (<xref ref-type="bibr" rid="B2">2019</xref>)]. In this coding model RDM, we assume that a larger difference between the corresponding two angles corresponds to higher dissimilarity, and vice versa. In the two rightmost plots, the small orange rectangles inside the plotting area and the orange shadow indicate <italic>p</italic> &#x0003C; 0.05; line width reflects &#x000B1;SEM.</p></caption>
<graphic xlink:href="fninf-14-563669-g0008.tif"/>
</fig></sec>
<sec>
<title>User Support</title>
<p>To report any bugs in the code or submit any queries or suggestions about our toolbox, users can use the issue tracker on GitHub: <ext-link ext-link-type="uri" xlink:href="https://github.com/neurora/NeuroRA/issues">https://github.com/neurora/NeuroRA/issues</ext-link>. We will reply and act accordingly as soon as possible.</p></sec></sec>
<sec sec-type="discussion" id="s7">
<title>Discussion</title>
<p>RSA provides a novel way of examining large-scale neural data and has proven powerful in the field of cognitive neuroscience. An increasing number of studies have used such multivariate analyses to obtain information that could not be observed through univariate analysis (Mahmoudi et al., <xref ref-type="bibr" rid="B35">2012</xref>; Sui et al., <xref ref-type="bibr" rid="B48">2012</xref>; Haxby et al., <xref ref-type="bibr" rid="B22">2014</xref>). More importantly, experimental data obtained from different modalities often must be assessed together, and RSA is a suitable way to quantitatively compare results across modalities with distinctive dimensions and resolutions, or even obtained from different species (Salmela et al., <xref ref-type="bibr" rid="B45">2016</xref>; Cichy and Pantazis, <xref ref-type="bibr" rid="B7">2017</xref>).</p>
<p>In the present study, we have developed a Python-based toolbox that can perform representational analysis on neural data from many different modalities. Compared with other toolkits or modules that also implement RSA, our toolbox has a much wider range of applications and richer, more convenient functionality: with only a few lines of code, users can conduct not only RSA but also NPS, STPS, ISC, statistical analysis, and visualization, particularly for the analysis of multi-modal data and cross-modal comparisons. Moreover, it is open-source, free to use, and cross-platform.</p>
<p>For detailed information on each module and function in our toolbox, including the format of the input data, the choice of parameters, and the format of the output data, users can refer to the toolbox tutorial. To further understand the specific implementation of each function, users can read the source code directly. If users encounter any problems or difficulties, they can consult NeuroRA&#x00027;s tutorials or email our developers.</p>
<p>Although we have already implemented the essential functions for representational analysis, several limitations remain to be addressed in the future. First, NeuroRA is not yet designed to process raw data. However, users can utilize other toolboxes such as EEGLAB (Delorme and Makeig, <xref ref-type="bibr" rid="B9">2004</xref>), MNE (Gramfort et al., <xref ref-type="bibr" rid="B15">2013</xref>), and Nibabel (Brett et al., <xref ref-type="bibr" rid="B4">2016</xref>) to import data and convert them into a format suitable for NeuroRA. We plan to develop an integrated format-conversion function in the next version. Second, there is still significant room for improving the computational performance of NeuroRA, especially in the iterative calculation on fMRI data, which is often slow. This is due to the nested loops in the code structure, needed to traverse the data from the entire brain and to iterate the random shuffling many times. In the future, we will reduce the computation time by optimizing functions for GPUs and using multithreading to accelerate some computing processes. Third, there is no graphical user interface (GUI) at present, which we plan to design and implement based on PyQt in a future version. Users could then obtain the final results by dragging the data file to a specific location in the GUI and filling in the relevant parameters. Fourth, object-oriented programming methods could also be applied to the toolkit: we could build classes whose public methods take the visualization or statistical-analysis parameters and whose private methods manage the multidimensional matrices, hidden from the user. Fifth, we need to add some features to the plotting part, such as returning the Matplotlib objects, assembling subplots, and saving figures to disk instead of only displaying them on screen.
Sixth, we hope to provide a more straightforward version by streamlining the full analysis workflow on top of the existing functions; once the intermediate steps are simplified, users will not need to call additional functions for data transformation. Finally, although we added unit tests to our toolbox, the tests available assess only the shapes of the outputs for different inputs rather than checking the correctness of the computations. Adding correctness tests will be an important part of NeuroRA&#x00027;s future development.</p>
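<p>To make the testing limitation concrete, the contrast between a shape-only check and a correctness check might look like the following. The function under test is a simple stand-in for an RDM routine, not NeuroRA code:</p>

```python
import numpy as np

def compute_rdm(patterns):
    """Stand-in RDM routine: 1 - Pearson r between condition patterns."""
    return 1 - np.corrcoef(patterns)

# Shape-only check, of the kind described above: it passes even if the
# dissimilarity values themselves were wrong.
rdm = compute_rdm(np.random.default_rng(3).standard_normal((10, 64)))
assert rdm.shape == (10, 10)

# A correctness-oriented test additionally pins down a known value:
# three identical patterns must yield zero dissimilarity everywhere.
identical = np.tile(np.arange(5.0), (3, 1))
assert np.allclose(compute_rdm(identical), 0)
```

<p>Tests of the second kind, which assert known output values for constructed inputs, are what the shape-only tests currently lack.</p>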
<p>Through NeuroRA, for the first time, we provide a way for researchers to perform representational analysis on neural data from many different modalities. However, this is only a starting point. As algorithms and applications for representational analysis develop, we will include new functionalities, such as using the classification-based decoding accuracy between neural activities under two conditions as the value in an RDM (Cichy et al., <xref ref-type="bibr" rid="B8">2014</xref>; Cichy and Pantazis, <xref ref-type="bibr" rid="B7">2017</xref>; Xie et al., <xref ref-type="bibr" rid="B53">2020</xref>) and automatically generating RDMs for each layer of a deep convolutional neural network, as well as new visualization tools that can plot the representational space of neural activities with t-SNE and show dynamic representational analysis results. We hope that many exciting findings will be made with our toolbox, and we would like to collaborate with researchers interested in this tool to improve it further.</p>
<sec id="s8">
<title>Information Sharing Statement</title>
<p>NeuroRA is available at <ext-link ext-link-type="uri" xlink:href="https://pypi.org/project/neurora/">https://pypi.org/project/neurora/</ext-link>. The website for NeuroRA is <ext-link ext-link-type="uri" xlink:href="https://neurora.github.io/NeuroRA/">https://neurora.github.io/NeuroRA/</ext-link>, and the tutorial of the toolbox can be downloaded there. Users can also read the online API documentation at <ext-link ext-link-type="uri" xlink:href="https://neurora.github.io/documentation/">https://neurora.github.io/documentation/</ext-link>. The code for NeuroRA can be accessed on GitHub: <ext-link ext-link-type="uri" xlink:href="https://github.com/neurora/NeuroRA">https://github.com/neurora/NeuroRA</ext-link>.</p>
<sec sec-type="data-availability-statement" id="s9">
<title>Data Availability Statement</title>
<p>The original contributions presented in the study are included in the NeuroRA repository (<ext-link ext-link-type="uri" xlink:href="https://github.com/neurora/NeuroRA">https://github.com/neurora/NeuroRA</ext-link>); further inquiries can be directed to the corresponding author/s.</p>
<sec id="s10">
<title>Author Contributions</title>
<p>ZL and YK conceived the research, analyzed the data, and wrote the paper. ZL coded the toolbox. YK supervised the research.</p>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec>
</body>
<back>
<ack><p>This manuscript has been released as a pre-print at bioRxiv (ZL &#x00026; YK).</p>
</ack><sec sec-type="supplementary-material" id="s11">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fninf.2020.563669/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fninf.2020.563669/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Table_1.pdf" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Table_2.pdf" id="SM2" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Table_3.pdf" id="SM3" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Presentation_1.pdf" id="SM4" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Presentation_2.pdf" id="SM5" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Presentation_3.pdf" id="SM6" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="Image_1.pdf" id="SM7" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/></sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alfred</surname> <given-names>K. L.</given-names></name> <name><surname>Connolly</surname> <given-names>A. C.</given-names></name> <name><surname>Kraemer</surname> <given-names>D. J. M.</given-names></name></person-group> (<year>2018</year>). <article-title>Putting the pieces together: generating a novel representational space through deductive reasoning</article-title>. <source>NeuroImage</source> <volume>183</volume>, <fpage>99</fpage>&#x02013;<lpage>111</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2018.07.062</pub-id><pub-id pub-id-type="pmid">30081195</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bae</surname> <given-names>G. Y.</given-names></name> <name><surname>Luck</surname> <given-names>S. J.</given-names></name></person-group> (<year>2019</year>). <article-title>Dissociable decoding of spatial attention and working memory from EEG oscillations and sustained potentials</article-title>. <source>J. Neurosci.</source> <volume>38</volume>, <fpage>409</fpage>&#x02013;<lpage>422</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.2860-17.2017</pub-id><pub-id pub-id-type="pmid">29167407</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bonner</surname> <given-names>M. F.</given-names></name> <name><surname>Epstein</surname> <given-names>R. A.</given-names></name></person-group> (<year>2018</year>). <article-title>Computational mechanisms underlying cortical responses to the affordance properties of visual scenes</article-title>. <source>PLoS Computat Biol.</source> <volume>14</volume>:<fpage>e1006111</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1006111</pub-id><pub-id pub-id-type="pmid">29684011</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Brett</surname> <given-names>M.</given-names></name> <name><surname>Hanke</surname> <given-names>M.</given-names></name> <name><surname>Cipollini</surname> <given-names>B.</given-names></name> <name><surname>C&#x000F4;t&#x000E9;</surname> <given-names>M.-A.</given-names></name> <name><surname>Markiewicz</surname> <given-names>C.</given-names></name> <name><surname>Gerhard</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2016</year>). <source>Nibabel: Access a Cacophony of Neuro-Imaging File Formats, Version 2.1. 0</source>. <publisher-loc>Zenodo</publisher-loc>.</citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cavanagh</surname> <given-names>S. E.</given-names></name> <name><surname>Towers</surname> <given-names>J. P.</given-names></name> <name><surname>Wallis</surname> <given-names>J. D.</given-names></name> <name><surname>Hunt</surname> <given-names>L. T.</given-names></name> <name><surname>Kennerley</surname> <given-names>S. W.</given-names></name></person-group> (<year>2018</year>). <article-title>Reconciling persistent and dynamic hypotheses of working memory coding in prefrontal cortex</article-title>. <source>Nat. Commun.</source> <volume>9</volume>:<fpage>3498</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-018-05873-3</pub-id><pub-id pub-id-type="pmid">30158519</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Shimotake</surname> <given-names>A.</given-names></name> <name><surname>Matsumoto</surname> <given-names>R.</given-names></name> <name><surname>Kunieda</surname> <given-names>T.</given-names></name> <name><surname>Kikuchi</surname> <given-names>T.</given-names></name> <name><surname>Miyamoto</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>The &#x02018;when&#x02019; and &#x02018;where&#x02019; of semantic coding in the anterior temporal lobe: Temporal representational similarity analysis of electrocorticogram data</article-title>. <source>Cortex</source> <volume>79</volume>, <fpage>1</fpage>&#x02013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1016/j.cortex.2016.02.015</pub-id><pub-id pub-id-type="pmid">27085891</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cichy</surname> <given-names>R. M.</given-names></name> <name><surname>Pantazis</surname> <given-names>D.</given-names></name></person-group> (<year>2017</year>). <article-title>Multivariate pattern analysis of MEG and EEG: a comparison of representational structure in time and space</article-title>. <source>NeuroImage</source> <volume>158</volume>, <fpage>441</fpage>&#x02013;<lpage>454</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2017.07.023</pub-id><pub-id pub-id-type="pmid">28716718</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cichy</surname> <given-names>R. M.</given-names></name> <name><surname>Pantazis</surname> <given-names>D.</given-names></name> <name><surname>Oliva</surname> <given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>Resolving human object recognition in space and time</article-title>. <source>Nat. Neurosci.</source> <volume>17</volume>, <fpage>455</fpage>&#x02013;<lpage>462</lpage>. <pub-id pub-id-type="doi">10.1038/nn.3635</pub-id><pub-id pub-id-type="pmid">24464044</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Delorme</surname> <given-names>A.</given-names></name> <name><surname>Makeig</surname> <given-names>S.</given-names></name></person-group> (<year>2004</year>). <article-title>EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis</article-title>. <source>J. Neurosci. Methods</source> <volume>134</volume>, <fpage>9</fpage>&#x02013;<lpage>21</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2003.10.009</pub-id><pub-id pub-id-type="pmid">15102499</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Efron</surname> <given-names>B.</given-names></name> <name><surname>Tibshirani</surname> <given-names>R. J.</given-names></name></person-group> (<year>1994</year>). <source>An Introduction to the Bootstrap</source>. <publisher-loc>Boca Raton</publisher-loc>: <publisher-name>CRC press</publisher-name>.</citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eickenberg</surname> <given-names>M.</given-names></name> <name><surname>Gramfort</surname> <given-names>A.</given-names></name> <name><surname>Varoquaux</surname> <given-names>G.</given-names></name> <name><surname>Thirion</surname> <given-names>B.</given-names></name></person-group> (<year>2017</year>). <article-title>Seeing it all: convolutional network layers map the function of the human visual system</article-title>. <source>NeuroImage</source> <volume>152</volume>, <fpage>184</fpage>&#x02013;<lpage>194</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2016.10.001</pub-id><pub-id pub-id-type="pmid">27777172</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Etzel</surname> <given-names>J. A.</given-names></name> <name><surname>Courtney</surname> <given-names>Y.</given-names></name> <name><surname>Carey</surname> <given-names>C. E.</given-names></name> <name><surname>Gehred</surname> <given-names>M. Z.</given-names></name> <name><surname>Agrawal</surname> <given-names>A.</given-names></name> <name><surname>Braver</surname> <given-names>T. S.</given-names></name></person-group> (<year>2020</year>). <article-title>Pattern similarity analyses of frontoparietal task coding: individual variation and genetic influences</article-title>. <source>Cereb. Cortex</source> <volume>30</volume>, <fpage>3167</fpage>&#x02013;<lpage>3183</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhz301</pub-id><pub-id pub-id-type="pmid">32086524</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Evans</surname> <given-names>S.</given-names></name> <name><surname>Davis</surname> <given-names>M. H.</given-names></name></person-group> (<year>2015</year>). <article-title>Hierarchical organization of auditory and motor representations in speech perception: evidence from searchlight similarity analysis</article-title>. <source>Cerebral Cortex</source> <volume>25</volume>, <fpage>4772</fpage>&#x02013;<lpage>4788</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhv136</pub-id><pub-id pub-id-type="pmid">26157026</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Feng</surname> <given-names>C.</given-names></name> <name><surname>Yan</surname> <given-names>X.</given-names></name> <name><surname>Huang</surname> <given-names>W.</given-names></name> <name><surname>Han</surname> <given-names>S.</given-names></name> <name><surname>Ma</surname> <given-names>Y.</given-names></name></person-group> (<year>2018</year>). <article-title>Neural representations of the multidimensional self in the cortical midline structures</article-title>. <source>NeuroImage</source> <volume>183</volume>, <fpage>291</fpage>&#x02013;<lpage>299</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2018.08.018</pub-id><pub-id pub-id-type="pmid">30118871</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gramfort</surname> <given-names>A.</given-names></name> <name><surname>Luessi</surname> <given-names>M.</given-names></name> <name><surname>Larson</surname> <given-names>E.</given-names></name> <name><surname>Engemann</surname> <given-names>D. A.</given-names></name> <name><surname>Strohmeier</surname> <given-names>D.</given-names></name> <name><surname>Brodbeck</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>MEG and EEG data analysis with MNE-Python</article-title>. <source>Front. Neurosci.</source> <volume>7</volume>:<fpage>267</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2013.00267</pub-id><pub-id pub-id-type="pmid">24431986</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Greene</surname> <given-names>M. R.</given-names></name> <name><surname>Hansen</surname> <given-names>B. C.</given-names></name></person-group> (<year>2018</year>). <article-title>Shared spatiotemporal category representations in biological and artificial deep neural networks</article-title>. <source>PLoS Computat. Biol.</source> <volume>14</volume>:<fpage>e1006327</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1006327</pub-id><pub-id pub-id-type="pmid">30040821</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>G&#x000FC;&#x000E7;l&#x000FC;</surname> <given-names>U.</given-names></name> <name><surname>van Gerven</surname> <given-names>M. A.</given-names></name></person-group> (<year>2015</year>). <article-title>Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream</article-title>. <source>J. Neurosci.</source> <volume>35</volume>, <fpage>10005</fpage>&#x02013;<lpage>10014</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.5023-14.2015</pub-id><pub-id pub-id-type="pmid">26157000</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hall-McMaster</surname> <given-names>S.</given-names></name> <name><surname>Muhle-Karbe</surname> <given-names>P. S.</given-names></name> <name><surname>Myers</surname> <given-names>N. E.</given-names></name> <name><surname>Stokes</surname> <given-names>M. G.</given-names></name></person-group> (<year>2019</year>). <article-title>Reward boosts neural coding of task rules to optimize cognitive flexibility</article-title>. <source>J. Neurosci.</source> <volume>39</volume>, <fpage>8549</fpage>&#x02013;<lpage>8561</lpage>. <pub-id pub-id-type="doi">10.1523/JNEUROSCI.0631-19.2019</pub-id><pub-id pub-id-type="pmid">31519820</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hanke</surname> <given-names>M.</given-names></name> <name><surname>Halchenko</surname> <given-names>Y. O.</given-names></name> <name><surname>Sederberg</surname> <given-names>P. B.</given-names></name> <name><surname>Hanson</surname> <given-names>S. J.</given-names></name> <name><surname>Haxby</surname> <given-names>J. V.</given-names></name> <name><surname>Pollmann</surname> <given-names>S.</given-names></name></person-group> (<year>2009</year>). <article-title>PyMVPA: a python toolbox for multivariate pattern analysis of fMRI data</article-title>. <source>Neuroinformatics</source> <volume>7</volume>, <fpage>37</fpage>&#x02013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1007/s12021-008-9041-y</pub-id><pub-id pub-id-type="pmid">19184561</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hasson</surname> <given-names>U.</given-names></name> <name><surname>Nir</surname> <given-names>Y.</given-names></name> <name><surname>Levy</surname> <given-names>I.</given-names></name> <name><surname>Fuhrmann</surname> <given-names>G.</given-names></name> <name><surname>Malach</surname> <given-names>R.</given-names></name></person-group> (<year>2004</year>). <article-title>Intersubject synchronization of cortical activity during natural vision</article-title>. <source>Science</source> <volume>303</volume>, <fpage>1634</fpage>&#x02013;<lpage>1640</lpage>. <pub-id pub-id-type="doi">10.1126/science.1089506</pub-id><pub-id pub-id-type="pmid">15016991</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haxby</surname> <given-names>J. V.</given-names></name></person-group> (<year>2001</year>). <article-title>Distributed and overlapping representations of faces and objects in ventral temporal cortex</article-title>. <source>Science</source> <volume>293</volume>, <fpage>2425</fpage>&#x02013;<lpage>2430</lpage>. <pub-id pub-id-type="doi">10.1126/science.1063736</pub-id><pub-id pub-id-type="pmid">11577229</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haxby</surname> <given-names>J. V.</given-names></name> <name><surname>Connolly</surname> <given-names>A. C.</given-names></name> <name><surname>Guntupalli</surname> <given-names>J. S.</given-names></name></person-group> (<year>2014</year>). <article-title>Decoding neural representational spaces using multivariate pattern analysis</article-title>. <source>Annu. Rev. Neurosci.</source> <volume>37</volume>, <fpage>435</fpage>&#x02013;<lpage>456</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-neuro-062012-170325</pub-id><pub-id pub-id-type="pmid">25002277</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Henriksson</surname> <given-names>L.</given-names></name> <name><surname>Mur</surname> <given-names>M.</given-names></name> <name><surname>Kriegeskorte</surname> <given-names>N.</given-names></name></person-group> (<year>2019</year>). <article-title>Rapid invariant encoding of scene layout in human OPA</article-title>. <source>Neuron</source> <volume>103</volume>, <fpage>161</fpage>&#x02013;<lpage>171</lpage>.e3. <pub-id pub-id-type="doi">10.1016/j.neuron.2019.04.014</pub-id><pub-id pub-id-type="pmid">31097360</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hunter</surname> <given-names>J. D.</given-names></name></person-group> (<year>2007</year>). <article-title>Matplotlib: a 2D graphics environment</article-title>. <source>Comput. Sci. Eng.</source> <volume>9</volume>, <fpage>90</fpage>&#x02013;<lpage>95</lpage>. <pub-id pub-id-type="doi">10.1109/MCSE.2007.55</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>M. K.</given-names></name> <name><surname>Raye</surname> <given-names>C. L.</given-names></name> <name><surname>Mitchell</surname> <given-names>K. J.</given-names></name> <name><surname>Greene</surname> <given-names>E. J.</given-names></name> <name><surname>Cunningham</surname> <given-names>W. A.</given-names></name> <name><surname>Sanislow</surname> <given-names>C. A.</given-names></name></person-group> (<year>2005</year>). <article-title>Using fMRI to investigate a component process of reflection: prefrontal correlates of refreshing a just-activated representation</article-title>. <source>Cogn. Affect. Behav. Neurosci.</source> <volume>5</volume>, <fpage>339</fpage>&#x02013;<lpage>361</lpage>. <pub-id pub-id-type="doi">10.3758/CABN.5.3.339</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jordan</surname> <given-names>M. I.</given-names></name> <name><surname>Mitchell</surname> <given-names>T. M.</given-names></name></person-group> (<year>2015</year>). <article-title>Machine learning: trends, perspectives, and prospects</article-title>. <source>Science</source> <volume>349</volume>, <fpage>255</fpage>&#x02013;<lpage>260</lpage>. <pub-id pub-id-type="doi">10.1126/science.aaa8415</pub-id><pub-id pub-id-type="pmid">26185243</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khaligh-Razavi</surname> <given-names>S.-M.</given-names></name> <name><surname>Kriegeskorte</surname> <given-names>N.</given-names></name></person-group> (<year>2014</year>). <article-title>Deep supervised, but not unsupervised, models may explain IT cortical representation</article-title>. <source>PLoS Comput. Biol.</source> <volume>10</volume>:<fpage>e1003915</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1003915</pub-id><pub-id pub-id-type="pmid">25375136</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Koepke</surname> <given-names>H.</given-names></name></person-group> (<year>2011</year>). <source>Why Python Rocks for Research.</source> <publisher-name>Hacker Monthly</publisher-name> <fpage>8</fpage>.</citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kriegeskorte</surname> <given-names>N.</given-names></name></person-group> (<year>2008</year>). <article-title>Representational similarity analysis - connecting the branches of systems neuroscience</article-title>. <source>Front. Syst. Neurosci.</source> <volume>2</volume>:<fpage>4</fpage>. <pub-id pub-id-type="doi">10.3389/neuro.06.004.2008</pub-id><pub-id pub-id-type="pmid">19104670</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kriegeskorte</surname> <given-names>N.</given-names></name> <name><surname>Golan</surname> <given-names>T.</given-names></name></person-group> (<year>2019</year>). <article-title>Neural network models and deep learning</article-title>. <source>Curr. Biol.</source> <volume>29</volume>, <fpage>R231</fpage>&#x02013;<lpage>R236</lpage>. <pub-id pub-id-type="doi">10.1016/j.cub.2019.02.034</pub-id><pub-id pub-id-type="pmid">30939301</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kriegeskorte</surname> <given-names>N.</given-names></name> <name><surname>Mur</surname> <given-names>M.</given-names></name> <name><surname>Ruff</surname> <given-names>D. A.</given-names></name> <name><surname>Kiani</surname> <given-names>R.</given-names></name> <name><surname>Bodurka</surname> <given-names>J.</given-names></name> <name><surname>Esteky</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2008</year>). <article-title>Matching categorical object representations in inferior temporal cortex of man and monkey</article-title>. <source>Neuron</source> <volume>60</volume>, <fpage>1126</fpage>&#x02013;<lpage>1141</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2008.10.043</pub-id><pub-id pub-id-type="pmid">19109916</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuzovkin</surname> <given-names>I.</given-names></name> <name><surname>Vicente</surname> <given-names>R.</given-names></name> <name><surname>Petton</surname> <given-names>M.</given-names></name> <name><surname>Lachaux</surname> <given-names>J.-P.</given-names></name> <name><surname>Baciu</surname> <given-names>M.</given-names></name> <name><surname>Kahane</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Activations of deep convolutional neural networks are aligned with gamma band activity of human visual cortex</article-title>. <source>Commun. Biol.</source> <volume>1</volume>:<fpage>107</fpage>. <pub-id pub-id-type="doi">10.1038/s42003-018-0110-y</pub-id><pub-id pub-id-type="pmid">30271987</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lawrence</surname> <given-names>S. J. D.</given-names></name> <name><surname>Formisano</surname> <given-names>E.</given-names></name> <name><surname>Muckli</surname> <given-names>L.</given-names></name> <name><surname>de Lange</surname> <given-names>F. P.</given-names></name></person-group> (<year>2019</year>). <article-title>Laminar fMRI: applications for cognitive neuroscience</article-title>. <source>NeuroImage</source> <volume>197</volume>, <fpage>785</fpage>&#x02013;<lpage>791</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2017.07.004</pub-id><pub-id pub-id-type="pmid">28687519</pub-id></citation></ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lu</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>C.</given-names></name> <name><surname>Chen</surname> <given-names>C.</given-names></name> <name><surname>Xue</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>Spatiotemporal neural pattern similarity supports episodic memory</article-title>. <source>Curr. Biol.</source> <volume>25</volume>, <fpage>780</fpage>&#x02013;<lpage>785</lpage>. <pub-id pub-id-type="doi">10.1016/j.cub.2015.01.055</pub-id><pub-id pub-id-type="pmid">25728695</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mahmoudi</surname> <given-names>A.</given-names></name> <name><surname>Takerkart</surname> <given-names>S.</given-names></name> <name><surname>Regragui</surname> <given-names>F.</given-names></name> <name><surname>Boussaoud</surname> <given-names>D.</given-names></name> <name><surname>Brovelli</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Multivoxel pattern analysis for fMRI data: a review</article-title>. <source>Comput. Math. Methods Med.</source> <volume>2012</volume>, <fpage>1</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1155/2012/961257</pub-id><pub-id pub-id-type="pmid">23401720</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Marr</surname> <given-names>D.</given-names></name></person-group> (<year>1982</year>). <source>Vision: A Computational Investigation into the Human Representation and Processing of Visual Information</source>. <publisher-loc>San Francisco, CA</publisher-loc>: <publisher-name>W. H. Freeman</publisher-name>.</citation></ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Muukkonen</surname> <given-names>I.</given-names></name> <name><surname>&#x000D6;lander</surname> <given-names>K.</given-names></name> <name><surname>Numminen</surname> <given-names>J.</given-names></name> <name><surname>Salmela</surname> <given-names>V. R.</given-names></name></person-group> (<year>2020</year>). <article-title>Spatio-temporal dynamics of face perception</article-title>. <source>NeuroImage</source> <volume>209</volume>:<fpage>116531</fpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2020.116531</pub-id><pub-id pub-id-type="pmid">31931156</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nili</surname> <given-names>H.</given-names></name> <name><surname>Wingfield</surname> <given-names>C.</given-names></name> <name><surname>Walther</surname> <given-names>A.</given-names></name> <name><surname>Su</surname> <given-names>L.</given-names></name> <name><surname>Marslen-Wilson</surname> <given-names>W.</given-names></name> <name><surname>Kriegeskorte</surname> <given-names>N.</given-names></name></person-group> (<year>2014</year>). <article-title>A toolbox for representational similarity analysis</article-title>. <source>PLoS Comput. Biol.</source> <volume>10</volume>:<fpage>e1003553</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pcbi.1003553</pub-id><pub-id pub-id-type="pmid">24743308</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Norman</surname> <given-names>K. A.</given-names></name> <name><surname>Polyn</surname> <given-names>S. M.</given-names></name> <name><surname>Detre</surname> <given-names>G. J.</given-names></name> <name><surname>Haxby</surname> <given-names>J. V.</given-names></name></person-group> (<year>2006</year>). <article-title>Beyond mind-reading: multi-voxel pattern analysis of fMRI data</article-title>. <source>Trends Cogn. Sci.</source> <volume>10</volume>, <fpage>424</fpage>&#x02013;<lpage>430</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2006.07.005</pub-id><pub-id pub-id-type="pmid">16899397</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pedregosa</surname> <given-names>F.</given-names></name> <name><surname>Varoquaux</surname> <given-names>G.</given-names></name> <name><surname>Gramfort</surname> <given-names>A.</given-names></name> <name><surname>Michel</surname> <given-names>V.</given-names></name> <name><surname>Thirion</surname> <given-names>B.</given-names></name> <name><surname>Grisel</surname> <given-names>O.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>Scikit-learn: machine learning in Python</article-title>. <source>J. Mach. Learn. Res.</source> <volume>12</volume>, <fpage>2825</fpage>&#x02013;<lpage>2830</lpage>.</citation></ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peirce</surname> <given-names>J. W.</given-names></name></person-group> (<year>2007</year>). <article-title>PsychoPy-psychophysics software in Python</article-title>. <source>J. Neurosci. Methods</source> <volume>162</volume>, <fpage>8</fpage>&#x02013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2006.11.017</pub-id><pub-id pub-id-type="pmid">17254636</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poldrack</surname> <given-names>R. A.</given-names></name></person-group> (<year>2012</year>). <article-title>The future of fMRI in cognitive neuroscience</article-title>. <source>NeuroImage</source> <volume>62</volume>, <fpage>1216</fpage>&#x02013;<lpage>1220</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.08.007</pub-id><pub-id pub-id-type="pmid">21856431</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ritchie</surname> <given-names>J. B.</given-names></name> <name><surname>Bracci</surname> <given-names>S.</given-names></name> <name><surname>Op de Beeck</surname> <given-names>H.</given-names></name></person-group> (<year>2017</year>). <article-title>Avoiding illusory effects in representational similarity analysis: what (not) to do with the diagonal</article-title>. <source>NeuroImage</source> <volume>148</volume>, <fpage>197</fpage>&#x02013;<lpage>200</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2016.12.079</pub-id><pub-id pub-id-type="pmid">28069538</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rosen</surname> <given-names>B. R.</given-names></name> <name><surname>Savoy</surname> <given-names>R. L.</given-names></name></person-group> (<year>2012</year>). <article-title>fMRI at 20: has it changed the world?</article-title> <source>NeuroImage</source> <volume>62</volume>, <fpage>1316</fpage>&#x02013;<lpage>1324</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2012.03.004</pub-id><pub-id pub-id-type="pmid">22433659</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salmela</surname> <given-names>V.</given-names></name> <name><surname>Salo</surname> <given-names>E.</given-names></name> <name><surname>Salmi</surname> <given-names>J.</given-names></name> <name><surname>Alho</surname> <given-names>K.</given-names></name></person-group> (<year>2016</year>). <article-title>Spatiotemporal dynamics of attention networks revealed by representational similarity analysis of EEG and fMRI</article-title>. <source>Cereb. Cortex</source> <volume>28</volume>, <fpage>549</fpage>&#x02013;<lpage>560</lpage>. <pub-id pub-id-type="doi">10.1093/cercor/bhw389</pub-id><pub-id pub-id-type="pmid">27999122</pub-id></citation></ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sanner</surname> <given-names>M. F.</given-names></name></person-group> (<year>1999</year>). <article-title>Python: a programming language for software integration and development</article-title>. <source>J. Mol. Graph. Model.</source> <volume>17</volume>, <fpage>57</fpage>&#x02013;<lpage>61</lpage>. <pub-id pub-id-type="pmid">10660911</pub-id></citation></ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stolier</surname> <given-names>R. M.</given-names></name> <name><surname>Freeman</surname> <given-names>J. B.</given-names></name></person-group> (<year>2016</year>). <article-title>Neural pattern similarity reveals the inherent intersection of social categories</article-title>. <source>Nat. Neurosci.</source> <volume>19</volume>, <fpage>795</fpage>&#x02013;<lpage>797</lpage>. <pub-id pub-id-type="doi">10.1038/nn.4296</pub-id><pub-id pub-id-type="pmid">27135216</pub-id></citation></ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sui</surname> <given-names>J.</given-names></name> <name><surname>Adali</surname> <given-names>T.</given-names></name> <name><surname>Yu</surname> <given-names>Q.</given-names></name> <name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>Calhoun</surname> <given-names>V. D.</given-names></name></person-group> (<year>2012</year>). <article-title>A review of multivariate methods for multi-modal fusion of brain imaging data</article-title>. <source>J. Neurosci. Methods</source> <volume>204</volume>, <fpage>68</fpage>&#x02013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2011.10.031</pub-id><pub-id pub-id-type="pmid">22108139</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Urgen</surname> <given-names>B. A.</given-names></name> <name><surname>Pehlivan</surname> <given-names>S.</given-names></name> <name><surname>Saygin</surname> <given-names>A. P.</given-names></name></person-group> (<year>2019</year>). <article-title>Distinct representations in occipito-temporal, parietal, and premotor cortex during action perception revealed by fMRI and computational modeling</article-title>. <source>Neuropsychologia</source> <volume>127</volume>, <fpage>35</fpage>&#x02013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuropsychologia.2019.02.006</pub-id><pub-id pub-id-type="pmid">30772426</pub-id></citation></ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>van der Walt</surname> <given-names>S.</given-names></name> <name><surname>Colbert</surname> <given-names>S. C.</given-names></name> <name><surname>Varoquaux</surname> <given-names>G.</given-names></name></person-group> (<year>2011</year>). <article-title>The NumPy array: a structure for efficient numerical computation</article-title>. <source>Comput. Sci. Eng.</source> <volume>13</volume>, <fpage>22</fpage>&#x02013;<lpage>30</lpage>. <pub-id pub-id-type="doi">10.1109/MCSE.2011.37</pub-id></citation></ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Virtanen</surname> <given-names>P.</given-names></name> <name><surname>Gommers</surname> <given-names>R.</given-names></name> <name><surname>Oliphant</surname> <given-names>T. E.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>SciPy 1.0: fundamental algorithms for scientific computing in Python</article-title>. <source>Nat. Methods</source> <volume>17</volume>, <fpage>261</fpage>&#x02013;<lpage>272</lpage>. <pub-id pub-id-type="doi">10.1038/s41592-019-0686-2</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Xu</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Zeng</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Ling</surname> <given-names>Z.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Representational similarity analysis reveals task-dependent semantic influence of the visual word form area</article-title>. <source>Sci. Rep.</source> <volume>8</volume>:<fpage>3047</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-018-21062-0</pub-id><pub-id pub-id-type="pmid">29445098</pub-id></citation></ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xie</surname> <given-names>S.</given-names></name> <name><surname>Kaiser</surname> <given-names>D.</given-names></name> <name><surname>Cichy</surname> <given-names>R. M.</given-names></name></person-group> (<year>2020</year>). <article-title>Visual imagery and perception share representations in the alpha frequency band</article-title>. <source>Curr. Biol.</source> <volume>30</volume>, <fpage>2621</fpage>&#x02013;<lpage>2627</lpage>.e5. <pub-id pub-id-type="doi">10.1016/j.cub.2020.04.074</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xue</surname> <given-names>G.</given-names></name> <name><surname>Dong</surname> <given-names>Q.</given-names></name> <name><surname>Chen</surname> <given-names>C.</given-names></name> <name><surname>Lu</surname> <given-names>Z.</given-names></name> <name><surname>Mumford</surname> <given-names>J. A.</given-names></name> <name><surname>Poldrack</surname> <given-names>R. A.</given-names></name></person-group> (<year>2010</year>). <article-title>Greater neural pattern similarity across repetitions is associated with better memory</article-title>. <source>Science</source> <volume>330</volume>, <fpage>97</fpage>&#x02013;<lpage>101</lpage>. <pub-id pub-id-type="doi">10.1126/science.1193125</pub-id><pub-id pub-id-type="pmid">20829453</pub-id></citation></ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yamins</surname> <given-names>D. L. K.</given-names></name> <name><surname>Hong</surname> <given-names>H.</given-names></name> <name><surname>Cadieu</surname> <given-names>C. F.</given-names></name> <name><surname>Solomon</surname> <given-names>E. A.</given-names></name> <name><surname>Seibert</surname> <given-names>D.</given-names></name> <name><surname>DiCarlo</surname> <given-names>J. J.</given-names></name></person-group> (<year>2014</year>). <article-title>Performance-optimized hierarchical models predict neural responses in higher visual cortex</article-title>. <source>Proc. Natl. Acad. Sci. U.S.A.</source> <volume>111</volume>, <fpage>8619</fpage>&#x02013;<lpage>8624</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.1403112111</pub-id><pub-id pub-id-type="pmid">24812127</pub-id></citation></ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname> <given-names>C.</given-names></name> <name><surname>Su</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Xu</surname> <given-names>T.</given-names></name> <name><surname>Yin</surname> <given-names>D.</given-names></name> <name><surname>Fan</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Multivariate neural representations of value during reward anticipation and consummation in the human orbitofrontal cortex</article-title>. <source>Sci. Rep.</source> <volume>6</volume>:<fpage>29079</fpage>. <pub-id pub-id-type="doi">10.1038/srep29079</pub-id><pub-id pub-id-type="pmid">27378417</pub-id></citation></ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yokoi</surname> <given-names>A.</given-names></name> <name><surname>Diedrichsen</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Neural organization of hierarchical motor sequence representations in the human neocortex</article-title>. <source>Neuron</source> <volume>103</volume>, <fpage>1178</fpage>&#x02013;<lpage>1190</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2019.06.017</pub-id><pub-id pub-id-type="pmid">31345643</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> This work was supported by the National Social Science Foundation of China (17ZDA323), the Shanghai Committee of Science and Technology (19ZR1416700), and the Hundred Top Talents Program from Sun Yat-sen University.</p>
</fn>
</fn-group>
</back>
</article>