<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2019.00045</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>How Does the Degree of Valence Influence Affective Auditory P300-Based BCIs?</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Onishi</surname> <given-names>Akinari</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/370075/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Nakagawa</surname> <given-names>Seiji</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/211227/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Center for Frontier Medical Engineering, Chiba University</institution>, <addr-line>Chiba</addr-line>, <country>Japan</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Medical Engineering, Graduate School of Engineering, Chiba University</institution>, <addr-line>Chiba</addr-line>, <country>Japan</country></aff>
<aff id="aff3"><sup>3</sup><institution>University Hospital Med-Tech Link Center, Chiba University</institution>, <addr-line>Chiba</addr-line>, <country>Japan</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Gerwin Schalk, Wadsworth Center, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Fabien Lotte, Institut National de Recherche en Informatique et en Automatique (INRIA), France; Dan Zhang, Tsinghua University, China</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Akinari Onishi <email>a-onishi&#x00040;chiba-u.jp</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Neuroprosthetics, a section of the journal Frontiers in Neuroscience</p></fn></author-notes>
<pub-date pub-type="epub">
<day>19</day>
<month>02</month>
<year>2019</year>
</pub-date>
<pub-date pub-type="collection">
<year>2019</year>
</pub-date>
<volume>13</volume>
<elocation-id>45</elocation-id>
<history>
<date date-type="received">
<day>01</day>
<month>08</month>
<year>2018</year>
</date>
<date date-type="accepted">
<day>17</day>
<month>01</month>
<year>2019</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2019 Onishi and Nakagawa.</copyright-statement>
<copyright-year>2019</copyright-year>
<copyright-holder>Onishi and Nakagawa</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>A brain-computer interface (BCI) translates brain signals into commands for the control of devices and for communication. BCIs enable persons with disabilities to communicate externally. Positive and negative affective sounds have been introduced to P300-based BCIs; however, how the degree of valence (e.g., very positive or positive) influences the BCI has not been investigated. To further examine the influence of affective sounds in P300-based BCIs, we applied sounds with five degrees of valence to the P300-based BCI. The sound valence ranged from very negative to very positive, as determined by Scheffe&#x00027;s method. The effect of sound valence on the BCI was evaluated by waveform analyses, followed by the evaluation of offline stimulus-wise classification accuracy. As a result, the late component of P300 showed significantly higher point-biserial correlation coefficients in response to very positive and very negative sounds than in response to the other sounds. The offline stimulus-wise classification accuracy was estimated from a region-of-interest. The analysis showed that the very negative sound achieved the highest accuracy and the very positive sound achieved the second highest accuracy, suggesting that the very positive sound and the very negative sound may be required to improve the accuracy.</p></abstract>
<kwd-group>
<kwd>BCI</kwd>
<kwd>BMI</kwd>
<kwd>P300</kwd>
<kwd>EEG</kwd>
<kwd>affective stimulus</kwd>
</kwd-group>
<contract-sponsor id="cn001">Japan Society for the Promotion of Science<named-content content-type="fundref-id">10.13039/501100001691</named-content></contract-sponsor>
<counts>
<fig-count count="7"/>
<table-count count="2"/>
<equation-count count="2"/>
<ref-count count="41"/>
<page-count count="8"/>
<word-count count="6249"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>A brain-computer interface (BCI), also referred to as a brain-machine interface (BMI), translates human brain signals into commands that can be used to control assistive devices or communicate externally with others (Wolpaw et al., <xref ref-type="bibr" rid="B37">2000</xref>, <xref ref-type="bibr" rid="B38">2002</xref>). For instance, BCIs are used to help persons with disabilities interact with the external world (Birbaumer and Cohen, <xref ref-type="bibr" rid="B1">2007</xref>). BCIs decode brain signals obtained by invasive measurements such as electrocorticography or by non-invasive measurements such as scalp electroencephalography (EEG) (Pfurtscheller et al., <xref ref-type="bibr" rid="B26">2008</xref>). One of the most successful BCIs utilizes EEG signals in response to stimuli &#x02013; i.e., event-related potentials (ERPs). In particular, when a subject discriminates a rarely encountered stimulus from a frequently encountered stimulus, the ERP exhibits a positive peak at around 300 ms after stimulus onset; this peak is named the P300 component. ERPs with P300 have been applied to BCIs, and such a system is called a P300-based BCI (Farwell and Donchin, <xref ref-type="bibr" rid="B6">1988</xref>).</p>
<p>Early studies of the P300-based BCI commonly used visual stimuli. For instance, Farwell and Donchin (<xref ref-type="bibr" rid="B6">1988</xref>) proposed a visual P300-based BCI speller that intensified letters in a 6 &#x000D7; 6 matrix by row or by column (row-column paradigm). Townsend et al. (<xref ref-type="bibr" rid="B34">2010</xref>) proposed an improved P300 speller that highlighted letters in randomized flashing patterns derived from a checkerboard, which was called the checkerboard paradigm. Ikegami et al. (<xref ref-type="bibr" rid="B14">2014</xref>) developed a region-based two-step P300 speller that could spell Japanese Hiragana syllables, taking advantage of a green/blue flicker and region selection. In addition, facial images (e.g., famous faces and smiling faces) were applied to visual P300-based BCIs, which markedly improved performance (Kaufmann et al., <xref ref-type="bibr" rid="B18">2011</xref>; Jin et al., <xref ref-type="bibr" rid="B15">2012</xref>, <xref ref-type="bibr" rid="B16">2014</xref>). Although visual P300-based BCIs have been improved and have been shown to be effective in clinical studies, they still depend on the condition of the eyes and can be affected by eye closure, limited eye gaze, or limited sight. Therefore, it is important that other sensory modalities be studied and improved upon.</p>
<p>In addition to visual stimuli, other modalities, for example auditory stimuli, have been studied. In an early study of auditory P300-based BCIs, auditory stimuli, such as the sounds of the words &#x0201C;yes,&#x0201D; &#x0201C;no,&#x0201D; &#x0201C;pass,&#x0201D; and &#x0201C;end,&#x0201D; were assessed in a BCI developed by Sellers and Donchin (<xref ref-type="bibr" rid="B30">2006</xref>). Klobassa et al. (<xref ref-type="bibr" rid="B19">2009</xref>) examined the use of bell, bass, ring, thud, chord, and buzz sounds. Moreover, by pairing numbers with letters using a visual support matrix, Furdea et al. (<xref ref-type="bibr" rid="B8">2009</xref>) succeeded in spelling letters by counting audibly pronounced numbers. In addition, beep sounds were used as auditory stimuli, and the effects of pitch, duration, and sound source direction were assessed by Halder et al. (<xref ref-type="bibr" rid="B10">2010</xref>). H&#x000F6;hne et al. (<xref ref-type="bibr" rid="B12">2010</xref>) developed a two-dimensional BCI that varied the pitch and the location of the sound source. Although the auditory BCI is advantageous because it is independent of eye gaze, auditory BCIs have shown worse performance than visual BCIs in several studies (Sellers and Donchin, <xref ref-type="bibr" rid="B30">2006</xref>; Wang et al., <xref ref-type="bibr" rid="B36">2015</xref>), and it is clear that improvements to the auditory BCI are required. Thus, we focused on improving auditory stimuli for BCIs in the current study.</p>
<p>The auditory P300-based BCI can be improved by using sophisticated stimuli. Schreuder et al. (<xref ref-type="bibr" rid="B29">2010</xref>) demonstrated that presenting sounds from five speakers in different locations resulted in better BCI performance than that obtained using a single speaker. Additionally, H&#x000F6;hne et al. (<xref ref-type="bibr" rid="B11">2012</xref>) evaluated spoken and sung syllables as natural auditory stimuli and found that, compared with artificial tones, the natural stimuli improved the users&#x00027; ergonomic ratings and the classification performance of the BCI. Simon et al. (<xref ref-type="bibr" rid="B31">2014</xref>) also used natural animal sounds as stimuli. Guo et al. (<xref ref-type="bibr" rid="B9">2010</xref>) employed sounds with spatial and speaker-gender properties together with an active mental task of discriminating sound properties, which enhanced the late positive component (LPC) and N2. Recently, Huang et al. (<xref ref-type="bibr" rid="B13">2018</xref>) explored the use of dripping sounds and found that the BCI classification accuracy was higher than when beeping sounds were used. We previously applied sounds with two degrees of valence (positive and negative) to the auditory P300-based BCI (Onishi et al., <xref ref-type="bibr" rid="B25">2017</xref>), and confirmed the enhancement of the late component of P300 in response to those stimuli. However, how degrees of valence (e.g., very positive or positive) influence the P300-based BCI remains unknown.</p>
<p>This study aimed to clarify how the degree of valence of sounds influences auditory P300-based BCIs. Five sounds with different degrees of valence were applied to the P300-based BCI: very negative, negative, neutral, positive, and very positive. We hypothesized that the valence must exceed a certain degree to affect the BCI, because the amplitude of P300 is not proportional to the strength of emotion (Steinbeis et al., <xref ref-type="bibr" rid="B32">2006</xref>). Since the auditory P300-based BCI requires a large amount of training data, and presenting each sound in a separate condition would cause fatigue, we applied these sounds to the BCI together. The influence of the sound valences was then analyzed offline with cross-validation. To confirm the valence of those sounds, Scheffe&#x00027;s method of paired comparison was applied. We also performed a waveform analysis using the point-biserial correlation coefficient to reveal the ERP components that contributed to the classification. Based on the waveform analysis, a region-of-interest (ROI) was identified, and a specially designed cross-validation was then performed to estimate the offline stimulus-wise classification accuracy. The online performance of the BCI and its preliminary feature extraction process have previously been demonstrated; however, the effect of these affective sounds was not evaluated there (Onishi and Nakagawa, <xref ref-type="bibr" rid="B24">2018</xref>). Therefore, in the current study, the influence of affective auditory stimuli on the BCI was evaluated based on the waveform analysis and the stimulus-wise classification accuracy. In contrast to our previous study (Onishi et al., <xref ref-type="bibr" rid="B25">2017</xref>), which examined whether affective sounds are effective for the BCI, the current study aimed to clarify how the degree of valence influences the BCI. The naming of the late component seen in our previous study (Onishi et al., <xref ref-type="bibr" rid="B25">2017</xref>) is not consistent in the literature; the terms P300, P3b, P600, late positive component, and late positive complex have all been used (Finnigan et al., <xref ref-type="bibr" rid="B7">2002</xref>). In the current study, we use the term late component of P300 to refer to those ERP components.</p></sec>
<sec sec-type="methods" id="s2">
<title>2. Methods</title>
<sec>
<title>2.1. Subjects</title>
<p>Eighteen healthy subjects aged 20.6 &#x000B1; 0.8 years (9 females and 9 males) participated in this study. All participants were right-handed as assessed by the Japanese version of the Edinburgh Handedness Inventory (Oldfield, <xref ref-type="bibr" rid="B23">1971</xref>). All subjects gave written informed consent before the experiment. This experiment was approved by the Internal Ethics Committee at Chiba University and conducted in accordance with the approved guidelines.</p></sec>
<sec>
<title>2.2. Stimuli</title>
<p>Five cat sounds, representing five different valences (very negative, negative, neutral, positive, and very positive), were prepared. Each sound was cut to 500 ms, and its onset and offset were linearly faded in and out. The root-mean-square amplitude of each sound was equalized. Detailed conditions of the sound processing are presented in <xref ref-type="table" rid="T1">Table 1</xref>. The sounds were presented through ATH-M20x headphones (Audio-Technica Co., Japan) via a UCA222 audio interface (Behringer GmbH, Germany).</p>
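<p>For illustration, the stimulus preparation described above can be sketched as follows in Python/NumPy. The fade duration and the target root-mean-square level used here are assumptions for the example only and are not values reported in the study.</p>
<preformat>
import numpy as np

def prepare_stimulus(sound, fs, duration_s=0.5, fade_s=0.01, target_rms=0.1):
    """Trim a sound to 500 ms, apply linear fades, and equalize its RMS.

    fade_s and target_rms are illustrative values, not those of the study.
    """
    n = int(duration_s * fs)
    clip = sound[:n].astype(float)        # cut to 500 ms
    n_fade = int(fade_s * fs)
    ramp = np.linspace(0.0, 1.0, n_fade)
    clip[:n_fade] *= ramp                 # linear fade-in at the onset
    clip[-n_fade:] *= ramp[::-1]          # linear fade-out at the offset
    rms = np.sqrt(np.mean(clip ** 2))
    return clip * (target_rms / rms)      # equalize the root-mean-square
</preformat>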
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Properties of affective sounds.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>No</bold>.</th>
<th valign="top" align="left"><bold>Valence label</bold></th>
<th valign="top" align="left"><bold>Sound name</bold></th>
<th valign="top" align="left"><bold>Source</bold></th>
<th valign="top" align="center"><bold>Trimming sample</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1</td>
<td valign="top" align="left">Very negative</td>
<td valign="top" align="left">cat-fight</td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="http://taira-komori.jpn.org/">http://taira-komori.jpn.org/</ext-link></td>
<td valign="top" align="center">64,910</td>
</tr>
<tr>
<td valign="top" align="left">2</td>
<td valign="top" align="left">Negative</td>
<td valign="top" align="left">Cat_Meowing_2- Mr_Smith-780889994</td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="http://soundbible.com/">http://soundbible.com/</ext-link></td>
<td valign="top" align="center">12,000</td>
</tr>
<tr>
<td valign="top" align="left">3</td>
<td valign="top" align="left">Neutral</td>
<td valign="top" align="left">cat6</td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="http://pocket-se.info/">http://pocket-se.info/</ext-link></td>
<td valign="top" align="center">2,769</td>
</tr>
<tr>
<td valign="top" align="left">4</td>
<td valign="top" align="left">Positive</td>
<td valign="top" align="left">catvoice</td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="http://taira-komori.jpn.org/">http://taira-komori.jpn.org/</ext-link></td>
<td valign="top" align="center">882</td>
</tr>
<tr>
<td valign="top" align="left">5</td>
<td valign="top" align="left">Very positive</td>
<td valign="top" align="left">kitty</td>
<td valign="top" align="left"><ext-link ext-link-type="uri" xlink:href="http://pocket-se.info/">http://pocket-se.info/</ext-link></td>
<td valign="top" align="center">112</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Degrees of valence for these sounds were rated and verified using Ura&#x00027;s variation of Scheffe&#x00027;s method (Scheff&#x000E9;, <xref ref-type="bibr" rid="B28">1952</xref>; Nagasawa, <xref ref-type="bibr" rid="B22">2002</xref>). We used this method instead of a visual analog scale because the paired comparison of sounds provides more reliable ratings. Specifically, a computer first randomly selected two sounds (sounds A and B) out of the five. Second, the subject listened to sound A and then sound B, each only once. After listening to both sounds, the subject rated which sounded more positive on a scale from &#x02212;3 to &#x0002B;3 (&#x0002B;3: B is very positive, 0: neutral, &#x02212;3: B is very negative). Participants rated a total of 20 sound pairs. The degrees of valence were statistically tested using an analysis of variance (ANOVA) modeled for Ura&#x00027;s variation of Scheffe&#x00027;s method, which contains factors for the average of the ratings, the individual differences in the ratings, the combination effect, the average order effect, and the individual differences in the order effect. Note that the main objective is to reveal the main effect of the averaged ratings; the other factors are optional. See Nagasawa (<xref ref-type="bibr" rid="B22">2002</xref>) for further detail. The analysis was followed by a comparison of the differences in ratings between each pair using a 99% confidence interval.</p>
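<p>As a rough illustration of how the paired-comparison ratings map onto a per-sound preference score, a simplified sketch is given below. It computes only an average preference per sound; the full ANOVA decomposition of Ura&#x00027;s variation (individual differences, combination, and order effects) is not reproduced, and the normalization shown is a simplifying assumption rather than the exact scale-value estimator.</p>
<preformat>
import numpy as np

def average_preference(pairs, ratings, n_sounds=5):
    """Simplified per-sound preference from paired-comparison ratings.

    pairs   : sequence of (a, b) sound indices presented in that order
    ratings : integers from -3 to +3; a positive value means B sounded
              more positive than A
    The division by the number of comparisons per sound is a simplifying
    assumption, not the exact estimator of Ura's variation.
    """
    score = np.zeros(n_sounds)
    count = np.zeros(n_sounds)
    for (a, b), r in zip(pairs, ratings):
        score[b] += r      # credit B when it was judged more positive
        score[a] -= r
        count[a] += 1
        count[b] += 1
    return score / count
</preformat>
</sec>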
<sec>
<title>2.3. EEG Recording</title>
<p>EEG signals were recorded using an MEB-2312 system (Nihon-Kohden, Japan). EEG electrodes were placed at C3, Cz, C4, P3, Pz, P4, O1, and O2 (Onishi et al., <xref ref-type="bibr" rid="B25">2017</xref>) according to the international 10&#x02013;20 system. The ground electrode was placed on the forehead, and the reference electrodes were placed on the mastoids. A hardware bandpass filter (0.1&#x02013;50 Hz) and a notch filter (50 Hz) were applied by the EEG system. The EEG was digitized using a USB-6341 (National Instruments Inc., USA). The sampling rate was 256 Hz. Data acquisition, stimulation, and signal processing were performed using MATLAB (Mathworks Inc., USA).</p>
<p>The experiment consisted of a training part and a testing part (see <xref ref-type="fig" rid="F1">Figure 1</xref>). Each part contained five sessions, and each session comprised five runs. At the beginning of each run, participants were instructed to silently count the number of occurrences of a particular target sound, which was emitted in sequence with the other non-target sounds (see <xref ref-type="fig" rid="F2">Figure 2</xref>). The five sounds were then presented in pseudo-random order, and each stimulus was repeated 10 times. The stimulus onset asynchrony was 500 ms. EEG signals were recorded during the task. In the testing sessions, outputs were estimated by analyzing the EEG signals, and the output was fed back to the subject. For calculating the feedback, smoothing (4-sample window), a Savitzky-Golay filter (5th order, 81-sample window), and downsampling (to 64 Hz) were applied before classification by stepwise linear discriminant analysis (SWLDA) (Krusienski et al., <xref ref-type="bibr" rid="B21">2006</xref>). The online classification accuracy was 84.1% (Onishi and Nakagawa, <xref ref-type="bibr" rid="B24">2018</xref>). Between sessions, subjects rested for a few minutes (depending on tiredness). Each sound was selected as the target once in every session. To avoid the effect of fatigue, we applied all five sounds to the BCI together, and the effects of valence were then analyzed offline.</p>
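<p>A minimal sketch of the online feedback preprocessing chain (moving-average smoothing, Savitzky-Golay filtering, and downsampling to 64 Hz) is shown below, assuming epochs stored as (channels &#x000D7; samples) arrays at 256 Hz and using SciPy; the SWLDA training step itself is omitted.</p>
<preformat>
import numpy as np
from scipy.signal import savgol_filter

FS = 256        # EEG sampling rate (Hz)
FS_DOWN = 64    # rate after downsampling (Hz)

def preprocess_feedback(epoch):
    """Online feedback preprocessing for one epoch of shape (channels, samples)."""
    # moving-average smoothing with a 4-sample window
    kernel = np.ones(4) / 4.0
    smoothed = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="same"), 1, epoch)
    # Savitzky-Golay filter, 5th order, 81-sample window
    filtered = savgol_filter(smoothed, window_length=81, polyorder=5, axis=1)
    # downsample from 256 Hz to 64 Hz by keeping every 4th sample
    step = FS // FS_DOWN
    return filtered[:, ::step]
</preformat>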
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Structure of the experiment. EEG data were recorded during the training and testing parts. Each part consisted of five sessions. In every session, five runs were conducted. Participants were asked to select a sound during a run (see <xref ref-type="fig" rid="F2">Figure 2</xref>).</p></caption>
<graphic xlink:href="fnins-13-00045-g0001.tif"/>
</fig>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Procedure and the mental task used during a run. The BCI system first asks the participant to count the appearance of a sound (e.g., 2: negative cat sound). All five sounds are then presented in a pseudo-random order. Participants are required to count silently when the designated sound is emitted. At the end of the run, the system estimates which sound was counted based on the EEG signal recorded during the task. The estimated output was fed back to the subject only in the test runs.</p></caption>
<graphic xlink:href="fnins-13-00045-g0002.tif"/>
</fig></sec>
<sec>
<title>2.4. Waveform Analysis</title>
<p>Averaged ERP waveforms for each sound were estimated from the ERP data recorded during the training and testing runs in which that sound was set as the target. The non-target averaged waveforms for each sound were also estimated. All data were preprocessed by removing the baseline estimated from the &#x02013;100 to 0 ms interval; a software bandpass filter (Butterworth, 6th order, 0.1&#x02013;20 Hz) was also applied. Epochs that exceeded 80 &#x003BC;V were removed.</p>
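<p>A minimal sketch of this preprocessing is given below, assuming single epochs stored as (channels &#x000D7; samples) arrays at 256 Hz; the use of zero-phase (forward-backward) filtering is an assumption, as the filtering direction is not stated in the text.</p>
<preformat>
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # sampling rate (Hz)

# Butterworth bandpass filter, 6th order, 0.1-20 Hz
b, a = butter(6, [0.1, 20.0], btype="bandpass", fs=FS)

def preprocess_epoch(epoch, pre_samples):
    """Bandpass-filter and baseline-correct one epoch.

    epoch       : (channels, samples) array including the pre-stimulus interval
    pre_samples : number of samples in the -100 to 0 ms baseline window
    """
    filtered = filtfilt(b, a, epoch, axis=1)
    baseline = filtered[:, :pre_samples].mean(axis=1, keepdims=True)
    corrected = filtered - baseline
    # keep the epoch only if no amplitude exceeds 80 microvolts
    keep = not np.any(np.greater(np.abs(corrected), 80.0))
    return corrected, keep
</preformat>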
<p>In order to reveal which ERP components contributed to the EEG classification, the point-biserial correlation coefficients (<italic>r</italic><sup>2</sup> values) were estimated (Tate, <xref ref-type="bibr" rid="B33">1954</xref>; Blankertz et al., <xref ref-type="bibr" rid="B2">2011</xref>; Onishi et al., <xref ref-type="bibr" rid="B25">2017</xref>). The point-biserial correlation coefficient is defined as follows:</p>
<disp-formula id="E1"><label>(1)</label><mml:math id="M1"><mml:mi>r</mml:mi><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msqrt><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mi>N</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:msqrt></mml:mrow><mml:mrow><mml:msub><mml:mi>N</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>+</mml:mo><mml:msub><mml:mi>N</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow></mml:mfrac><mml:mfrac><mml:mrow><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mn>2</mml:mn></mml:msub><mml:mo>&#x02212;</mml:mo><mml:msub><mml:mi>&#x003BC;</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:mrow><mml:mi>&#x003C3;</mml:mi></mml:mfrac><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>where <italic>N</italic><sub>2</sub> and <italic>N</italic><sub>1</sub> indicate the number of trials in the target (2) and non-target (1) classes, &#x003BC;<sub>2</sub> and &#x003BC;<sub>1</sub> are the mean values of the target and non-target classes, and &#x003C3; denotes the standard deviation over all trials for a given sample in a channel. The point-biserial correlation coefficient is equivalent to the Pearson correlation between the amplitude of the ERP (continuous measured variable) and the class (dichotomous variable). The squared coefficient (<italic>r</italic><sup>2</sup>) was used for the waveform analysis; it increases as the difference between the class means becomes larger and the standard deviation becomes smaller. We used this method instead of traditional ERP component statistics because it provides rich spatio-temporal information. The <italic>r</italic><sup>2</sup> values were evaluated using a test of no correlation, and <italic>p</italic> values were corrected using Bonferroni&#x00027;s method. If an <italic>r</italic><sup>2</sup> value was not significant, it was set to zero.</p>
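<p>Equation (1) can be computed directly per channel and per sample, as in the following sketch; the array shapes are assumptions.</p>
<preformat>
import numpy as np

def signed_r_squared(target, nontarget):
    """Point-biserial correlation of Eq. (1), squared, per channel and sample.

    target    : (N2, channels, samples) single-trial target epochs
    nontarget : (N1, channels, samples) single-trial non-target epochs
    """
    n2, n1 = target.shape[0], nontarget.shape[0]
    mu2 = target.mean(axis=0)
    mu1 = nontarget.mean(axis=0)
    pooled = np.concatenate([target, nontarget], axis=0)
    sigma = pooled.std(axis=0)        # standard deviation over all trials
    r = (np.sqrt(n2 * n1) / (n2 + n1)) * (mu2 - mu1) / sigma
    return r ** 2
</preformat>
</sec>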
<sec>
<title>2.5. Classification</title>
<p>The offline stimulus-wise classification accuracy (Onishi et al., <xref ref-type="bibr" rid="B25">2017</xref>) was computed using stimulus-wise leave-one-out cross-validation (LOOCV). First, the training and testing runs in which a given sound was designated as the target were selected from all 50 runs; this procedure yielded 10 runs in total. Second, one run was selected as the testing run and the others as training runs. Third, a supervised classifier was trained on the ERP data from the training runs, and the testing run was then classified as correct or incorrect. The second and third steps were repeated for all 10 runs. The classification accuracy for a sound was calculated as the percentage of correct answers over the 10 runs. We employed this LOOCV to evaluate the influence of each stimulus given the limited amount of data.</p>
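<p>This stimulus-wise LOOCV can be sketched as follows. The data layout and the classifier interface are assumptions; classify_run is a hypothetical helper implementing the decision rule of Equation (2) and is sketched after that equation below, and any linear discriminant could stand in for the SWLDA used in the study.</p>
<preformat>
import numpy as np

def stimuluswise_loocv(runs, make_classifier):
    """Leave-one-run-out CV over the 10 runs in which one sound was the target.

    runs            : list of 10 dicts with single-trial features "X",
                      target/non-target labels "y", per-stimulus features
                      "X_by_stim", and the true target index "target"
    make_classifier : factory returning an object with fit(X, y) and
                      decision_function(X), standing in for SWLDA
    Returns the fraction of runs whose target sound was identified correctly.
    """
    correct = 0
    for i, test_run in enumerate(runs):
        train_runs = [r for j, r in enumerate(runs) if j != i]
        X_train = np.concatenate([r["X"] for r in train_runs])
        y_train = np.concatenate([r["y"] for r in train_runs])
        clf = make_classifier()
        clf.fit(X_train, y_train)
        predicted = classify_run(clf, test_run)   # Eq. (2), sketched below
        correct += int(predicted == test_run["target"])
    return correct / len(runs)
</preformat>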
<p>We applied baseline removal estimated from the &#x02013;100 to 0 ms interval and a software bandpass filter (Butterworth, 6th order, 0.1&#x02013;20 Hz). The data were not downsampled, so that the results of the waveform analysis and the classification could be compared, and were vectorized before classification. We used the SWLDA classifier (Krusienski et al., <xref ref-type="bibr" rid="B21">2006</xref>). In summary, given the weight vector <bold>w</bold> of the SWLDA and the preprocessed EEG data <bold>x</bold><sub><italic>s, i</italic></sub> for the <italic>i</italic>-th stimulus in the <italic>s</italic>-th stimulus sequence (repetition), the output &#x000EE; can be estimated as</p>
<disp-formula id="E2"><label>(2)</label><mml:math id="M2"><mml:mover accent='true'><mml:mi>i</mml:mi><mml:mo>&#x0005E;</mml:mo></mml:mover><mml:mo>=</mml:mo><mml:munder><mml:mrow><mml:mi>arg</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mi>max</mml:mi></mml:mrow><mml:mrow><mml:mi>i</mml:mi><mml:mo>&#x02208;</mml:mo><mml:mi>I</mml:mi></mml:mrow></mml:munder><mml:mstyle displaystyle='true'><mml:munderover><mml:mo>&#x02211;</mml:mo><mml:mrow><mml:mi>s</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mi>S</mml:mi></mml:munderover><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>w</mml:mi></mml:mstyle></mml:mstyle><mml:mo>&#x000B7;</mml:mo><mml:msub><mml:mstyle mathvariant='bold' mathsize='normal'><mml:mi>x</mml:mi></mml:mstyle><mml:mrow><mml:mi>s</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>,</mml:mo></mml:math></disp-formula>
<p>where <italic>S</italic> &#x02208; {1, 2, &#x02026;, 10} denotes the number of stimulus sequences (repetitions) used during the offline analysis, and <italic>I</italic> = {1, 2, &#x02026;, 5} is the set of stimulus numbers (see <xref ref-type="table" rid="T1">Table 1</xref>). The offline stimulus-wise classification accuracy was calculated as <italic>&#x00023;correct runs</italic>/<italic>&#x00023;total runs</italic> for a fixed <italic>S</italic>. The thresholds of the stepwise method were <italic>p</italic><sub>in</sub> &#x0003D; 0.1 and <italic>p</italic><sub>out</sub> &#x0003D; 0.15. Training data that exceeded 80 &#x003BC;V were removed. Testing data that exceeded 80 &#x003BC;V in amplitude were set to zero to reduce the influence of outliers. The stimulus-wise classification accuracy was statistically tested by a two-way repeated-measures ANOVA, with the type of stimulus and the number of stimulus sequences (repetitions) as factors.</p>
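<p>The decision rule of Equation (2) sums the classifier scores over the first <italic>S</italic> stimulus sequences and selects the stimulus with the largest summed score. A sketch is given below; the feature layout and the classify_run helper are the same assumptions used in the cross-validation sketch above.</p>
<preformat>
import numpy as np

def classify_run(clf, run, n_sequences=10):
    """Decision rule of Eq. (2): argmax over stimuli of summed classifier scores.

    run["X_by_stim"] is assumed to hold preprocessed feature vectors with
    shape (n_sequences, n_stimuli, n_features) for this run.
    """
    X = run["X_by_stim"][:n_sequences]       # use the first S sequences
    n_seq, n_stim, n_feat = X.shape
    scores = clf.decision_function(X.reshape(-1, n_feat)).reshape(n_seq, n_stim)
    summed = scores.sum(axis=0)              # sum of w . x over the sequences
    return int(np.argmax(summed))            # estimated target stimulus index
</preformat>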
<p>To identify how the components seen in the waveform analysis contributed to the classification, we calculated the stimulus-wise classification accuracy within an ROI. ROI analysis is widely used in fMRI studies to clarify which brain areas are activated (Brett et al., <xref ref-type="bibr" rid="B3">2002</xref>; Poldrack, <xref ref-type="bibr" rid="B27">2007</xref>). Since ERP studies focus on components distributed over channels and time, a spatio-temporal ROI was applied in this study. Specifically, the ROI was set to C3, Cz, and C4 in the 400&#x02013;700 ms window. The effect of the ROI was identical across the compared conditions. A similar analysis has been applied in P300-based BCI studies to reveal the effect of ERP components (Guo et al., <xref ref-type="bibr" rid="B9">2010</xref>; Brunner et al., <xref ref-type="bibr" rid="B4">2011</xref>). We applied the ROI, guided by the results of the waveform analysis, to reveal the effect of sound valence, because automatic feature selection does not guarantee which components are selected and therefore could not support the conclusion.</p>
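<p>The spatio-temporal ROI can be implemented as a simple slice of each epoch, as in the sketch below; the channel ordering and the assumption that epochs start at stimulus onset follow the recording description above.</p>
<preformat>
import numpy as np

FS = 256
CHANNELS = ["C3", "Cz", "C4", "P3", "Pz", "P4", "O1", "O2"]   # recording montage
ROI_CHANNELS = ["C3", "Cz", "C4"]
ROI_WINDOW_MS = (400, 700)

def extract_roi(epoch):
    """Return vectorized ROI features of one (channels, samples) epoch.

    The epoch is assumed to start at stimulus onset (0 ms).
    """
    ch_idx = [CHANNELS.index(ch) for ch in ROI_CHANNELS]
    start = int(ROI_WINDOW_MS[0] * FS / 1000)
    stop = int(ROI_WINDOW_MS[1] * FS / 1000)
    return epoch[ch_idx, start:stop].ravel()
</preformat>
</sec></sec>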
<sec sec-type="results" id="s3">
<title>3. Results</title>
<sec>
<title>3.1. Subjective Reports</title>
<p>Since the valences of the selected sounds had not been confirmed beforehand, they were verified using Ura&#x00027;s variation of Scheffe&#x00027;s method. <xref ref-type="fig" rid="F3">Figure 3</xref> shows the valence of each sound. The valences of stimuli 1 to 5 in <xref ref-type="table" rid="T1">Table 1</xref> were rated &#x02013;1.48, &#x02013;0.72, 0.20, 0.75, and 1.24, respectively. The rated valences were analyzed by the ANOVA, which revealed significant main effects of the average of the ratings, the individual differences in the ratings, and the combination effect (<italic>p</italic> &#x0003C; 0.01). No significant main effect was found for the average order effect (<italic>p</italic> &#x0003D; 0.074) or the individual differences in the order effect (<italic>p</italic> &#x0003D; 0.509). <xref ref-type="table" rid="T2">Table 2</xref> shows the 99% confidence intervals of the valence differences estimated by Scheffe&#x00027;s method. Since no interval contained zero, all valence scores were significantly different from each other (<italic>p</italic> &#x0003C; 0.01, see <xref ref-type="table" rid="T2">Table 2</xref>). These results imply that the sounds were labeled with the appropriate valences and that their valences were clearly distinct from each other.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>The degree of valence was measured using the Scheffe&#x00027;s method. Arrows numbered 1&#x02013;5 indicate affective scale values for each sound.</p></caption>
<graphic xlink:href="fnins-13-00045-g0003.tif"/>
</fig>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>The 99% confidence intervals of the valence differences between stimulus pairs, estimated by Scheffe&#x00027;s method.</p></caption>
<table frame="hsides" rules="groups">
<thead><tr>
<th valign="top" align="left"><bold>Stimulus pair</bold></th>
<th valign="top" align="center"><bold>Lower limit</bold></th>
<th valign="top" align="center"><bold>Upper limit</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">1, 2</td>
<td valign="top" align="center">&#x02013;1.00</td>
<td valign="top" align="center">&#x02013;0.76</td>
</tr>
<tr>
<td valign="top" align="left">1, 3</td>
<td valign="top" align="center">&#x02013;1.93</td>
<td valign="top" align="center">&#x02013;1.68</td>
</tr>
<tr>
<td valign="top" align="left">1, 4</td>
<td valign="top" align="center">&#x02013;2.47</td>
<td valign="top" align="center">&#x02013;2.23</td>
</tr>
<tr>
<td valign="top" align="left">1, 5</td>
<td valign="top" align="center">&#x02013;2.97</td>
<td valign="top" align="center">&#x02013;2.72</td>
</tr>
<tr>
<td valign="top" align="left">2, 3</td>
<td valign="top" align="center">&#x02013;1.17</td>
<td valign="top" align="center">&#x02013;0.93</td>
</tr>
<tr>
<td valign="top" align="left">2, 4</td>
<td valign="top" align="center">&#x02013;1.72</td>
<td valign="top" align="center">&#x02013;1.47</td>
</tr>
<tr>
<td valign="top" align="left">2, 5</td>
<td valign="top" align="center">&#x02013;2.21</td>
<td valign="top" align="center">&#x02013;1.97</td>
</tr>
<tr>
<td valign="top" align="left">3, 4</td>
<td valign="top" align="center">&#x02013;0.79</td>
<td valign="top" align="center">&#x02013;0.54</td>
</tr>
<tr>
<td valign="top" align="left">3, 5</td>
<td valign="top" align="center">&#x02013;1.28</td>
<td valign="top" align="center">&#x02013;1.04</td>
</tr>
<tr>
<td valign="top" align="left">4, 5</td>
<td valign="top" align="center">&#x02013;0.74</td>
<td valign="top" align="center">&#x02013;0.49</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>3.2. Waveform Analysis</title>
<p>The valence of the affective sounds modulated the amplitude of the late component of P300. <xref ref-type="fig" rid="F4">Figure 4</xref> shows the averaged target and non-target ERP waveforms obtained at Cz for each sound (waveforms for all channels are presented in <xref ref-type="supplementary-material" rid="SM1">Figures S1</xref>&#x02013;<xref ref-type="supplementary-material" rid="SM1">S5</xref>). The peak amplitude of the component was lowest for the neutral auditory stimulus (stimulus 3) and greatest for the very negative and very positive auditory stimuli (stimuli 1 and 5, respectively). To assess the contribution to the classification and to test it statistically, the point-biserial correlation coefficient analysis was applied.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Grand-averaged ERPs for each stimulus recorded from Cz.</p></caption>
<graphic xlink:href="fnins-13-00045-g0004.tif"/>
</fig>
<p>The point-biserial correlation coefficients (<italic>r</italic><sup>2</sup> values) provided in <xref ref-type="fig" rid="F5">Figure 5</xref> indicate how each auditory stimulus in a channel contributed to the classification. A test of no correlation was applied to each value, and the <italic>r</italic><sup>2</sup> values were set to zero if the point-biserial correlation was not significant. The <italic>r</italic><sup>2</sup> values increased around channels C3, Cz, and C4 at approximately 400&#x02013;700 ms, which corresponds to the late component of P300 (<xref ref-type="fig" rid="F4">Figure 4</xref>).</p>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Contribution of each feature to the classification accuracy. This contribution was estimated based on <italic>r</italic><sup>2</sup> values.</p></caption>
<graphic xlink:href="fnins-13-00045-g0005.tif"/>
</fig></sec>
<sec>
<title>3.3. Stimulus-Wise Classification Accuracy</title>
<p>The ROI for the classification was set to C3, Cz, and C4 in the 400&#x02013;700 ms window, since the <italic>r</italic><sup>2</sup> values indicated clear changes induced by the auditory stimuli there. <xref ref-type="fig" rid="F6">Figure 6</xref> shows the stimulus-wise classification accuracy for each sound when the auditory stimuli were each presented 10 times. We found that the very negative sound demonstrated the highest accuracy, the very positive sound demonstrated the second highest accuracy, and the positive sound demonstrated the lowest accuracy. <xref ref-type="fig" rid="F7">Figure 7</xref> represents the classification accuracy for all 10 stimulus sequences. A two-way repeated-measures ANOVA revealed significant main effects of the type of stimulus [<italic>F</italic><sub>(4, 68)</sub> &#x0003D; 2.82, <italic>p</italic> &#x0003C; 0.05] and the number of stimulus sequences [<italic>F</italic><sub>(9, 153)</sub> &#x0003D; 48.24, <italic>p</italic> &#x0003C; 0.001]. The <italic>post-hoc</italic> pairwise <italic>t</italic>-test revealed that the very negative sound demonstrated the highest accuracy (<italic>p</italic> &#x0003C; 0.001) and that the very positive sound showed higher accuracy than the positive sound (<italic>p</italic> &#x0003C; 0.01). In summary, the very negative and the very positive sounds showed high accuracy.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Offline stimulus-wise classification accuracy and standard error for each auditory stimulus. These were calculated based on ERP data in the ROI (400&#x02013;700 ms in C3, Cz, and C4). The number of stimulus sequences (repetitions) was fixed to 10 times.</p></caption>
<graphic xlink:href="fnins-13-00045-g0006.tif"/>
</fig>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Offline stimulus-wise classification accuracy in 1&#x02013;10 sequences using a fixed ROI.</p></caption>
<graphic xlink:href="fnins-13-00045-g0007.tif"/>
</fig></sec></sec>
<sec sec-type="discussion" id="s4">
<title>4. Discussion</title>
<p>To clarify how degrees of valence influence auditory P300-based BCIs, we applied five affective sounds to an auditory P300-based BCI: very negative, negative, neutral, positive, and very positive. Those sounds had significantly different valences from each other. The ERP analysis revealed that the very negative and very positive sounds showed high <italic>r</italic><sup>2</sup> values for the late component of P300. The very negative sound demonstrated significantly higher stimulus-wise classification accuracy than the other sounds. The very positive sound demonstrated the second highest accuracy, which was significantly higher than that of the positive sound. These results suggest that highly negative or highly positive affective sounds improve the accuracy. As hypothesized, a certain degree of valence is required to influence the BCI.</p>
<p>Our findings show that the very positive sound improved the stimulus-wise classification accuracy, which is in line with our previous study (Onishi et al., <xref ref-type="bibr" rid="B25">2017</xref>). Additionally, we also demonstrated that the very negative sound improved this accuracy. In our previous study, the negative sound and its control demonstrated similar scores on the affective valence scale, and therefore, no accuracy difference was found. Considering that those two sounds were both scored as negative on the valence scale, the results of the current study are consistent with our previous findings.</p>
<p>This study included only one sound for each valence in order to minimize the variance caused by sounds, and to avoid fatigue. This implies that the results may be sound-specific due to the physical properties of sounds. However, it is unlikely that the effect of affective valence was sound-specific given that the stimuli used in this study were different from those used in our previous study (Onishi et al., <xref ref-type="bibr" rid="B25">2017</xref>). Though different physical properties of sounds were evaluated in those two studies, the results obtained were consistent. Moreover, the P300 amplitude is known to vary with emotional value, which further validates our findings (Johnson, <xref ref-type="bibr" rid="B17">1988</xref>; Kok, <xref ref-type="bibr" rid="B20">2001</xref>).</p>
<p>This study demonstrated that the <italic>r</italic><sup>2</sup> values for the late component of P300 elicited by the very positive or very negative sounds were highest around 400&#x02013;700 ms and centered around the Cz channel. These results were not a simple response to the modality. Wang et al. (<xref ref-type="bibr" rid="B36">2015</xref>) examined the <italic>r</italic><sup>2</sup> values associated with simple visual gray-white number intensification, pronounced number sounds, and their combination, and found that the auditory stimuli showed significant <italic>r</italic><sup>2</sup> values in fronto-cortical brain regions between 250 and 400 ms, which differs from our results, while their visual stimuli enhanced the <italic>r</italic><sup>2</sup> values mainly around the occipital and parietal brain regions between 300 and 400 ms. Furthermore, they reported that auditory and visual stimuli combined enhanced the <italic>r</italic><sup>2</sup> values in a large area including the parietal region around 300&#x02013;450 ms. A variety of sounds have been assessed by the point-biserial correlation coefficient. The <italic>r</italic><sup>2</sup> values elicited by drums, bass, and keyboard sounds were high within the occipital, parietal, and vertex brain regions between 300 and 600 ms (Treder et al., <xref ref-type="bibr" rid="B35">2014</xref>). Dripping sounds showed Cz channel activity with <italic>r</italic><sup>2</sup> values of the P300 (250 to 400 ms) greater than those elicited by beeping sounds (Huang et al., <xref ref-type="bibr" rid="B13">2018</xref>). Japanese vowels, on the other hand, elicited <italic>r</italic><sup>2</sup> values centered on the Cz channel, although a comparison among different vowels was not presented (Chang et al., <xref ref-type="bibr" rid="B5">2013</xref>). These studies imply that the <italic>r</italic><sup>2</sup> values of non-affective auditory BCIs differ from those of affective auditory BCIs. A facial image study demonstrated that upright and inverted facial images resulted in an early peak of <italic>r</italic><sup>2</sup> values around 200 ms, although a salient peak was not found for upright facial images (Zhang et al., <xref ref-type="bibr" rid="B39">2012</xref>). A few studies have analyzed affective stimuli using the point-biserial correlation coefficient. One such study analyzed emotional facial images and found large <italic>r</italic><sup>2</sup> values at around 400&#x02013;700 ms, a component also referred to as the late component of P300 or the late positive potential (LPP) (Zhao et al., <xref ref-type="bibr" rid="B40">2011</xref>). Affective sounds in our previous study resulted in a late component of P300 at around 300&#x02013;700 ms; however, this peak was centered around parietal and occipital brain regions (Onishi et al., <xref ref-type="bibr" rid="B25">2017</xref>). These findings suggest that affective sounds show high <italic>r</italic><sup>2</sup> values for the late component of P300. Moreover, the BCI response to affective stimuli may be common across modalities, given that the BCI responses to affective auditory stimuli were similar to those to affective visual stimuli. The effects of multimodal affective stimuli have not been evaluated for use in a BCI, but they would likely elicit ERPs different from those elicited by unimodal stimuli, because the brain regions with significant <italic>r</italic><sup>2</sup> values differed between the different types of stimuli.</p>
<p>As the degree of valence moved away from zero, the <italic>r</italic><sup>2</sup> values increased rapidly, implying that the components contributing to the classification are not simply proportional to the valence. A similar tendency was confirmed in previous studies. Steinbeis et al. (<xref ref-type="bibr" rid="B32">2006</xref>) evaluated the effect of emotional music on ERPs, focusing on the expectancy violation of chords. The results showed that the amplitude of P300 was enhanced; however, the change was not proportional to the degree of expectancy violation. To obtain a late component of P300 with higher <italic>r</italic><sup>2</sup> values, the degree of valence may need to exceed a certain threshold.</p>
<p>The stimulus-wise classification accuracy estimated within the ROI was highest when presenting the very positive and the very negative sounds; however, the accuracy was lowest when using the positive sound. These results were unexpected given the amplitudes of the late component of P300 and the <italic>r</italic><sup>2</sup> values. The accuracy may not directly reflect changes in the peak amplitude of the component or in the <italic>r</italic><sup>2</sup> values, since the stepwise feature selection was applied and the variance of the target and non-target waveforms also affects the classification.</p>
<p>Due to experimental constraints, we estimated the stimulus-wise classification accuracy within the ROI using LOOCV. Therefore, we should consider the risk of overfitting, because a portion of the information in the test data is used for the spatio-temporal feature selection in LOOCV. The offline stimulus-wise classification accuracy in this study was lower than the online classification accuracy (84.1%); thus, no obvious inflation of the accuracy was observed. This tendency is in line with similar previous studies (Guo et al., <xref ref-type="bibr" rid="B9">2010</xref>; Brunner et al., <xref ref-type="bibr" rid="B4">2011</xref>). Overfitting occurs easily when the feature dimension is high (Blankertz et al., <xref ref-type="bibr" rid="B2">2011</xref>). We consider the current analysis unlikely to suffer from overfitting, because the simple linear classifier (SWLDA) was further simplified by restricting the spatio-temporal features to the ROI. Moreover, the effect of the ROI was controlled in the comparison by applying it equally to all compared conditions.</p>
<p>This study employed all five sounds together and evaluated them with a specially designed cross-validation in order to reveal the effect of affective sounds with different degrees of valence. These results can inform the selection of sounds for standard auditory P300-based BCIs. When designing a multi-command auditory P300-based BCI, very positive or very negative sounds should be employed as much as possible to exploit the enhanced P300 component, and neutral sounds should be replaced with very positive or very negative ones. However, the mutual effects that occur when employing a variety of sounds in the BCI must be clarified in future studies.</p>
<p>In future studies, improved methods for classifying auditory BCI data will be necessary. In our previous study, we evaluated the ensemble convoluted feature extraction method, which took advantage of the averaged ERPs of each sound (Onishi and Nakagawa, <xref ref-type="bibr" rid="B24">2018</xref>). Recently, an algorithm based on tensor decomposition was proposed for the auditory P300-based BCI (Zink et al., <xref ref-type="bibr" rid="B41">2016</xref>). This algorithm does not require subject-specific training, which improves the utility of BCIs. These approaches may help establish a more reliable affective auditory P300-based BCI.</p></sec>
<sec sec-type="conclusions" id="s5">
<title>5. Conclusion</title>
<p>To clarify how the degree of valence influences auditory P300-based BCIs, five sounds with very negative, negative, neutral, positive, and very positive valences were applied to the P300-based BCI. The very positive and very negative sounds showed higher point-biserial correlation coefficients for the late component of P300. In addition, the stimulus-wise classification accuracy was high for the very positive and very negative sounds. These results imply that the accuracy is not proportional to the valence; however, it improved when very positive or very negative sounds were used.</p></sec>
<sec id="s6">
<title>Author Contributions</title>
<p>AO and SN designed the experiment. AO collected the data. AO analyzed the data. AO and SN wrote the manuscript.</p>
<sec>
<title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p></sec></sec>
</body>
<back>
<sec sec-type="supplementary-material" id="s7">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fnins.2019.00045/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fnins.2019.00045/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Data_Sheet_1.PDF" id="SM1" mimetype="application/pdf" xmlns:xlink="http://www.w3.org/1999/xlink"/></sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Birbaumer</surname> <given-names>N.</given-names></name> <name><surname>Cohen</surname> <given-names>L. G.</given-names></name></person-group> (<year>2007</year>). <article-title>Brain-computer interfaces: communication and restoration of movement in paralysis</article-title>. <source>J. Physiol.</source> <volume>579</volume>, <fpage>621</fpage>&#x02013;<lpage>636</lpage>. <pub-id pub-id-type="doi">10.1113/jphysiol.2006.125633</pub-id><pub-id pub-id-type="pmid">17234696</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blankertz</surname> <given-names>B.</given-names></name> <name><surname>Lemm</surname> <given-names>S.</given-names></name> <name><surname>Treder</surname> <given-names>M.</given-names></name> <name><surname>Haufe</surname> <given-names>S.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>K.-R.</given-names></name></person-group> (<year>2011</year>). <article-title>Single-trial analysis and classification of ERP components&#x02013;a tutorial</article-title>. <source>NeuroImage</source> <volume>56</volume>, <fpage>814</fpage>&#x02013;<lpage>825</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2010.06.048</pub-id><pub-id pub-id-type="pmid">20600976</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brett</surname> <given-names>M.</given-names></name> <name><surname>Anton</surname> <given-names>J.-L. L.</given-names></name> <name><surname>Valabregue</surname> <given-names>R.</given-names></name> <name><surname>Poline</surname> <given-names>J.-B.</given-names></name></person-group> (<year>2002</year>). <article-title>Region of interest analysis using an SPM toolbox</article-title>. <source>NeuroImage</source> <volume>16</volume>:<fpage>497</fpage>. <pub-id pub-id-type="doi">10.1016/S1053-8119(02)90013-3</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brunner</surname> <given-names>P.</given-names></name> <name><surname>Joshi</surname> <given-names>S.</given-names></name> <name><surname>Briskin</surname> <given-names>S.</given-names></name> <name><surname>Wolpaw</surname> <given-names>J. R.</given-names></name> <name><surname>Bischof</surname> <given-names>H.</given-names></name></person-group> (<year>2011</year>). <article-title>Does the P300 speller depend on eye gaze?</article-title> <source>J. Neural Eng.</source> <volume>7</volume>:<fpage>056013</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/7/5/056013</pub-id><pub-id pub-id-type="pmid">20858924</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Chang</surname> <given-names>M.</given-names></name> <name><surname>Makino</surname> <given-names>S.</given-names></name> <name><surname>Rutkowski</surname> <given-names>T. M.</given-names></name></person-group> (<year>2013</year>). <article-title>Classification improvement of P300 response based auditory spatial speller brain-computer interface paradigm,</article-title> in <source>IEEE Region 10 Annual International Conference, Proceedings/TENCON</source> (<publisher-loc>Xi&#x00027;an; Shaanxi</publisher-loc>). <pub-id pub-id-type="doi">10.1109/TENCON.2013.6718454</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Farwell</surname> <given-names>L. and Donchin, E.</given-names></name></person-group> (<year>1988</year>). <article-title>Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials</article-title>. <source>Electroencephalogr. Clin. Neurophysiol.</source> <volume>70</volume>, <fpage>510</fpage>&#x02013;<lpage>523</lpage>. <pub-id pub-id-type="pmid">2461285</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Finnigan</surname> <given-names>S.</given-names></name> <name><surname>Humphreys</surname> <given-names>M. S.</given-names></name> <name><surname>Dennis</surname> <given-names>S.</given-names></name> <name><surname>Geffen</surname> <given-names>G.</given-names></name></person-group> (<year>2002</year>). <article-title>ERP &#x02018;old/new&#x02019; effects: memory strength and decisional factor(s)</article-title>. <source>Neuropsychologia</source> <volume>40</volume>, <fpage>2288</fpage>&#x02013;<lpage>2304</lpage>. <pub-id pub-id-type="doi">10.1016/S0028-3932(02)00113-6</pub-id><pub-id pub-id-type="pmid">12417459</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Furdea</surname> <given-names>A.</given-names></name> <name><surname>Halder</surname> <given-names>S.</given-names></name> <name><surname>Krusienski</surname> <given-names>D. J.</given-names></name> <name><surname>Bross</surname> <given-names>D.</given-names></name> <name><surname>Nijboer</surname> <given-names>F.</given-names></name> <name><surname>Birbaumer</surname> <given-names>N.</given-names></name> <name><surname>K&#x000FC;bler</surname> <given-names>A.</given-names></name></person-group> (<year>2009</year>). <article-title>An auditory oddball (P300) spelling system for brain-computer interfaces</article-title>. <source>Psychophysiology</source> <volume>46</volume>, <fpage>617</fpage>&#x02013;<lpage>625</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.2008.00783.x</pub-id><pub-id pub-id-type="pmid">19170946</pub-id></citation></ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guo</surname> <given-names>J.</given-names></name> <name><surname>Gao</surname> <given-names>S.</given-names></name> <name><surname>Hong</surname> <given-names>B.</given-names></name></person-group> (<year>2010</year>). <article-title>An auditory brain-computer interface using active mental response</article-title>. <source>IEEE Trans. Neural Syst. Rehabil. Eng.</source> <volume>18</volume>, <fpage>230</fpage>&#x02013;<lpage>235</lpage>. <pub-id pub-id-type="doi">10.1109/TNSRE.2010.2047604</pub-id><pub-id pub-id-type="pmid">20388606</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Halder</surname> <given-names>S.</given-names></name> <name><surname>Rea</surname> <given-names>M.</given-names></name> <name><surname>Andreoni</surname> <given-names>R.</given-names></name> <name><surname>Nijboer</surname> <given-names>F.</given-names></name> <name><surname>Hammer</surname> <given-names>E. M.</given-names></name> <name><surname>Kleih</surname> <given-names>S. C.</given-names></name> <etal/></person-group>. (<year>2010</year>). <article-title>An auditory oddball brain-computer interface for binary choices</article-title>. <source>Clin. Neurophysiol.</source> <volume>121</volume>, <fpage>516</fpage>&#x02013;<lpage>523</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2009.11.087</pub-id><pub-id pub-id-type="pmid">20093075</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>H&#x000F6;hne</surname> <given-names>J.</given-names></name> <name><surname>Krenzlin</surname> <given-names>K.</given-names></name> <name><surname>D&#x000E4;hne</surname> <given-names>S.</given-names></name> <name><surname>Tangermann</surname> <given-names>M.</given-names></name></person-group> (<year>2012</year>). <article-title>Natural stimuli improve auditory BCIs with respect to ergonomics and performance</article-title>. <source>J. Neural Eng.</source> <volume>9</volume>:<fpage>045003</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/9/4/045003</pub-id><pub-id pub-id-type="pmid">22831919</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>H&#x000F6;hne</surname> <given-names>J.</given-names></name> <name><surname>Schreuder</surname> <given-names>M.</given-names></name> <name><surname>Blankertz</surname> <given-names>B.</given-names></name> <name><surname>Tangermann</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>Two-dimensional auditory P300 speller with predictive text system,</article-title> in <source>2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society</source> (<publisher-loc>Buenos Aires</publisher-loc>), <fpage>4185</fpage>&#x02013;<lpage>4188</lpage>.</citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>M.</given-names></name> <name><surname>Jin</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Hu</surname> <given-names>D.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name></person-group> (<year>2018</year>). <article-title>Usage of drip drops as stimuli in an auditory P300 BCI paradigm</article-title>. <source>Cogn. Neurodyn.</source> <volume>12</volume>, <fpage>85</fpage>&#x02013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1007/s11571-017-9456-y</pub-id><pub-id pub-id-type="pmid">29435089</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ikegami</surname> <given-names>S.</given-names></name> <name><surname>Takano</surname> <given-names>K.</given-names></name> <name><surname>Kondo</surname> <given-names>K.</given-names></name> <name><surname>Saeki</surname> <given-names>N.</given-names></name> <name><surname>Kansaku</surname> <given-names>K.</given-names></name></person-group> (<year>2014</year>). <article-title>A region-based two-step P300-based brain-computer interface for patients with amyotrophic lateral sclerosis</article-title>. <source>Clin. Neurophysiol.</source> <volume>125</volume>, <fpage>2305</fpage>&#x02013;<lpage>2312</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2014.03.013</pub-id><pub-id pub-id-type="pmid">24731767</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jin</surname> <given-names>J.</given-names></name> <name><surname>Allison</surname> <given-names>B. Z.</given-names></name> <name><surname>Kaufmann</surname> <given-names>T.</given-names></name> <name><surname>K&#x000FC;bler</surname> <given-names>A.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2012</year>). <article-title>The changing face of P300 BCIs: a comparison of stimulus changes in a P300 BCI involving faces, emotion, and movement</article-title>. <source>PLoS ONE</source>, <volume>7</volume>:<fpage>e49688</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0049688</pub-id><pub-id pub-id-type="pmid">23189154</pub-id></citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jin</surname> <given-names>J.</given-names></name> <name><surname>Allison</surname> <given-names>B. Z.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Cichocki</surname> <given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>An ERP-based BCI using an oddball paradigm with different faces and reduced errors in critical functions</article-title>. <source>Int. J. Neural syst.</source> <volume>24</volume>:<fpage>1450027</fpage>. <pub-id pub-id-type="doi">10.1142/S0129065714500270</pub-id><pub-id pub-id-type="pmid">25182191</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>R.</given-names></name></person-group> (<year>1988</year>). <article-title>The amplitude of the P300 component of the event-related potential: review and synthesis</article-title>. <source>Adv. Psychophysiol.</source> <volume>3</volume>, <fpage>69</fpage>&#x02013;<lpage>137</lpage>.</citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kaufmann</surname> <given-names>T.</given-names></name> <name><surname>Schulz</surname> <given-names>S.</given-names></name> <name><surname>Gr&#x000FC;nzinger</surname> <given-names>C.</given-names></name> <name><surname>K&#x000FC;bler</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>Flashing characters with famous faces improves ERP-based brain&#x02013;computer interface performance</article-title>. <source>J. Neural Eng.</source> <volume>8</volume>:<fpage>056016</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/8/5/056016</pub-id><pub-id pub-id-type="pmid">21934188</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klobassa</surname> <given-names>D. S.</given-names></name> <name><surname>Vaughan</surname> <given-names>T. M.</given-names></name> <name><surname>Brunner</surname> <given-names>P.</given-names></name> <name><surname>Schwartz</surname> <given-names>N. E.</given-names></name> <name><surname>Wolpaw</surname> <given-names>J. R.</given-names></name> <name><surname>Neuper</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2009</year>). <article-title>Toward a high-throughput auditory P300-based brain-computer interface</article-title>. <source>Clin. Neurophysiol.</source> <volume>120</volume>, <fpage>1252</fpage>&#x02013;<lpage>1261</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2009.04.019</pub-id><pub-id pub-id-type="pmid">19574091</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kok</surname> <given-names>A.</given-names></name></person-group> (<year>2001</year>). <article-title>On the utility of P300 amplitude as a measure of processing capacity</article-title>. <source>Psychophysiology</source> <volume>38</volume>:<fpage>557</fpage>&#x02013;<lpage>577</lpage>. <pub-id pub-id-type="doi">10.1017/S0048577201990559</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krusienski</surname> <given-names>D. J.</given-names></name> <name><surname>Sellers</surname> <given-names>E. W.</given-names></name> <name><surname>Bayoudh</surname> <given-names>S.</given-names></name> <name><surname>Mcfarland</surname> <given-names>D. J.</given-names></name> <name><surname>Vaughan</surname> <given-names>T. M.</given-names></name> <name><surname>Wolpaw</surname> <given-names>J. R.</given-names></name> <etal/></person-group>. (<year>2006</year>). <article-title>A comparison of classification techniques for the P300 speller</article-title>. <source>J. Neural Eng.</source> <volume>3</volume>, <fpage>299</fpage>&#x02013;<lpage>305</lpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/3/4/007</pub-id><pub-id pub-id-type="pmid">17124334</pub-id></citation></ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nagasawa</surname> <given-names>S.</given-names></name></person-group> (<year>2002</year>). <article-title>Improvement of the Scheffe&#x00027;s method for paired comparisons</article-title>. <source>Kansei Eng. Int.</source> <volume>3</volume>, <fpage>47</fpage>&#x02013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.5057/kei.3.3_47</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oldfield</surname> <given-names>R.</given-names></name></person-group> (<year>1971</year>). <article-title>The assessment and analysis of handness: the Edinburgh inventory</article-title>. <source>Neuropsychologia</source> <volume>9</volume>, <fpage>97</fpage>&#x02013;<lpage>113</lpage>. <pub-id pub-id-type="pmid">5146491</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Onishi</surname> <given-names>A. and Nakagawa, S.</given-names></name></person-group> (<year>2018</year>). <article-title>Ensemble convoluted feature extraction for affective auditory P300 brain-computer interfaces,</article-title> in <source>Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society</source> (<publisher-loc>Honolulu, HI</publisher-loc>).</citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Onishi</surname> <given-names>A.</given-names></name> <name><surname>Takano</surname> <given-names>K.</given-names></name> <name><surname>Kawase</surname> <given-names>T.</given-names></name> <name><surname>Ora</surname> <given-names>H.</given-names></name> <name><surname>Kansaku</surname> <given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>Affective stimuli for an auditory P300 brain-computer interface</article-title>. <source>Front. Neurosci.</source> <volume>11</volume>:<fpage>522</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2017.00522</pub-id><pub-id pub-id-type="pmid">28983235</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfurtscheller</surname> <given-names>G.</given-names></name> <name><surname>Mueller-Putz</surname> <given-names>G. R.</given-names></name> <name><surname>Scherer</surname> <given-names>R.</given-names></name> <name><surname>Neuper</surname> <given-names>C.</given-names></name></person-group> (<year>2008</year>). <article-title>Rehabilitation with brain-computer interface systems</article-title>. <source>Computer</source> <volume>41</volume>, <fpage>58</fpage>&#x02013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1109/MC.2008.432</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poldrack</surname> <given-names>R. A.</given-names></name></person-group> (<year>2007</year>). <article-title>Region of interest analysis for fMRI</article-title>. <source>Soc. Cogn. Affect. Neurosci.</source> <volume>2</volume>, <fpage>67</fpage>&#x02013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1093/scan/nsm006</pub-id><pub-id pub-id-type="pmid">18985121</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Scheff&#x000E9;</surname> <given-names>H.</given-names></name></person-group> (<year>1952</year>). <article-title>An analysis of variance for paired comparisons</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>47</volume>, <fpage>381</fpage>&#x02013;<lpage>400</lpage>.</citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schreuder</surname> <given-names>M.</given-names></name> <name><surname>Blankertz</surname> <given-names>B.</given-names></name> <name><surname>Tangermann</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). <article-title>A new auditory multi-class brain-computer interface paradigm: spatial hearing as an informative cue</article-title>. <source>PLoS ONE</source> <volume>5</volume>:<fpage>e9813</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0009813</pub-id><pub-id pub-id-type="pmid">20368976</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sellers</surname> <given-names>E. W. and Donchin, E.</given-names></name></person-group> (<year>2006</year>). <article-title>A P300-based brain-computer interface: initial tests by ALS patients</article-title>. <source>Clin. Neurophysiol.</source> <volume>117</volume>, <fpage>538</fpage>&#x02013;<lpage>548</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2005.06.027</pub-id><pub-id pub-id-type="pmid">16461003</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Simon</surname> <given-names>N.</given-names></name> <name><surname>K&#x000E4;thner</surname> <given-names>I.</given-names></name> <name><surname>Ruf</surname> <given-names>C. A.</given-names></name> <name><surname>Pasqualotto</surname> <given-names>E.</given-names></name> <name><surname>K&#x000FC;bler</surname> <given-names>A.</given-names></name> <name><surname>Halder</surname> <given-names>S.</given-names></name></person-group> (<year>2014</year>). <article-title>An auditory multiclass brain-computer interface with natural stimuli: usability evaluation with healthy participants and a motor impaired end user</article-title>. <source>Front. Hum. Neurosci.</source> <volume>8</volume>:<fpage>1039</fpage>. <pub-id pub-id-type="doi">10.3389/fnhum.2014.01039</pub-id><pub-id pub-id-type="pmid">25620924</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Steinbeis</surname> <given-names>N.</given-names></name> <name><surname>Koelsch</surname> <given-names>S.</given-names></name> <name><surname>Sloboda</surname> <given-names>J. A.</given-names></name></person-group> (<year>2006</year>). <article-title>The role of harmonic expectancy violations in musical emotions: evidence from subjective, physiological, and neural responses</article-title>. <source>J. Cogn. Neurosci.</source> <volume>18</volume>, <fpage>1380</fpage>&#x02013;<lpage>1393</lpage>. <pub-id pub-id-type="doi">10.1162/jocn.2006.18.8.1380</pub-id><pub-id pub-id-type="pmid">16859422</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tate</surname> <given-names>R. F.</given-names></name></person-group> (<year>1954</year>). <article-title>Correlation between a discrete and a continuous variable. Point-biserial correlation</article-title>. <source>Ann. Math. Stat.</source> <volume>25</volume>, <fpage>603</fpage>&#x02013;<lpage>607</lpage>.</citation></ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Townsend</surname> <given-names>G.</given-names></name> <name><surname>LaPallo</surname> <given-names>B. K.</given-names></name> <name><surname>Boulay</surname> <given-names>C. B.</given-names></name> <name><surname>Krusienski</surname> <given-names>D. J.</given-names></name> <name><surname>Frye</surname> <given-names>G. E.</given-names></name> <name><surname>Hauser</surname> <given-names>C. K.</given-names></name> <etal/></person-group>. (<year>2010</year>). <article-title>A novel P300-based brain-computer interface stimulus presentation paradigm: moving beyond rows and columns</article-title>. <source>Clin. Neurophysiol.</source> <volume>121</volume>, <fpage>1109</fpage>&#x02013;<lpage>1120</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2010.01.030</pub-id><pub-id pub-id-type="pmid">20347387</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Treder</surname> <given-names>M. S.</given-names></name> <name><surname>Schmidt</surname> <given-names>N. M.</given-names></name> <name><surname>Blankertz</surname> <given-names>B.</given-names></name> <name><surname>Porbadnigk</surname> <given-names>A. K.</given-names></name> <name><surname>Treder</surname> <given-names>M. S.</given-names></name> <name><surname>Blankertz</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Decoding auditory attention to instruments in polyphonic music using single-trial EEG classification</article-title>. <source>J. Neural Eng.</source> <volume>11</volume>:<fpage>026009</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/11/2/026009</pub-id><pub-id pub-id-type="pmid">24608228</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>F.</given-names></name> <name><surname>He</surname> <given-names>Y.</given-names></name> <name><surname>Pan</surname> <given-names>J.</given-names></name> <name><surname>Xie</surname> <given-names>Q.</given-names></name> <name><surname>Yu</surname> <given-names>R.</given-names></name> <name><surname>Zhang</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>A novel audiovisual brain-computer interface and its application in awareness detection</article-title>. <source>Sci. Rep.</source> <volume>5</volume>:<fpage>9962</fpage>. <pub-id pub-id-type="doi">10.1038/srep09962</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolpaw</surname> <given-names>J. R.</given-names></name> <name><surname>Birbaumer</surname> <given-names>N.</given-names></name> <name><surname>Heetderks</surname> <given-names>W. J.</given-names></name> <name><surname>McFarland</surname> <given-names>D. J.</given-names></name> <name><surname>Peckham</surname> <given-names>P. H.</given-names></name> <name><surname>Schalk</surname> <given-names>G.</given-names></name> <etal/></person-group>. (<year>2000</year>). <article-title>Brain-computer interface technology: a review of the first international meeting</article-title>. <source>IEEE Trans. Rehabil. Eng.</source> <volume>8</volume>, <fpage>164</fpage>&#x02013;<lpage>173</lpage>. <pub-id pub-id-type="doi">10.1109/TRE.2000.847807</pub-id><pub-id pub-id-type="pmid">10896178</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolpaw</surname> <given-names>J. R.</given-names></name> <name><surname>Birbaumer</surname> <given-names>N.</given-names></name> <name><surname>McFarland</surname> <given-names>D. J.</given-names></name> <name><surname>Pfurtscheller</surname> <given-names>G.</given-names></name> <name><surname>Vaughan</surname> <given-names>T. M.</given-names></name></person-group> (<year>2002</year>). <article-title>Brain-computer interfaces for communication and control</article-title>. <source>Clin. Neurophysiol.</source> <volume>113</volume>, <fpage>767</fpage>&#x02013;<lpage>91</lpage>. <pub-id pub-id-type="doi">10.1016/S1388-2457(02)00057-3</pub-id><pub-id pub-id-type="pmid">12048038</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Zhao</surname> <given-names>Q.</given-names></name> <name><surname>Jin</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Cichocki</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>A novel BCI based on ERP components sensitive to configural processing of human faces</article-title>. <source>J. Neural Eng.</source> <volume>9</volume>:<fpage>026018</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/9/2/026018</pub-id><pub-id pub-id-type="pmid">22414683</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>Q.</given-names></name> <name><surname>Onishi</surname> <given-names>A.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Cao</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>L.</given-names></name> <name><surname>Cichocki</surname> <given-names>A.</given-names></name></person-group> (<year>2011</year>). <article-title>A novel oddball paradigm for affective BCIs using emotional faces as stimuli</article-title>. <source>Lecture Notes Comput. Sci.</source> <volume>7062</volume>, <fpage>279</fpage>&#x02013;<lpage>286</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-642-24955-6_34</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zink</surname> <given-names>R.</given-names></name> <name><surname>Hunyadi</surname> <given-names>B.</given-names></name> <name><surname>Huffel</surname> <given-names>S. V.</given-names></name> <name><surname>Vos</surname> <given-names>M. D.</given-names></name></person-group> (<year>2016</year>). <article-title>Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase</article-title>. <source>J. Neural Eng.</source> <volume>13</volume>:<fpage>026005</fpage>. <pub-id pub-id-type="doi">10.1088/1741-2560/13/2/026005</pub-id><pub-id pub-id-type="pmid">26824883</pub-id></citation></ref>
</ref-list>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> This study was supported in part by JSPS KAKENHI grants (16K16477, 18K17667).</p>
</fn>
</fn-group>
</back>
</article>