<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2017.00093</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>High-Resolution Audio with Inaudible High-Frequency Components Induces a Relaxed Attentional State without Conscious Awareness</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Kuribayashi</surname> <given-names>Ryuma</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/118870/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Nittono</surname> <given-names>Hiroshi</given-names></name>
<xref ref-type="author-notes" rid="fn001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2712/overview"/>
</contrib>
</contrib-group>
<aff><institution>Graduate School of Human Sciences, Osaka University</institution> <country>Osaka, Japan</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: <italic>Mark Reybrouck, KU Leuven, Belgium</italic></p></fn>
<fn fn-type="edited-by"><p>Reviewed by: <italic>Lutz J&#x00E4;ncke, University of Zurich, Switzerland; Robert J. Barry, University of Wollongong, Australia</italic></p></fn>
<fn fn-type="corresp" id="fn001"><p>&#x002A;Correspondence: <italic>Hiroshi Nittono, <email>nittono@hus.osaka-u.ac.jp</email></italic></p></fn>
<fn fn-type="other" id="fn002"><p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Psychology</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>01</day>
<month>02</month>
<year>2017</year>
</pub-date>
<pub-date pub-type="collection">
<year>2017</year>
</pub-date>
<volume>8</volume>
<elocation-id>93</elocation-id>
<history>
<date date-type="received">
<day>01</day>
<month>10</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>13</day>
<month>01</month>
<year>2017</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2017 Kuribayashi and Nittono.</copyright-statement>
<copyright-year>2017</copyright-year>
<copyright-holder>Kuribayashi and Nittono</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>High-resolution audio has a higher sampling frequency and a greater bit depth than conventional low-resolution audio such as compact disks. The higher sampling frequency enables inaudible sound components (above 20 kHz) that are cut off in low-resolution audio to be reproduced. Previous studies of high-resolution audio have mainly focused on the effect of such high-frequency components. It is known that alpha-band power in a human electroencephalogram (EEG) is larger when the inaudible high-frequency components are present than when they are absent. Traditionally, alpha-band EEG activity has been associated with arousal level. However, no previous studies have explored whether sound sources with high-frequency components affect the arousal level of listeners. The present study examined this possibility by having 22 participants listen to two types of a 400-s musical excerpt of <italic>French Suite No. 5</italic> by J. S. Bach (on cembalo, 24-bit quantization, 192 kHz A/D sampling), with or without inaudible high-frequency components, while performing a visual vigilance task. High-alpha (10.5&#x2013;13 Hz) and low-beta (13&#x2013;20 Hz) EEG powers were larger for the excerpt with high-frequency components than for the excerpt without them. Reaction times and error rates did not change during the task and were not different between the excerpts. The amplitude of the P3 component elicited by target stimuli in the vigilance task increased in the second half of the listening period for the excerpt with high-frequency components, whereas no such P3 amplitude change was observed for the other excerpt without them. The participants did not distinguish between these excerpts in terms of sound quality. Only a subjective rating of inactive pleasantness after listening was higher for the excerpt with high-frequency components than for the other excerpt. 
The present study shows that high-resolution audio that retains high-frequency components has an advantage over similar and indistinguishable digital sound sources in which such components are artificially cut off, suggesting that high-resolution audio with inaudible high-frequency components induces a relaxed attentional state without conscious awareness.</p>
</abstract>
<kwd-group>
<kwd>high-resolution audio</kwd>
<kwd>electroencephalogram</kwd>
<kwd>alpha power</kwd>
<kwd>event-related potential</kwd>
<kwd>vigilance task</kwd>
<kwd>attention</kwd>
<kwd>conscious awareness</kwd>
<kwd>hypersonic effect</kwd>
</kwd-group>
<contract-num rid="cn001">15J06118</contract-num>
<contract-sponsor id="cn001">Japan Society for the Promotion of Science<named-content content-type="fundref-id">10.13039/501100001691</named-content></contract-sponsor>
<counts>
<fig-count count="5"/>
<table-count count="2"/>
<equation-count count="0"/>
<ref-count count="67"/>
<page-count count="12"/>
<word-count count="0"/>
</counts>
</article-meta>
</front>
<body>
<sec><title>Introduction</title>
<p>High-resolution audio has recently emerged in the digital music market, owing to advances in information and communications technologies. Because of a higher sampling frequency and a greater bit depth than conventional low-resolution audio such as compact disks (CDs), it provides a closer replication of the real analog sound waves. Sampling frequency is the number of samples per second taken from a sound source through analog-to-digital conversion. Bit depth determines the number of possible amplitude values in each sample, which equals two raised to the number of bits. A higher sampling frequency makes the digitization of sound more accurate in the time and frequency domains, whereas a greater bit depth increases the amplitude resolution of the sound. What kind of advantage does the latest digital audio have for human beings? This question has not been sufficiently discussed. The present investigation used physiological, behavioral, and subjective measures to provide evidence that high-resolution audio affects the human psychophysiological state without conscious awareness.</p>
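<p>As a numeric illustration of these two parameters, the following short sketch (added here; not part of the original study) compares CD audio with the 24-bit/192-kHz format used in this study:</p>

```python
# Numeric sketch of sampling frequency and bit depth, using CD audio
# (16 bit / 44.1 kHz) and the 24-bit / 192-kHz high-resolution format
# for illustration only.

def quantization_levels(bit_depth):
    """Number of possible amplitude values per sample: 2 ** bit_depth."""
    return 2 ** bit_depth

def dynamic_range_db(bit_depth):
    """Theoretical dynamic range of linear PCM, about 6.02 dB per bit."""
    return 6.02 * bit_depth

cd_levels = quantization_levels(16)     # 65536 levels, ~96 dB dynamic range
hires_levels = quantization_levels(24)  # 16777216 levels, ~144 dB dynamic range
```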
<p>The higher sampling frequency enables higher-frequency sound components to be reproduced, because one-half of the sampling frequency defines the upper limit of reproducible frequencies (as dictated by the Nyquist&#x2013;Shannon sampling theorem). In conventional digital audio, however, the sampling frequency is usually limited so that sounds above 20 kHz are cut off, which reduces file sizes for convenience. This practice is based on the knowledge that sounds above 20 kHz do not influence sound quality ratings (<xref ref-type="bibr" rid="B43">Muraoka et al., 1981</xref>) and do not appear to produce evoked brain magnetic field responses (<xref ref-type="bibr" rid="B19">Fujioka et al., 2002</xref>).</p>
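<p>The relation between sampling frequency and the reproducible upper frequency limit amounts to a one-line calculation (a sketch, not part of the original study):</p>

```python
# Nyquist-Shannon sampling theorem: the highest frequency a digital
# recording can represent is half its sampling frequency. A quick
# comparison of the CD rate with the 192-kHz rate used here.

def nyquist_limit_hz(sampling_rate_hz):
    """Upper limit of reproducible frequencies (Hz)."""
    return sampling_rate_hz / 2.0

cd_limit = nyquist_limit_hz(44_100)      # 22050.0 Hz: content above ~22 kHz is lost
hires_limit = nyquist_limit_hz(192_000)  # 96000.0 Hz: inaudible components survive
```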
<p>In contrast to this conventional digital recording process in which inaudible high-frequency components are cut off, high-resolution music that retains such components has been repeatedly shown to affect human electroencephalographic (EEG) activity (<xref ref-type="bibr" rid="B47">Oohashi et al., 2000</xref>, <xref ref-type="bibr" rid="B46">2006</xref>; <xref ref-type="bibr" rid="B65">Yagi et al., 2003a</xref>; <xref ref-type="bibr" rid="B20">Fukushima et al., 2014</xref>; <xref ref-type="bibr" rid="B35">Kuribayashi et al., 2014</xref>; <xref ref-type="bibr" rid="B23">Ito et al., 2016</xref>). This effect is often called the &#x201C;hypersonic&#x201D; effect. In these studies, only the presence or absence of inaudible high-frequency components is manipulated while the sampling frequency and the bit depth are held constant. Interestingly, this effect appears with a considerable delay (i.e., 100&#x2013;200 s after the onset of music). However, it remains unclear what kind of psychological and cognitive states are associated with this effect. These studies also suggest that it is difficult to consciously distinguish between sounds with and without inaudible high-frequency components (full-range vs. high-cut). Some studies have shown that full-range audio is rated as having better sound quality (e.g., a softer tone, more comfortable to the ears) than high-cut audio (<xref ref-type="bibr" rid="B47">Oohashi et al., 2000</xref>; <xref ref-type="bibr" rid="B65">Yagi et al., 2003a</xref>). Another study has shown that participants are not able to distinguish between the two types of digital audio, with no significant differences found in subjective ratings of sound quality (<xref ref-type="bibr" rid="B35">Kuribayashi et al., 2014</xref>). The feasibility of discrimination seems to depend on the kind of audio source and the individual listener (<xref ref-type="bibr" rid="B44">Nishiguchi et al., 2009</xref>). Regarding behavioral aspects, people have been shown to listen to full-range sounds at a higher sound volume than high-cut sounds (<xref ref-type="bibr" rid="B66">Yagi et al., 2003b</xref>, <xref ref-type="bibr" rid="B67">2006</xref>; <xref ref-type="bibr" rid="B46">Oohashi et al., 2006</xref>).</p>
<p>Previous studies have examined the effect of inaudible high-frequency components on EEG activity while listening to music under resting conditions. It has been shown that EEG alpha-band (8&#x2013;13 Hz) frequency power is greater for high-resolution music with high-frequency components than for the same sound sources without them (<xref ref-type="bibr" rid="B47">Oohashi et al., 2000</xref>, <xref ref-type="bibr" rid="B46">2006</xref>; <xref ref-type="bibr" rid="B65">Yagi et al., 2003a</xref>). The effect appears more clearly in a higher part of the conventional alpha-band frequency of 8&#x2013;13 Hz (10.5&#x2013;13 Hz: <xref ref-type="bibr" rid="B35">Kuribayashi et al., 2014</xref>; 11&#x2013;13 Hz: <xref ref-type="bibr" rid="B23">Ito et al., 2016</xref>). <xref ref-type="bibr" rid="B23">Ito et al. (2016)</xref> reported that low beta-band (14&#x2013;20 Hz) EEG power also showed the same tendency to increase as high alpha-band EEG power.</p>
<p>A study using positron emission tomography (PET) revealed that the brainstem and thalamus areas were more activated when hearing full-range as compared with high-cut sounds (<xref ref-type="bibr" rid="B47">Oohashi et al., 2000</xref>). Because such activation may support a role of the thalamus in emotional experience (<xref ref-type="bibr" rid="B37">LeDoux, 1993</xref>; <xref ref-type="bibr" rid="B62">Vogt and Gabriel, 1993</xref>; <xref ref-type="bibr" rid="B7">Blood and Zatorre, 2001</xref>; <xref ref-type="bibr" rid="B25">Jeffries et al., 2003</xref>; <xref ref-type="bibr" rid="B8">Brown et al., 2004</xref>; <xref ref-type="bibr" rid="B39">Lupien et al., 2009</xref>) and also in filtering or gating sensory input (<xref ref-type="bibr" rid="B2">Andreasen et al., 1994</xref>), <xref ref-type="bibr" rid="B47">Oohashi et al. (2000)</xref> speculated that the presence of inaudible high-frequency components may affect the perception of sounds and some aspects of human behavior.</p>
<p>Another line of research suggests a link between cognitive function and alpha-band as well as beta-band EEG activities. Alpha-band EEG activity is thought to be associated not only with arousal and vigilance levels (<xref ref-type="bibr" rid="B3">Barry et al., 2007</xref>) but also with cognitive tasks involving perception, working memory, long-term memory, and attention (e.g., <xref ref-type="bibr" rid="B5">Basar, 1999</xref>; <xref ref-type="bibr" rid="B28">Klimesch, 1999</xref>; <xref ref-type="bibr" rid="B63">Ward, 2003</xref>; <xref ref-type="bibr" rid="B30">Klimesch et al., 2005</xref>). Higher alpha-band activity is considered to inhibit task-irrelevant brain regions, thereby enabling their effective disengagement for optimal processing (<xref ref-type="bibr" rid="B26">Jensen and Mazaheri, 2010</xref>; <xref ref-type="bibr" rid="B16">Foxe and Snyder, 2011</xref>; <xref ref-type="bibr" rid="B64">Weisz et al., 2011</xref>; <xref ref-type="bibr" rid="B29">Klimesch, 2012</xref>; <xref ref-type="bibr" rid="B14">De Blasio et al., 2013</xref>).</p>
<p>Beta-band power is broadly thought to be associated with motor function when it is derived from motor areas (<xref ref-type="bibr" rid="B22">Hari and Salmelin, 1997</xref>; <xref ref-type="bibr" rid="B12">Crone et al., 1998</xref>; <xref ref-type="bibr" rid="B50">Pfurtscheller and Lopes da Silva, 1999</xref>; <xref ref-type="bibr" rid="B51">Pfurtscheller et al., 2003</xref>). Moreover, beta power has been shown to increase with corresponding increases in arousal and vigilance levels, indicating that participants are engaged in a task (e.g., <xref ref-type="bibr" rid="B56">Sebastiani et al., 2003</xref>; <xref ref-type="bibr" rid="B1">Aftanas et al., 2006</xref>; <xref ref-type="bibr" rid="B21">Gola et al., 2012</xref>; <xref ref-type="bibr" rid="B27">Kami&#x00F1;ski et al., 2012</xref>).</p>
<p>What kind of advantage does high-resolution audio with inaudible high-frequency components have for human beings? In particular, it remains unclear what psychophysiological state high-resolution audio induces along with the increase in alpha- and beta-band EEG activities. To monitor listeners&#x2019; arousal level, we asked participants to listen to a musical piece while performing a visual vigilance task that required sustained attention to respond continuously to specific stimuli. Two types of high-resolution audio of the same musical piece were presented using a double-blind method: With or without inaudible high-frequency components.</p>
<p>EEG was recorded along with other psychophysiological measures: Heart rate (HR), heart rate variability (HRV), and facial electromyograms (EMGs). The former two measures index autonomic nervous system activity. HRV contains two components with different frequency bands: High frequency (HF; 0.15&#x2013;0.4 Hz) and low frequency (LF; 0.04&#x2013;0.15 Hz). HF and LF activities are mediated by vagal and vagosympathetic activations, respectively (<xref ref-type="bibr" rid="B42">Malliani et al., 1991</xref>; <xref ref-type="bibr" rid="B41">Malliani and Montano, 2002</xref>). The LF/HF power ratio is sometimes used as an index of sympathetic activity. Facial EMGs in the regions of the corrugator supercilii and the zygomaticus major muscles have been used as indices of negative and positive affect, respectively (<xref ref-type="bibr" rid="B36">Larsen et al., 2003</xref>). Decrements in vigilance task performance, such as longer reaction times (RTs) and higher error rates, are interpreted as reflecting a decrease in arousal level, which is also reflected in the electrical activity of the brain (<xref ref-type="bibr" rid="B18">Fruhstorfer and Bergstr&#x00F6;m, 1969</xref>). Besides ongoing EEG activity, event-related potentials (ERPs) are associated with vigilance task performance. When vigilance task performance decreases, the amplitude of P3, a positive ERP component observed dominantly at parietal recording sites between 300 and 600 ms after stimulus onset, decreases and its latency increases (<xref ref-type="bibr" rid="B18">Fruhstorfer and Bergstr&#x00F6;m, 1969</xref>; <xref ref-type="bibr" rid="B13">Davies and Parasuraman, 1977</xref>; <xref ref-type="bibr" rid="B48">Parasuraman, 1983</xref>). P3 amplitude and latency are thought to be modulated not only by overall arousal level but also by attentional resource allocation (<xref ref-type="bibr" rid="B52">Polich, 2007</xref>). 
P3 amplitude has been shown to be larger when greater attentional resources are allocated to the eliciting stimulus. It is thus thought that P3 amplitude can serve as a measure of processing capacity and mental workload (<xref ref-type="bibr" rid="B31">Kok, 1997</xref>, <xref ref-type="bibr" rid="B32">2001</xref>).</p>
<p>In the present study, physiological, behavioral, and subjective measures were recorded to examine what kind of advantage high-resolution audio with inaudible high-frequency components has. Specifically, we were interested in how the increase in alpha- and beta-band EEG activities is associated with listeners&#x2019; arousal and vigilance level. Using a double-blind method, two types of high-resolution audio of the same musical piece (with or without inaudible high-frequency components) were presented while participants performed a vigilance task in the visual modality.</p>
</sec>
<sec id="s1" sec-type="materials|methods">
<title>Materials and Methods</title>
<sec><title>Participants</title>
<p>Twenty-six student volunteers at Hiroshima University gave their informed consent and participated in the study. Four participants had to be excluded due to technical problems. The remaining 22 participants (14 women, 18&#x2013;24 years, <italic>M</italic> = 20.6 years) did not report any known neurological dysfunction or hearing deficit. They were right-handed according to the Edinburgh Inventory (<italic>M</italic> = 84.1 &#x00B1; 12.8). All reported normal or corrected-to-normal vision. Eight participants had learned to play musical instruments for a few years, but none of them were professional musicians. The Research Ethics Committee of the Graduate School of Integrated Arts and Sciences in Hiroshima University approved the experimental protocol.</p>
</sec>
<sec><title>Stimuli and Task</title>
<p>The present study used the same materials that were used in <xref ref-type="bibr" rid="B35">Kuribayashi et al. (2014)</xref>. The first 200-s portion of <italic>French Suite No. 5</italic> by J. S. Bach (on cembalo, 24-bit quantization, 192 kHz A/D sampling) was selected. In the present study, this portion was played twice to produce a 400-s excerpt. The original (full-range) excerpt is rich in high-frequency components. A high-cut version of the excerpt was produced by removing such components using a low-pass finite impulse response digital filter with a very steep slope (cutoff = 20 kHz, slope = &#x2013;1,673 dB/oct). This linear-phase filter does not cause any phase distortion. Although the filter produces very small ripples (1.04 &#x00D7; 10<sup>-2</sup> dB), they are negligible and unlikely to affect auditory perception. Sounds were amplified using an AI-501DA amplifier (TEAC Corporation, Tokyo, Japan) controlled by dedicated software on a laptop PC. Two loudspeakers with high-frequency tweeters (PM1; Bowers &#x0026; Wilkins, Worthing, England) were located 1.5 m diagonally forward from the listening position. The sound pressure level was set at approximately 70 dB (A). Calibration measurements at the listening position ensured that the full-range excerpt contained abundant high-frequency components and that the high-frequency power of the high-cut excerpt (i.e., components over 20 kHz) did not differ from that of background noise. The average power spectra of the excerpts are available at <ext-link ext-link-type="uri" xlink:href="http://links.lww.com/WNR/A279">http://links.lww.com/WNR/A279</ext-link> as Supplemental Digital Content of <xref ref-type="bibr" rid="B35">Kuribayashi et al. (2014)</xref>.</p>
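<p>A linear-phase FIR low-pass filter of the kind described can be sketched with SciPy; the tap count and default window below are illustrative assumptions, not the study&#x2019;s actual filter design, whose slope (&#x2013;1,673 dB/oct) was far steeper:</p>

```python
import numpy as np
from scipy import signal

# Sketch of a linear-phase FIR low-pass filter: cutoff 20 kHz at a
# 192-kHz sampling rate, as used to create the high-cut excerpt.
# NTAPS and the window are illustrative, not the original design.
FS = 192_000
CUTOFF = 20_000
NTAPS = 2001  # odd length -> Type I linear-phase FIR

taps = signal.firwin(NTAPS, CUTOFF, fs=FS)

# Symmetric coefficients are what guarantee linear phase
# (constant group delay, hence no phase distortion).
assert np.allclose(taps, taps[::-1])

# Apply the filter to 0.1 s of placeholder noise standing in for audio.
x = np.random.default_rng(0).standard_normal(19_200)
y = signal.lfilter(taps, 1.0, x)
```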
<p>An equiprobable visual Go/NoGo task was conducted using a cathode ray tube (CRT) computer monitor (refresh rate = 100 Hz) in front of participants. A block consisted of 120 visual stimuli: 60 targets (either &#x2018;T&#x2019; or &#x2018;V&#x2019;, 30 each) and 60 non-targets (&#x2018;O&#x2019;) in a randomized order. The visual stimuli were 200 ms in duration and presented with a mean stimulus onset asynchrony (SOA) of 5 s (range = 3&#x2013;7 s). Button-press responses with the left and right index fingers were required for &#x2018;T&#x2019; and &#x2018;V&#x2019; (or &#x2018;V&#x2019; and &#x2018;T&#x2019;), respectively, as quickly and accurately as possible.</p>
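<p>The trial structure of one task block can be sketched as follows (illustrative only; this is not the original stimulus-presentation software):</p>

```python
import random

# Sketch of one block of the visual Go/NoGo task as described:
# 120 letters (30 'T', 30 'V', 60 'O') in randomized order, each shown
# for 200 ms, with stimulus onset asynchrony jittered between 3 and 7 s
# (mean 5 s).

def make_block(rng):
    """Return a list of (letter, soa_seconds) pairs for one block."""
    letters = ['T'] * 30 + ['V'] * 30 + ['O'] * 60
    rng.shuffle(letters)
    return [(letter, rng.uniform(3.0, 7.0)) for letter in letters]

block = make_block(random.Random(42))
```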
</sec>
<sec><title>Procedure</title>
<p>The study was conducted using a double-blind method. Participants listened to two versions of the 400-s musical excerpt (with or without high-frequency components) while performing the Go/NoGo task. Participants also performed the task under silent conditions for 100 s before and after music presentation (pre- and post-music periods). The presentation order of the two excerpts was counterbalanced across the participants. EEG, HR, and facial EMGs were recorded during task performance. After listening to each excerpt, participants completed a sound quality questionnaire consisting of 10 pairs of adjectives and then reported their mood states on the Affect Grid (<xref ref-type="bibr" rid="B53">Russell et al., 1989</xref>) and multiple mood scales (<xref ref-type="bibr" rid="B59">Terasaki et al., 1992</xref>). At the end of the experiment, participants judged which excerpt contained high-frequency components by making a binary choice between them.</p>
</sec>
<sec><title>Physiological Recording</title>
<p>Psychophysiological measures were recorded with a sampling rate of 1000 Hz using QuickAmp (Brain Products, Gilching, Germany). Filter bandpass was DC to 200 Hz. EEG was recorded from 34 scalp electrodes (Fp1/2, Fz, F3/4, F7/8, FC1/2, FC5/6, FT9/10, Cz, T7/8, C3/4, CP1/2, CP5/6, TP9/10, Pz, P3/4, P7/8, PO9/10, Oz, O1/2) according to the extended 10&#x2013;20 system. Four additional electrodes (supra-orbital and infra-orbital ridges of the right eye and outer canthi) were used to monitor eye movements and blinks. EEG data were recorded using the average reference online and re-referenced to the digitally linked earlobes (A1&#x2013;A2) offline. EEG data were resampled at 250 Hz and were filtered offline (1&#x2013;60 Hz band pass, 24 dB/oct for EEG analysis; 0.1&#x2013;60 Hz band pass, 24 dB/oct for ERP analysis). Ocular artifacts were corrected using a semi-automatic independent component analysis method implemented on Brain Vision Analyzer 2.04 (Brain Products). The components that were easily identifiable as artifacts related to blinks and eye movements were removed.</p>
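<p>The offline re-referencing step can be illustrated with a short sketch; the channel names and the (channels &#x00D7; samples) array layout are assumptions of this sketch, not the actual recording montage:</p>

```python
import numpy as np

# Sketch of offline re-referencing: data recorded against the average
# reference are re-expressed against digitally linked earlobes (A1-A2)
# by subtracting the mean of the two earlobe channels from every channel.

def rereference_linked_earlobes(data, ch_names, ref=('A1', 'A2')):
    """data: (n_channels, n_samples) array; ch_names: list of labels."""
    ref_signal = (data[ch_names.index(ref[0])] + data[ch_names.index(ref[1])]) / 2.0
    return data - ref_signal

# Toy example with three channels and two samples.
names = ['Pz', 'A1', 'A2']
data = np.array([[1.0, 2.0], [0.5, 0.5], [0.5, 1.5]])
reref = rereference_linked_earlobes(data, names)
```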
<p>Heart rate was measured by recording electrocardiograms from the left ankle and the right hand. The R&#x2013;R intervals were calculated and converted into HR in bpm. For facial EMGs, electrical activities over the zygomaticus major and corrugator supercilii regions were recorded using bipolar electrodes affixed above the left brow and on the left cheek, respectively (<xref ref-type="bibr" rid="B17">Fridlund and Cacioppo, 1986</xref>). The EMG data were filtered offline (15 Hz high-pass, 12 dB/oct) and fully rectified (<xref ref-type="bibr" rid="B36">Larsen et al., 2003</xref>).</p>
</sec>
<sec><title>Data Reduction and Statistical Analysis</title>
<p>A total of 600 s (including silent periods) was divided into six 100-s epochs. For EEG analysis, each 100-s epoch was divided into 97 2.048-s segments with 1.024 s overlap. Power spectrum was calculated by Fast Fourier Transform with a Hanning window. The total powers (&#x03BC;V<sup>2</sup>) of the following frequency bands were calculated: Delta (1&#x2013;4 Hz), theta (4&#x2013;8 Hz), low-alpha (8&#x2013;10.5 Hz), high-alpha (10.5&#x2013;13 Hz), low-beta (13&#x2013;20 Hz), high-beta (20&#x2013;30 Hz), and gamma (36&#x2013;44 Hz). The square root of the total power (&#x03BC;V) was used for statistical analysis, following the procedure of previous studies (<xref ref-type="bibr" rid="B47">Oohashi et al., 2000</xref>, <xref ref-type="bibr" rid="B46">2006</xref>; <xref ref-type="bibr" rid="B35">Kuribayashi et al., 2014</xref>). The scalp electrode sites were grouped into four regions: Anterior Left (AL: Fp1, F3, F7, FC1, FC5, FT9), Anterior Right (AR: Fp2, F4, F8, FC2, FC6, FT10), Posterior Left (PL: CP1, CP5, TP9, P3, P7, PO9, O1), and Posterior Right (PR: CP2, CP6, TP10, P4, P8, PO10, O2). For RT, EMG, and HR, the mean values of each 100-s epoch were calculated. Mean RT and EMG values were log-transformed before statistical analysis.</p>
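<p>The band-power computation described above can be sketched as follows; the segment length, overlap, and Hanning window match the description, while the exact power scaling is an assumption of this sketch:</p>

```python
import numpy as np

FS = 250                # Hz, the offline EEG resampling rate
SEG = int(2.048 * FS)   # 512-sample (2.048-s) segments
STEP = int(1.024 * FS)  # 1.024-s overlap between successive segments

def band_amplitude(x, lo_hz, hi_hz, fs=FS):
    """Square root of mean band power over Hanning-windowed,
    half-overlapping FFT segments of a 1-D EEG epoch."""
    win = np.hanning(SEG)
    freqs = np.fft.rfftfreq(SEG, d=1.0 / fs)
    band = (freqs >= lo_hz) & (freqs < hi_hz)
    n_segs = (len(x) - SEG) // STEP + 1
    powers = []
    for i in range(n_segs):
        spec = np.abs(np.fft.rfft(x[i * STEP:i * STEP + SEG] * win)) ** 2
        powers.append(spec[band].sum())
    return np.sqrt(np.mean(powers))

# A 100-s synthetic epoch containing a pure 11-Hz oscillation:
# its power should land in the high-alpha band, not the low-beta band.
t = np.arange(100 * FS) / FS
x = np.sin(2 * np.pi * 11.0 * t)
alpha = band_amplitude(x, 10.5, 13.0)  # high-alpha (10.5-13 Hz)
beta = band_amplitude(x, 13.0, 20.0)   # low-beta (13-20 Hz)
```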
<p>Heart rate variability analysis was done by using Kubios HRV 2.2 (<xref ref-type="bibr" rid="B58">Tarvainen et al., 2014</xref>). The last 300-s (5-min) epoch of the 400-s listening period was selected according to previously established guidelines (<xref ref-type="bibr" rid="B6">Berntson et al., 1997</xref>). Prior to spectrum estimation, the R&#x2013;R interval series was converted to an equidistantly sampled series via piecewise cubic spline interpolation. The spectrum was estimated using an autoregressive modeling-based method. The total powers (ms<sup>2</sup>) were calculated for the LF (0.04&#x2013;0.15 Hz) and HF (0.15&#x2013;0.4 Hz) bands, and the LF/HF power ratio was obtained. The square roots of LF and HF (ms) were used for statistical analysis.</p>
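<p>A minimal sketch of this HRV pipeline on synthetic R&#x2013;R data is shown below; Welch&#x2019;s method stands in here for the autoregressive spectrum estimator used by Kubios, and all numeric values are illustrative:</p>

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import welch

# Sketch of the HRV pipeline: R-R intervals -> evenly resampled series
# (piecewise cubic spline) -> LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz)
# power and the LF/HF ratio. Welch's method replaces the AR estimator.

rng = np.random.default_rng(1)
rr_s = 0.8 + 0.05 * rng.standard_normal(400)  # synthetic R-R intervals (s)
beat_times = np.cumsum(rr_s)                  # ~320 s of beats (>5 min)

FS_RESAMPLE = 4.0  # Hz; a common resampling rate for HRV spectra
t = np.arange(beat_times[0], beat_times[-1], 1.0 / FS_RESAMPLE)
rr_even = CubicSpline(beat_times, rr_s * 1000.0)(t)  # ms, evenly sampled

freqs, psd = welch(rr_even - rr_even.mean(), fs=FS_RESAMPLE, nperseg=256)
df = freqs[1] - freqs[0]
lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * df  # ms^2
hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum() * df  # ms^2
ratio = lf / hf
```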
<p>For ERP analysis, the total 400-s listening period was divided into two 200-s epochs to secure a reasonable number of Go trials (around 20). Silent periods (pre- and post-music epochs) were not included in the calculation. Those trials found to have Go omissions, Go misses (incorrect hand response to &#x2018;T&#x2019; or &#x2018;V&#x2019;), or NoGo responses (commission errors) were excluded from further processing steps. Go and NoGo responses were separately averaged to produce ERPs. Epochs (from 200 ms before stimulus presentation to 1000 ms after it) were baseline corrected (&#x2013;200 to 0 ms). The peak of the P3 wave was identified within a latency range of 350&#x2013;500 ms at Pz, where P3 amplitude is topographically dominant.</p>
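<p>The averaging, baseline-correction, and P3 peak-detection steps can be sketched as follows; the array layout and the synthetic demonstration data are assumptions of this sketch, not the study&#x2019;s actual analysis code:</p>

```python
import numpy as np

FS = 250  # Hz, the offline EEG resampling rate

def erp_with_p3(epochs):
    """Average single-trial epochs spanning -200..1000 ms (sample 0 =
    -200 ms), baseline-correct to the pre-stimulus interval, and find
    the P3 peak within 350-500 ms. epochs: (n_trials, n_samples)."""
    erp = epochs.mean(axis=0)
    t_ms = np.arange(erp.size) / FS * 1000.0 - 200.0
    erp = erp - erp[t_ms < 0].mean()             # baseline: -200 to 0 ms
    win = (t_ms >= 350.0) & (t_ms <= 500.0)
    peak = np.flatnonzero(win)[np.argmax(erp[win])]
    return erp, t_ms[peak], erp[peak]            # waveform, latency, amplitude

# Synthetic demonstration: 20 trials with a P3-like bump at 400 ms.
t_ms_full = np.arange(300) / FS * 1000.0 - 200.0
template = 5.0 * np.exp(-((t_ms_full - 400.0) / 60.0) ** 2)
trials = template + 0.5 * np.random.default_rng(0).standard_normal((20, 300))
erp, p3_lat, p3_amp = erp_with_p3(trials)
```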
<p>Each measure was subjected to a repeated measures analysis of variance (ANOVA) with sound type (full-range vs. high-cut) and epoch (pre-music, 0-100, 100-200, 200-300, 300-400 s, and post-music for EEG data; 0-200 and 200-400 s for ERP data) as factors. To compensate for possible type I error inflation by the violation of sphericity, multivariate ANOVA solutions are reported (<xref ref-type="bibr" rid="B61">Vasey and Thayer, 1987</xref>). The significance level was set at 0.05. For <italic>post hoc</italic> multiple comparisons of means, the comparison-wise level of significance was determined by the Bonferroni method.</p>
</sec>
</sec>
<sec><title>Results</title>
<sec><title>EEG Measures</title>
<p><bold>Figure <xref ref-type="fig" rid="F1">1</xref></bold> shows the EEG amplitude spectrogram for the four regions in the silent conditions (pre- and post-music periods). Although participants were performing a visual vigilance task with their eyes open, a peak around 10 Hz appears clearly. The amplitude of this peak appears larger after listening to music, in particular after listening to the full-range version of the musical piece.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p><bold>EEG amplitude spectrogram for the full-range and high-cut conditions in the silent periods (100-s epochs before and after listening to music).</bold> The amplitude of the peak around 10 Hz was increased after listening to music, in particular after listening to the full-range version of the sound source that contains high-frequency components.</p></caption>
<graphic xlink:href="fpsyg-08-00093-g001.tif"/>
</fig>
<p><bold>Figure <xref ref-type="fig" rid="F2">2</xref></bold> shows the time course and scalp topography of high-alpha EEG (10.5&#x2013;13 Hz) and low-beta EEG (13&#x2013;20 Hz) bands. For EEG measures, a Sound Type &#x00D7; Epoch &#x00D7; Anterior-Posterior &#x00D7; Hemisphere ANOVA was conducted for each frequency band. Significant effects of sound type were found for both bands. For other frequency bands, only the theta EEG band (4-8 Hz) power showed a significant Sound Type &#x00D7; Anterior-Posterior &#x00D7; Hemisphere interaction, <italic>F</italic>(1,21) = 5.37, <italic>p</italic> = 0.031, <inline-formula><mml:math id="M1"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.20. However, no significant simple main effects were found.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p><bold>High-alpha (10.5&#x2013;13 Hz) and low-beta (13&#x2013;20 Hz) EEG powers for the full-range and high-cut conditions.</bold> <bold>(A)</bold> Time course of the square root of EEG powers over four scalp regions. Error bars show SEs. No music was played in the pre- and post-music epochs. <bold>(B,C)</bold> Scalp topography of the high-alpha and low-beta EEG powers in the pre-music, 300&#x2013;400 s (i.e., last quarter of the listening period), and post-music epochs. Left: top view. Right: back view.</p></caption>
<graphic xlink:href="fpsyg-08-00093-g002.tif"/>
</fig>
<p>For high-alpha EEG band, the Sound Type &#x00D7; Epoch &#x00D7; Hemisphere interaction was significant, <italic>F</italic>(5,17) = 7.06, <italic>p</italic> = 0.001, <inline-formula><mml:math id="M2"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.67. Separate ANOVAs for each epoch revealed a significant Sound Type &#x00D7; Hemisphere interaction at the 200-300-s epoch, <italic>F</italic>(1,21) = 12.63, <italic>p</italic> = 0.002, <inline-formula><mml:math id="M3"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.38, and a significant effect of sound type at the post-music period, <italic>F</italic>(1,21) = 6.99, <italic>p</italic> = 0.015, <inline-formula><mml:math id="M4"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.25. <italic>Post hoc</italic> tests revealed that high-alpha EEG power was greater for the full-range excerpt than for the high-cut excerpt and that the sound type effect was found for the left but not right hemisphere at the 200-300-s epoch. No effects of sound type were obtained at the epochs before 200 s. 
The main effect of anterior-posterior was also significant, <italic>F</italic>(1,21) = 15.67, <italic>p</italic> = 0.001, <inline-formula><mml:math id="M5"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.43, showing that the high-alpha EEG was dominant over posterior scalp sites.</p>
<p>For the low-beta EEG band, the Sound Type &#x00D7; Anterior-Posterior &#x00D7; Hemisphere interaction and the main effect of sound type were significant, <italic>F</italic>(1,21) = 4.49, <italic>p</italic> = 0.046, <inline-formula><mml:math id="M6"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.18; <italic>F</italic>(1,21) = 5.43, <italic>p</italic> = 0.030, <inline-formula><mml:math id="M7"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.21. Low-beta EEG power was greater in the full-range condition than in the high-cut condition. Separate ANOVAs for anterior-posterior and hemisphere also revealed significant effects of sound type, for the posterior region: <italic>F</italic>(1,21) = 7.07, <italic>p</italic> = 0.015, <inline-formula><mml:math id="M8"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.25; for the left hemisphere: <italic>F</italic>(1,21) = 5.26, <italic>p</italic> = 0.032, <inline-formula><mml:math id="M9"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.20; for the right hemisphere: <italic>F</italic>(1,21) = 5.27, <italic>p</italic> = 0.032, <inline-formula><mml:math id="M10"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' 
mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.20; the effect did not reach significance for the anterior region: <italic>F</italic>(1,21) = 3.94, <italic>p</italic> = 0.060, <inline-formula><mml:math id="M11"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.16. Although there were no significant interaction effects involving epoch, <bold>Figure <xref ref-type="fig" rid="F2">2</xref></bold> shows that the difference between the full-range and high-cut excerpts appears more prominent at later epochs. Two-tailed <italic>t</italic>-tests revealed significant differences between the two excerpts at the 200-300-s, 300-400-s, and post-music epochs, <italic>t</italic>s(21) > 2.37, <italic>p</italic>s &#x003C; 0.027; <italic>p</italic>s > 0.114 at the epochs before 200 s.</p>
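The partial eta-squared values reported above can be recovered directly from each F statistic and its degrees of freedom. A minimal sketch (the formula is the standard definition of the effect size; the code is illustrative, not the authors' analysis script):

```python
# Sketch: recovering reported partial eta-squared effect sizes from the
# F statistics and degrees of freedom given in the text. The formula is
# the standard definition; the code is illustrative only.

def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Sound Type x Epoch x Hemisphere interaction, F(5,17) = 7.06
print(round(partial_eta_squared(7.06, 5, 17), 2))   # 0.67
# Sound Type x Hemisphere interaction at 200-300 s, F(1,21) = 12.63
print(round(partial_eta_squared(12.63, 1, 21), 2))  # 0.38
```

Applying the same formula to the other reported F values reproduces the remaining effect sizes in this section.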
</sec>
<sec><title>Grand Mean ERPs for the Visual Vigilance Task</title>
<p><bold>Figure <xref ref-type="fig" rid="F3">3</xref></bold> shows grand mean ERP waveforms and the scalp topography of the Go and NoGo P3 amplitudes. The mean number of averaged trials was 18.6 (range = 13-20). Although this is fewer than the optimal number of averages for P3 (<xref ref-type="bibr" rid="B10">Cohen and Polich, 1997</xref>), P3 peaks could be detected in all individual ERP waveforms. <bold>Table <xref ref-type="table" rid="T1">1</xref></bold> shows the mean amplitudes and latencies of the P3 peaks.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p><bold>Grand mean event-related potential (ERP) waveforms for Go and NoGo stimuli in the full-range and high-cut conditions.</bold> <bold>(A)</bold> Waveforms at Pz. <bold>(B)</bold> Scalp topography of P3 amplitudes in the 0&#x2013;200 s (i.e., first half) and 200&#x2013;400 s (i.e., second half) epochs of music listening. The mean amplitudes of 380&#x2013;440 ms after stimulus onset in the grand mean ERP waveforms are shown.</p></caption>
<graphic xlink:href="fpsyg-08-00093-g003.tif"/>
</fig>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>The peak amplitudes and latencies of the P3 component at Pz.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left" colspan="2"></td>
<th valign="top" align="center" colspan="2">High-cut</th>
<th valign="top" align="center" colspan="2">Full-range</th>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left"></td>
<td valign="top" align="left" colspan="2"><hr/></td>
<td valign="top" align="left" colspan="2"><hr/></td>
</tr>
<tr>
<td valign="top" align="left" colspan="2"></td>
<th valign="top" align="center">0&#x2013;200 s</th>
<th valign="top" align="center">200&#x2013;400 s</th>
<th valign="top" align="center">0&#x2013;200 s</th>
<th valign="top" align="center">200&#x2013;400 s</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Go</td>
<td valign="top" align="left">Amplitude (&#x03BC;V)</td>
<td valign="top" align="center">18.3 (6.4)</td>
<td valign="top" align="center">18.0 (5.7)</td>
<td valign="top" align="center">16.7 (6.3)</td>
<td valign="top" align="center">18.6 (6.3)</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left">Latency (ms)</td>
<td valign="top" align="center">404.9 (37.2)</td>
<td valign="top" align="center">419.5 (42.2)</td>
<td valign="top" align="center">405.1 (34.8)</td>
<td valign="top" align="center">420.2 (41.7)</td>
</tr>
<tr>
<td valign="top" align="left">NoGo</td>
<td valign="top" align="left">Amplitude (&#x03BC;V)</td>
<td valign="top" align="center">16.3 (6.6)</td>
<td valign="top" align="center">13.1 (5.3)</td>
<td valign="top" align="center">15.1 (6.9)</td>
<td valign="top" align="center">14.0 (7.1)</td>
</tr>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="left">Latency (ms)</td>
<td valign="top" align="center">408.0 (35.0)</td>
<td valign="top" align="center">412.2 (35.7)</td>
<td valign="top" align="center">417.8 (41.2)</td>
<td valign="top" align="center">413.5 (27.9)</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<attrib><italic>Values in parentheses are SD.</italic></attrib>
</table-wrap-foot>
</table-wrap>
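As a rough illustration of how the peak measures in Table 1 could be obtained, the sketch below scores the largest positive deflection in a post-stimulus search window. The sampling rate and window limits are hypothetical assumptions for illustration, not parameters reported in the study:

```python
import numpy as np

# Hypothetical sketch of P3 peak scoring: find the largest positive
# deflection in a post-stimulus search window. FS and WINDOW_MS are
# assumed values, not parameters from the original study.
FS = 500                 # sampling rate, Hz (assumed)
WINDOW_MS = (300, 500)   # P3 search window, ms (assumed)

def p3_peak(erp_uv, fs=FS, window_ms=WINDOW_MS):
    """Return (peak amplitude in microvolts, peak latency in ms)."""
    lo = int(window_ms[0] * fs / 1000)
    hi = int(window_ms[1] * fs / 1000)
    seg = erp_uv[lo:hi]
    i = int(np.argmax(seg))
    return float(seg[i]), (lo + i) * 1000.0 / fs

# Toy waveform: a Gaussian positivity peaking at 400 ms with 18 microvolts,
# roughly matching the amplitudes and latencies in Table 1.
t = np.arange(0, 0.8, 1.0 / FS)
erp = 18.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
amp, lat = p3_peak(erp)
```

In practice such peak picking is applied to each participant's averaged waveform at Pz before the amplitudes and latencies are entered into the ANOVAs below.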
<p>For the P3 amplitude, a Sound Type &#x00D7; Epoch ANOVA was conducted for Go and NoGo stimulus conditions separately. A significant interaction was found for the Go condition, <italic>F</italic>(1,21) = 4.39, <italic>p</italic> = 0.049, <inline-formula><mml:math id="M12"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.17, but not for the NoGo condition, <italic>F</italic>(1,21) = 2.64, <italic>p</italic> = 0.119, <inline-formula><mml:math id="M13"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.11. <italic>Post hoc</italic> tests revealed that Go P3 amplitude increased from the 0-200 s to the 200-400 s epoch for the full-range excerpt, whereas Go P3 amplitude did not change for the high-cut excerpt. The main effect of epoch was significant for the NoGo condition, <italic>F</italic>(1,21) = 13.39, <italic>p</italic> = 0.001, <inline-formula><mml:math id="M14"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.39, showing that NoGo P3 amplitude decreased during the task for both musical excerpts.</p>
<p>Similar ANOVAs were conducted for latencies. No significant main or interaction effects of sound type were found. The main effect of epoch was significant for the Go stimulus condition, <italic>F</italic>(1,21) = 5.01, <italic>p</italic> = 0.036, <inline-formula><mml:math id="M15"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.19, showing that Go P3 latency increased through the task.</p>
<p>One of the reviewers asked about the effects of sound type on the NoGo N2 (<xref ref-type="bibr" rid="B15">Falkenstein et al., 1999</xref>). We conducted a Sound Type &#x00D7; Epoch ANOVA on the amplitude of the NoGo N2 (the NoGo minus Go difference in the 200&#x2013;300 ms period at Fz and Cz). No significant main or interaction effects were found.</p>
</sec>
<sec><title>Behavioral and Other Physiological Measures</title>
<p>Participants performed the vigilance task with high accuracy (high-cut: <italic>M</italic> = 98.6%, range = 95.8-100%; full-range: <italic>M</italic> = 97.9%, range = 95.0-99.2%). <bold>Figure <xref ref-type="fig" rid="F4">4</xref></bold> shows the time course of the mean Go reaction times, HR, and facial EMGs (corrugator supercilii, zygomaticus major), as well as the HRV components for the last 300-s epoch of the musical excerpts. For the corrugator supercilii, a Sound Type &#x00D7; Epoch ANOVA showed a significant main effect of epoch, <italic>F</italic>(5,17) = 5.69, <italic>p</italic> = 0.003, <inline-formula><mml:math id="M16"><mml:msubsup><mml:mi mathvariant='normal' mathcolor='black'>&#x03b7;</mml:mi><mml:mi mathvariant='normal' mathcolor='black'>p</mml:mi><mml:mn mathvariant='normal' mathcolor='black'>2</mml:mn></mml:msubsup></mml:math></inline-formula> = 0.63. Corrugator activity increased over the course of the task. No significant main or interaction effects of sound type were found for RT or the other physiological measures.</p>
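For readers unfamiliar with the HRV components shown in Figure 4B, a minimal sketch of conventional frequency-domain HRV analysis follows. The band limits (LF 0.04-0.15 Hz, HF 0.15-0.40 Hz) and the 4-Hz resampling rate follow common convention and are assumptions here, not a description of the study's actual pipeline:

```python
import numpy as np
from scipy.signal import welch

# Illustrative sketch of frequency-domain HRV analysis. Band limits and
# resampling rate follow common convention (assumed, not from the study).
def hrv_band_powers(rr_s, resample_hz=4.0):
    """Return (LF, HF) power from a series of RR intervals in seconds."""
    beat_times = np.cumsum(rr_s)
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / resample_hz)
    tachogram = np.interp(grid, beat_times, rr_s)  # evenly resampled RR series
    freqs, psd = welch(tachogram - tachogram.mean(), fs=resample_hz, nperseg=256)
    df = freqs[1] - freqs[0]
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * df
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum() * df
    return lf, hf

# Toy tachogram: 0.8-s beats with a 0.30-Hz (respiratory-range) modulation,
# so HF power should dominate LF power.
beat_clock = 0.8 * np.arange(1, 401)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.30 * beat_clock)
lf, hf = hrv_band_powers(rr)
```

The LF/HF ratio reported in such analyses is simply the quotient of these two band powers.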
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p><bold>(A)</bold> Time course of the mean Go reaction times, Heart rate (HR), and facial electromyograms (EMGs; corrugator supercilii and zygomaticus major) for the full-range and high-cut conditions. <bold>(B)</bold> Amplitudes and ratio of heart rate variability (HRV) components for the last 300-s epoch of musical excerpts. Error bars show SEs.</p></caption>
<graphic xlink:href="fpsyg-08-00093-g004.tif"/>
</fig>
</sec>
<sec><title>Subjective Ratings</title>
<p><bold>Table <xref ref-type="table" rid="T2">2</xref></bold> shows the mean scores for participants&#x2019; mood states. A significant difference between the two types of musical excerpt was found only for inactive pleasantness scores, <italic>t</italic>(21) = 3.13, <italic>p</italic> = 0.005. Participants reported higher inactive pleasantness for the full-range than for the high-cut excerpt. <bold>Figure <xref ref-type="fig" rid="F5">5</xref></bold> shows the mean sound quality ratings for the full-range and high-cut musical excerpts. No significant differences were found between the two types of audio source for any adjective pair, <italic>t</italic>s(21) &#x003C; 1.92, <italic>p</italic>s > 0.069. The correct rate of the forced choices was 41.0%, which did not differ from chance level (<italic>p</italic> = 0.523, binomial test).</p>
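The forced-choice analysis above can be sketched as an exact two-tailed binomial test against chance (0.5). The counts below (9 correct responses out of the 22 participants) are an assumption chosen for illustration; under that assumption the sketch reproduces the reported p = 0.523:

```python
from math import comb

# Exact two-tailed binomial test against chance (p = 0.5). The counts
# (9 of 22) are an assumed illustration, not confirmed from the paper.
def binom_test_two_tailed(k, n):
    pmf = [comb(n, i) / 2.0 ** n for i in range(n + 1)]
    # sum the probabilities of all outcomes at least as unlikely as k
    return sum(p for p in pmf if p <= pmf[k] + 1e-12)

p = binom_test_two_tailed(9, 22)
print(round(p, 3))  # 0.523
```

Because the null distribution is symmetric at p = 0.5, this is equivalent to doubling the one-tailed probability of observing 9 or fewer correct choices.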
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Mean scores and the results of two-tailed <italic>t</italic>-tests for participants&#x2019; mood states.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left"></td>
<th valign="top" align="center">High-cut</th>
<th valign="top" align="center">Full-range</th>
<th valign="top" align="center"><italic>t</italic>(21)</th>
<th valign="top" align="center"><italic>p</italic></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Affect Grid (9-point scale, 1&#x2013;9)</td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
</tr>
<tr>
<td valign="top" align="left">&#x00A0;&#x00A0;&#x00A0;&#x00A0;Pleasantness</td>
<td valign="top" align="center">6.3 (1.4)</td>
<td valign="top" align="center">6.4 (1.0)</td>
<td valign="top" align="center">0.32</td>
<td valign="top" align="center">0.754</td>
</tr>
<tr>
<td valign="top" align="left">&#x00A0;&#x00A0;&#x00A0;&#x00A0;Arousal</td>
<td valign="top" align="center">4.9 (2.3)</td>
<td valign="top" align="center">4.4 (2.1)</td>
<td valign="top" align="center">1.15</td>
<td valign="top" align="center">0.264</td>
</tr>
<tr>
<td valign="top" align="left">Multiple mood scale (4-point scale, 1&#x2013;4)</td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
<td valign="top" align="center"></td>
</tr>
<tr>
<td valign="top" align="left">&#x00A0;&#x00A0;&#x00A0;&#x00A0;Depression</td>
<td valign="top" align="center">1.5 (0.5)</td>
<td valign="top" align="center">1.5 (0.5)</td>
<td valign="top" align="center">0.00</td>
<td valign="top" align="center">1.000</td>
</tr>
<tr>
<td valign="top" align="left">&#x00A0;&#x00A0;&#x00A0;&#x00A0;Aggression</td>
<td valign="top" align="center">1.1 (0.3)</td>
<td valign="top" align="center">1.1 (0.3)</td>
<td valign="top" align="center">0.40</td>
<td valign="top" align="center">0.690</td>
</tr>
<tr>
<td valign="top" align="left">&#x00A0;&#x00A0;&#x00A0;&#x00A0;Fatigue</td>
<td valign="top" align="center">1.9 (0.6)</td>
<td valign="top" align="center">1.8 (0.5)</td>
<td valign="top" align="center">1.27</td>
<td valign="top" align="center">0.219</td>
</tr>
<tr>
<td valign="top" align="left">&#x00A0;&#x00A0;&#x00A0;&#x00A0;Active pleasantness</td>
<td valign="top" align="center">1.8 (0.6)</td>
<td valign="top" align="center">1.8 (0.5)</td>
<td valign="top" align="center">0.09</td>
<td valign="top" align="center">0.929</td>
</tr>
<tr>
<td valign="top" align="left">&#x00A0;&#x00A0;&#x00A0;&#x00A0;Inactive pleasantness</td>
<td valign="top" align="center">2.7 (0.6)</td>
<td valign="top" align="center">3.0 (0.6)</td>
<td valign="top" align="center">3.13</td>
<td valign="top" align="center">0.005</td>
</tr>
<tr>
<td valign="top" align="left">&#x00A0;&#x00A0;&#x00A0;&#x00A0;Affinity</td>
<td valign="top" align="center">1.7 (0.6)</td>
<td valign="top" align="center">1.7 (0.7)</td>
<td valign="top" align="center">0.66</td>
<td valign="top" align="center">0.518</td>
</tr>
<tr>
<td valign="top" align="left">&#x00A0;&#x00A0;&#x00A0;&#x00A0;Concentration</td>
<td valign="top" align="center">2.1 (0.5)</td>
<td valign="top" align="center">2.2 (0.6)</td>
<td valign="top" align="center">1.50</td>
<td valign="top" align="center">0.148</td>
</tr>
<tr>
<td valign="top" align="left">&#x00A0;&#x00A0;&#x00A0;&#x00A0;Startle</td>
<td valign="top" align="center">1.5 (0.5)</td>
<td valign="top" align="center">1.4 (0.5)</td>
<td valign="top" align="center">0.86</td>
<td valign="top" align="center">0.400</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<attrib><italic>Values in parentheses are SD.</italic></attrib>
</table-wrap-foot>
</table-wrap>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p><bold>Mean sound quality ratings for two musical excerpts with or without inaudible high-frequency components</bold>.</p></caption>
<graphic xlink:href="fpsyg-08-00093-g005.tif"/>
</fig>
</sec>
</sec>
<sec><title>Discussion</title>
<p>High-resolution audio with inaudible high-frequency components replicates real sounds more closely than similar, indistinguishable sounds in which these components are artificially cut off. It remains unclear, however, what advantages high-resolution audio might have for human listeners. Previous studies in which participants listened to high-resolution music under resting conditions have shown that alpha and low-beta EEG powers were larger for an excerpt with high-frequency components than for an excerpt without them (<xref ref-type="bibr" rid="B47">Oohashi et al., 2000</xref>, <xref ref-type="bibr" rid="B46">2006</xref>; <xref ref-type="bibr" rid="B65">Yagi et al., 2003a</xref>; <xref ref-type="bibr" rid="B20">Fukushima et al., 2014</xref>; <xref ref-type="bibr" rid="B35">Kuribayashi et al., 2014</xref>; <xref ref-type="bibr" rid="B23">Ito et al., 2016</xref>). The present study asked participants to listen to two types of high-resolution audio of the same musical piece (with or without inaudible high-frequency components) while performing a vigilance task in the visual modality. Although the effect size is small, the overall results support the view that the effect of high-resolution audio with inaudible high-frequency components on brain activity reflects a relaxed attentional state without conscious awareness.</p>
<p>We found greater high-alpha (10.5&#x2013;13 Hz) and low-beta (13&#x2013;20 Hz) EEG powers for the excerpt with high-frequency components than for the excerpt without them. The effect appeared in the latter half of the listening period (200-400 s) and during the 100-s period after music presentation (post-music epoch). Furthermore, compared with the high-cut sounds, the full-range sounds elicited a larger Go P3 amplitude and higher subjective relaxation scores. Because task performance did not differ between the musical excerpts and self-reported arousal showed no difference, the effects of high-resolution audio with inaudible high-frequency components on brain activity are unlikely to reflect a decrease in listeners&#x2019; arousal level. These findings suggest that listeners experience a relaxed attentional state when listening to high-resolution audio with inaudible high-frequency components, compared with similar sounds without these components.</p>
<p>It has been shown that listening to musical pieces increases EEG power in the theta, alpha, and beta bands (<xref ref-type="bibr" rid="B49">Pavlygina et al., 2004</xref>; <xref ref-type="bibr" rid="B24">J&#x00E4;ncke et al., 2015</xref>), and that the enhanced alpha-band power persists for approximately 100 s after listening (<xref ref-type="bibr" rid="B54">Sanyal et al., 2013</xref>). Therefore, high-resolution audio with inaudible high-frequency components would be advantageous, in terms of enhanced brain activity, over similar digital audio in which these components are removed. <xref ref-type="bibr" rid="B33">Kuribayashi and Nittono (2014)</xref> localized the intracerebral sources of this alpha EEG effect using standardized low-resolution brain electromagnetic tomography (sLORETA). The analysis revealed that the difference between full-range and high-cut sounds appeared in the right inferior temporal cortex, whereas the main source of the alpha-band activity was located in the parietal-occipital region. The finding that the alpha-band activity difference was obtained in specific rather than whole-brain regions suggests that this increase may reflect activity related to task performance rather than a global arousal effect (<xref ref-type="bibr" rid="B3">Barry et al., 2007</xref>).</p>
<p>The present study shows that not only high-alpha and low-beta EEG powers but also P3 amplitude increased in the last half of the listening period (200-400 s). Alpha-band EEG activity and P3 amplitude have been shown to be positively correlated, in such a way that prestimulus alpha directly modulates positive potential amplitude in an auditory equiprobable Go/NoGo task (<xref ref-type="bibr" rid="B4">Barry et al., 2000</xref>; <xref ref-type="bibr" rid="B14">De Blasio et al., 2013</xref>). P3 amplitude is larger when greater attentional resources are allocated to the eliciting stimulus (<xref ref-type="bibr" rid="B31">Kok, 1997</xref>, <xref ref-type="bibr" rid="B32">2001</xref>; <xref ref-type="bibr" rid="B52">Polich, 2007</xref>). Alpha power is increased in tasks requiring a relaxed attentional state such as mindfulness and imagination of music (<xref ref-type="bibr" rid="B11">Cooper et al., 2006</xref>; <xref ref-type="bibr" rid="B55">Schaefer et al., 2011</xref>; <xref ref-type="bibr" rid="B38">Lomas et al., 2015</xref>). Increased alpha power is thought to be a signifier of enhanced processing, with attention focused on internally generated stimuli (<xref ref-type="bibr" rid="B38">Lomas et al., 2015</xref>). Beta power has been shown to increase when arousal and vigilance level increase (e.g., <xref ref-type="bibr" rid="B56">Sebastiani et al., 2003</xref>; <xref ref-type="bibr" rid="B1">Aftanas et al., 2006</xref>; <xref ref-type="bibr" rid="B21">Gola et al., 2012</xref>; <xref ref-type="bibr" rid="B27">Kami&#x00F1;ski et al., 2012</xref>). Taken together, the EEG and ERP results support the idea that listening to high-resolution audio with inaudible high-frequency components enhances the cortical activity related to the attention allocated to task-relevant stimuli. Although the effect was not observed in behavior, the gap between behavioral and EEG and ERP results is probably due to the ceiling effect of the vigilance task performance. 
Similar gaps have been observed in other studies. For example, <xref ref-type="bibr" rid="B45">Okamoto and Nakagawa (2016)</xref> reported that event-related synchronization in the alpha band during a working memory task increased 20&#x2013;30 min after the onset of exposure to blue (short-wavelength) light, as compared with green (middle-wavelength) light, while task performance remained high irrespective of light color.</p>
<p>As a mechanism underlying the effect of inaudible high-frequency sound components, we speculate that the brain may subconsciously recognize high-resolution audio that retains high-frequency components as being more natural, as compared with similar sounds in which such components are artificially removed. A link between alpha power and ratings of &#x2018;naturalness&#x2019; of music has been reported. When listening to the same musical piece with different tempos, alpha-band EEG power increased for excerpts that were rated to be more natural, the ratings of which were not directly related to subjective arousal (<xref ref-type="bibr" rid="B40">Ma et al., 2012</xref>; <xref ref-type="bibr" rid="B60">Tian et al., 2013</xref>). As high-resolution audio replicates real sound waves more closely, it may sound more natural (at least on a subconscious level) and facilitate music-related psychophysiological responses.</p>
<p>Our study has some limitations. First, because we used only a visual vigilance task, it is unclear whether high-resolution audio can improve performance on tasks that involve working memory and long-term memory. Because a vigilance task is relatively easy, our participants were able to sustain high performance. Other research using an n-back task requiring memory has shown that high-resolution audio also enhances task performance (<xref ref-type="bibr" rid="B57">Suzuki, 2013</xref>). Future research will benefit from using tasks that tap other cognitive domains and processes.</p>
<p>Second, the mechanism by which inaudible high-frequency components affect EEG activity cannot be determined from the current data. It is noteworthy that presenting high-frequency components above 20 kHz alone did not produce any change in EEG activity (<xref ref-type="bibr" rid="B47">Oohashi et al., 2000</xref>). Therefore, the combination of inaudible high-frequency components and audible low-frequency components should be a key factor in this phenomenon. A possible clue comes from a recent study by <xref ref-type="bibr" rid="B34">Kuribayashi and Nittono (2015)</xref>. Recording the sound spectra of various musical instruments, they found that high-frequency components above 20 kHz appear abundantly during the rising period of a sound wave (i.e., from silence to maximal intensity, usually less than 0.1 s), but occur much less often after that. Artificially cutting off the high-frequency components may cause a subtle distortion in this short period. These small, short-lasting differences may need to accumulate over time before producing discernible psychophysiological effects. This explanation is consistent with the fact that the effect of high-frequency components on EEG activity appears only after a 100&#x2013;200-s exposure to the music (<xref ref-type="bibr" rid="B47">Oohashi et al., 2000</xref>, <xref ref-type="bibr" rid="B46">2006</xref>; <xref ref-type="bibr" rid="B65">Yagi et al., 2003a</xref>; <xref ref-type="bibr" rid="B20">Fukushima et al., 2014</xref>; <xref ref-type="bibr" rid="B35">Kuribayashi et al., 2014</xref>; <xref ref-type="bibr" rid="B23">Ito et al., 2016</xref>).</p>
<p>Third, it remains unclear why the effects of high-resolution audio on brain activity emerged only after a delay, and why the effect was maintained for 100 s after the music stopped. One possible reason is that, as mentioned above, a sufficiently long exposure is needed for inaudible high-frequency components to take effect. Another possibility is that listening to music exerts its psychophysiological impact through the engagement of various neurochemical systems (<xref ref-type="bibr" rid="B9">Chanda and Levitin, 2013</xref>). Humoral effects are characterized by slow and durable responses, which might underlie the delayed effect of high-resolution audio with inaudible high-frequency components. Although the present study did not find such an effect on autonomic nervous system (HR and HRV) indices during music listening, participants reported greater relaxation after listening to high-resolution music with inaudible high-frequency components. Determining the time course of the effect more precisely is a task for future research.</p>
<p>Fourth, the present study did not manipulate the sampling frequency or the bit depth of the digital audio. High-resolution audio is characterized not only by its capability of reproducing inaudible high-frequency components but also by more accurate sampling and quantization (i.e., a higher sampling frequency and a greater bit depth) than low-resolution audio. If the naturalness derived from a closer replication of real sounds affects EEG activity, the sampling frequency and the bit depth should also affect it, regardless of whether the original sounds contain high-frequency components. This idea would be worth examining in future research.</p>
<p>In summary, high-resolution audio with inaudible high-frequency components has an advantage over similar, indistinguishable sounds in which these components are artificially cut off: it induces a relaxed attentional state. Even without conscious awareness, a closer replication of real sounds in terms of frequency structure appears to enhance the potential effects of music on human psychophysiological states and behavior.</p>
</sec>
<sec><title>Ethics Statement</title>
<p>This study was carried out in accordance with the recommendations of The Research Ethics Committee of the Graduate School of Integrated Arts and Sciences in Hiroshima University. All participants gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by The Research Ethics Committee of the Graduate School of Integrated Arts and Sciences in Hiroshima University.</p>
</sec>
<sec><title>Author Contributions</title>
<p>RK and HN planned the experiment, interpreted the data, and wrote the paper. RK collected and analyzed the data.</p>
</sec>
<sec><title>Conflict of Interest Statement</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<fn-group>
<fn fn-type="financial-disclosure">
<p><bold>Funding.</bold> This work was supported by JSPS KAKENHI Grant Number 15J06118.</p>
</fn>
</fn-group>
<ack>
<p>The authors thank Ryuta Yamamoto, Katsuyuki Niyada, Kazushi Uemura, and Fujio Iwaki for their support as research coordinators. Hiroshima Innovation Center for Biomedical Engineering and Advanced Medicine offered the sound equipment.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aftanas</surname> <given-names>L. I.</given-names></name> <name><surname>Reva</surname> <given-names>N. V.</given-names></name> <name><surname>Savotina</surname> <given-names>L. N.</given-names></name> <name><surname>Makhnev</surname> <given-names>V. P.</given-names></name></person-group> (<year>2006</year>). <article-title>Neurophysiological correlates of induced discrete emotions in humans: an individually oriented analysis.</article-title> <source><italic>Neurosci. Behav. Physiol.</italic></source> <volume>36</volume> <fpage>119</fpage>&#x2013;<lpage>130</lpage>. <pub-id pub-id-type="doi">10.1007/s11055-005-0170-6</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andreasen</surname> <given-names>N. C.</given-names></name> <name><surname>Arndt</surname> <given-names>S.</given-names></name> <name><surname>Swayze</surname> <given-names>V.</given-names></name> <name><surname>Cizadlo</surname> <given-names>T.</given-names></name> <name><surname>Flaum</surname> <given-names>M.</given-names></name> <name><surname>O&#x2019;Leary</surname> <given-names>D.</given-names></name><etal/></person-group> (<year>1994</year>). <article-title>Thalamic abnormalities in schizophrenia visualized through magnetic resonance image averaging.</article-title> <source><italic>Science</italic></source> <volume>266</volume> <fpage>294</fpage>&#x2013;<lpage>298</lpage>. <pub-id pub-id-type="doi">10.1126/science.7939669</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barry</surname> <given-names>R. J.</given-names></name> <name><surname>Clarke</surname> <given-names>A. R.</given-names></name> <name><surname>Johnstone</surname> <given-names>S. J.</given-names></name> <name><surname>Magee</surname> <given-names>C. A.</given-names></name> <name><surname>Rushby</surname> <given-names>J. A.</given-names></name></person-group> (<year>2007</year>). <article-title>EEG differences between eyes-closed and eyes-open resting conditions.</article-title> <source><italic>Clin. Neurophysiol.</italic></source> <volume>118</volume> <fpage>2765</fpage>&#x2013;<lpage>2773</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2007.07.028</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barry</surname> <given-names>R. J.</given-names></name> <name><surname>Kirkaikul</surname> <given-names>S.</given-names></name> <name><surname>Hodder</surname> <given-names>D.</given-names></name></person-group> (<year>2000</year>). <article-title>EEG alpha activity and the ERP to target stimuli in an auditory oddball paradigm.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>39</volume> <fpage>39</fpage>&#x2013;<lpage>50</lpage>. <pub-id pub-id-type="doi">10.1016/S0167-8760(00)00114-8</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Basar</surname> <given-names>E.</given-names></name></person-group> (<year>1999</year>). <source><italic>Brain Function and Oscillations: Integrative Brain Function. Neurophysiology and Cognitive Processes</italic></source>, <volume>Vol. II</volume>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer</publisher-name>, <pub-id pub-id-type="doi">10.1007/978-3-642-59893-7</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berntson</surname> <given-names>G. G.</given-names></name> <name><surname>Bigger</surname> <given-names>J. T.</given-names></name> <name><surname>Eckberg</surname> <given-names>D. L.</given-names></name> <name><surname>Grossman</surname> <given-names>P.</given-names></name> <name><surname>Kaufmann</surname> <given-names>P. G.</given-names></name> <name><surname>Malik</surname> <given-names>M.</given-names></name><etal/></person-group> (<year>1997</year>). <article-title>Heart rate variability: origins, methods, and interpretive caveats.</article-title> <source><italic>Psychophysiology</italic></source> <volume>34</volume> <fpage>623</fpage>&#x2013;<lpage>648</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.1997.tb02140.x</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blood</surname> <given-names>A. J.</given-names></name> <name><surname>Zatorre</surname> <given-names>R. J.</given-names></name></person-group> (<year>2001</year>). <article-title>Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion.</article-title> <source><italic>Proc. Natl. Acad. Sci. U.S.A.</italic></source> <volume>98</volume> <fpage>11818</fpage>&#x2013;<lpage>11823</lpage>. <pub-id pub-id-type="doi">10.1073/pnas.191355898</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brown</surname> <given-names>S.</given-names></name> <name><surname>Martinez</surname> <given-names>M. J.</given-names></name> <name><surname>Parsons</surname> <given-names>L. M.</given-names></name></person-group> (<year>2004</year>). <article-title>Passive music listening spontaneously engages limbic and paralimbic systems.</article-title> <source><italic>Neuroreport</italic></source> <volume>15</volume> <fpage>2033</fpage>&#x2013;<lpage>2037</lpage>. <pub-id pub-id-type="doi">10.1097/00001756-200409150-00008</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chanda</surname> <given-names>M. L.</given-names></name> <name><surname>Levitin</surname> <given-names>D. J.</given-names></name></person-group> (<year>2013</year>). <article-title>The neurochemistry of music.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>17</volume> <fpage>179</fpage>&#x2013;<lpage>193</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2013.02.007</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cohen</surname> <given-names>J.</given-names></name> <name><surname>Polich</surname> <given-names>J.</given-names></name></person-group> (<year>1997</year>). <article-title>On the number of trials needed for P300.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>25</volume> <fpage>249</fpage>&#x2013;<lpage>255</lpage>. <pub-id pub-id-type="doi">10.1016/S0167-8760(96)00743-X</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cooper</surname> <given-names>N. R.</given-names></name> <name><surname>Burgess</surname> <given-names>A. P.</given-names></name> <name><surname>Croft</surname> <given-names>R. J.</given-names></name> <name><surname>Gruzelier</surname> <given-names>J. H.</given-names></name></person-group> (<year>2006</year>). <article-title>Investigating evoked and induced electroencephalogram activity in task-related alpha power increases during an internally directed attention task.</article-title> <source><italic>Neuroreport</italic></source> <volume>17</volume> <fpage>205</fpage>&#x2013;<lpage>208</lpage>. <pub-id pub-id-type="doi">10.1097/01.wnr.0000198433.29389.54</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Crone</surname> <given-names>N. E.</given-names></name> <name><surname>Miglioretti</surname> <given-names>D. L.</given-names></name> <name><surname>Gordon</surname> <given-names>B.</given-names></name> <name><surname>Sieracki</surname> <given-names>J. M.</given-names></name> <name><surname>Wilson</surname> <given-names>M. T.</given-names></name> <name><surname>Uematsu</surname> <given-names>S.</given-names></name><etal/></person-group> (<year>1998</year>). <article-title>Functional mapping of human sensorimotor cortex with electrocorticographic spectral analysis. I. Alpha and beta event-related desynchronization.</article-title> <source><italic>Brain</italic></source> <volume>121</volume> <fpage>2271</fpage>&#x2013;<lpage>2299</lpage>. <pub-id pub-id-type="doi">10.1093/brain/121.12.2271</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davies</surname> <given-names>D. R.</given-names></name> <name><surname>Parasuraman</surname> <given-names>R.</given-names></name></person-group> (<year>1977</year>). <article-title>&#x201C;Cortical evoked potentials and vigilance: a decision theory analysis,&#x201D; in</article-title> <source><italic>NATO Conference Series. Vigilance</italic></source> <volume>Vol. 3</volume> <role>ed.</role> <person-group person-group-type="editor"><name><surname>Mackie</surname> <given-names>R.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Plenum Press</publisher-name>), <fpage>285</fpage>&#x2013;<lpage>306</lpage>. <pub-id pub-id-type="doi">10.1007/978-1-4684-2529-1</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>De Blasio</surname> <given-names>F. M.</given-names></name> <name><surname>Barry</surname> <given-names>R. J.</given-names></name> <name><surname>Steiner</surname> <given-names>G. Z.</given-names></name></person-group> (<year>2013</year>). <article-title>Prestimulus EEG amplitude determinants of ERP responses in a habituation paradigm.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>89</volume> <fpage>444</fpage>&#x2013;<lpage>450</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2013.05.015</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Falkenstein</surname> <given-names>M.</given-names></name> <name><surname>Hoormann</surname> <given-names>J.</given-names></name> <name><surname>Hohnsbein</surname> <given-names>J.</given-names></name></person-group> (<year>1999</year>). <article-title>ERP components in Go/Nogo tasks and their relation to inhibition.</article-title> <source><italic>Acta Psychol. (Amst.)</italic></source> <volume>101</volume> <fpage>267</fpage>&#x2013;<lpage>291</lpage>. <pub-id pub-id-type="doi">10.1016/S0001-6918(99)00008-6</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Foxe</surname> <given-names>J. J.</given-names></name> <name><surname>Snyder</surname> <given-names>A. C.</given-names></name></person-group> (<year>2011</year>). <article-title>The role of alpha-band brain oscillations as a sensory suppression mechanism during selective attention.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>2</volume>:<issue>154</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2011.00154</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fridlund</surname> <given-names>A. J.</given-names></name> <name><surname>Cacioppo</surname> <given-names>J. T.</given-names></name></person-group> (<year>1986</year>). <article-title>Guidelines for human electromyographic research.</article-title> <source><italic>Psychophysiology</italic></source> <volume>23</volume> <fpage>567</fpage>&#x2013;<lpage>589</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.1986.tb00676.x</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fruhstorfer</surname> <given-names>H.</given-names></name> <name><surname>Bergstr&#x00F6;m</surname> <given-names>R. M.</given-names></name></person-group> (<year>1969</year>). <article-title>Human vigilance and auditory evoked responses.</article-title> <source><italic>Electroencephalogr. Clin. Neurophysiol.</italic></source> <volume>27</volume> <fpage>346</fpage>&#x2013;<lpage>355</lpage>. <pub-id pub-id-type="doi">10.1016/0013-4694(69)91443-6</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fujioka</surname> <given-names>T.</given-names></name> <name><surname>Kakigi</surname> <given-names>R.</given-names></name> <name><surname>Gunji</surname> <given-names>A.</given-names></name> <name><surname>Takeshima</surname> <given-names>Y.</given-names></name></person-group> (<year>2002</year>). <article-title>The auditory evoked magnetic fields to very high frequency tones.</article-title> <source><italic>Neuroscience</italic></source> <volume>112</volume> <fpage>367</fpage>&#x2013;<lpage>381</lpage>. <pub-id pub-id-type="doi">10.1016/S0306-4522(02)00086-6</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fukushima</surname> <given-names>A.</given-names></name> <name><surname>Yagi</surname> <given-names>R.</given-names></name> <name><surname>Kawai</surname> <given-names>N.</given-names></name> <name><surname>Honda</surname> <given-names>M.</given-names></name> <name><surname>Nishina</surname> <given-names>E.</given-names></name> <name><surname>Oohashi</surname> <given-names>T.</given-names></name></person-group> (<year>2014</year>). <article-title>Frequencies of inaudible high-frequency sounds differentially affect brain activity: positive and negative hypersonic effects.</article-title> <source><italic>PLoS ONE</italic></source> <volume>9</volume>:<issue>e95464</issue>. <pub-id pub-id-type="doi">10.1371/journal.pone.0095464</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gola</surname> <given-names>M.</given-names></name> <name><surname>Kami&#x0144;ski</surname> <given-names>J.</given-names></name> <name><surname>Brzezicka</surname> <given-names>A.</given-names></name> <name><surname>Wr&#x00F3;bel</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Beta band oscillations as a correlate of alertness &#x2014; Changes in aging.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>85</volume> <fpage>62</fpage>&#x2013;<lpage>67</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2011.09.00</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hari</surname> <given-names>R.</given-names></name> <name><surname>Salmelin</surname> <given-names>R.</given-names></name></person-group> (<year>1997</year>). <article-title>Human cortical oscillations: a neuromagnetic view through the skull.</article-title> <source><italic>Trends. Neurosci.</italic></source> <volume>20</volume> <fpage>44</fpage>&#x2013;<lpage>49</lpage>. <pub-id pub-id-type="doi">10.1016/S0166-2236(96)10065-5</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ito</surname> <given-names>S.</given-names></name> <name><surname>Harada</surname> <given-names>T.</given-names></name> <name><surname>Miyaguchi</surname> <given-names>M.</given-names></name> <name><surname>Ishizaki</surname> <given-names>F.</given-names></name> <name><surname>Chikamura</surname> <given-names>C.</given-names></name> <name><surname>Kodama</surname> <given-names>Y.</given-names></name><etal/></person-group> (<year>2016</year>). <article-title>Effect of high-resolution audio music box sound on EEG.</article-title> <source><italic>Int. Med. J.</italic></source> <volume>23</volume> <fpage>1</fpage>&#x2013;<lpage>3</lpage>.</citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>J&#x00E4;ncke</surname> <given-names>L.</given-names></name> <name><surname>K&#x00FC;hnis</surname> <given-names>J.</given-names></name> <name><surname>Rogenmoser</surname> <given-names>L.</given-names></name> <name><surname>Elmer</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Time course of EEG oscillations during repeated listening of a well-known aria.</article-title> <source><italic>Front. Hum. Neurosci.</italic></source> <volume>9</volume>:<issue>401</issue>. <pub-id pub-id-type="doi">10.3389/fnhum.2015.00401</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jeffries</surname> <given-names>K. J.</given-names></name> <name><surname>Fritz</surname> <given-names>J. B.</given-names></name> <name><surname>Braun</surname> <given-names>A. R.</given-names></name></person-group> (<year>2003</year>). <article-title>Words in melody: an h(2)15o pet study of brain activation during singing and speaking.</article-title> <source><italic>Neuroreport</italic></source> <volume>14</volume> <fpage>749</fpage>&#x2013;<lpage>754</lpage>. <pub-id pub-id-type="doi">10.1097/01.wnr.0000066198.94941.a4</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jensen</surname> <given-names>O.</given-names></name> <name><surname>Mazaheri</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>Shaping functional architecture by oscillatory alpha activity: gating by inhibition.</article-title> <source><italic>Front. Hum. Neurosci.</italic></source> <volume>4</volume>:<issue>186</issue>. <pub-id pub-id-type="doi">10.3389/fnhum.2010.00186</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kami&#x0144;ski</surname> <given-names>J.</given-names></name> <name><surname>Brzezicka</surname> <given-names>A.</given-names></name> <name><surname>Gola</surname> <given-names>M.</given-names></name> <name><surname>Wr&#x00F3;bel</surname> <given-names>A.</given-names></name></person-group> (<year>2012</year>). <article-title>Beta band oscillations engagement in human alertness process.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>85</volume> <fpage>125</fpage>&#x2013;<lpage>128</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2011.11.006</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klimesch</surname> <given-names>W.</given-names></name></person-group> (<year>1999</year>). <article-title>EEG alpha and theta oscillations reflect cognitive and memory performance: a review and analysis.</article-title> <source><italic>Brain Res. Rev.</italic></source> <volume>29</volume> <fpage>169</fpage>&#x2013;<lpage>195</lpage>. <pub-id pub-id-type="doi">10.1016/S0165-0173(98)00056-3</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klimesch</surname> <given-names>W.</given-names></name></person-group> (<year>2012</year>). <article-title>Alpha-band oscillations, attention, and controlled access to stored information.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>16</volume> <fpage>606</fpage>&#x2013;<lpage>617</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2012.10.007</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klimesch</surname> <given-names>W.</given-names></name> <name><surname>Schack</surname> <given-names>B.</given-names></name> <name><surname>Sauseng</surname> <given-names>P.</given-names></name></person-group> (<year>2005</year>). <article-title>The functional significance of theta and upper alpha oscillations.</article-title> <source><italic>Exp. Psychol.</italic></source> <volume>52</volume> <fpage>99</fpage>&#x2013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.1027/1618-3169.52.2.99</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kok</surname> <given-names>A.</given-names></name></person-group> (<year>1997</year>). <article-title>Event-related-potential (ERP) reflections of mental resources: a review and synthesis.</article-title> <source><italic>Biol. Psychol.</italic></source> <volume>45</volume> <fpage>19</fpage>&#x2013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1016/S0301-0511(96)05221-0</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kok</surname> <given-names>A.</given-names></name></person-group> (<year>2001</year>). <article-title>On the utility of P3 amplitude as a measure of processing capacity.</article-title> <source><italic>Psychophysiology</italic></source> <volume>38</volume> <fpage>557</fpage>&#x2013;<lpage>577</lpage>. <pub-id pub-id-type="doi">10.1017/S0048577201990559</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuribayashi</surname> <given-names>R.</given-names></name> <name><surname>Nittono</surname> <given-names>H.</given-names></name></person-group> (<year>2014</year>). <article-title>Source localization of brain electrical activity while listening to high-resolution digital sounds with inaudible high-frequency components (Abstract).</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>94</volume>:<issue>192</issue>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2014.08.796</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuribayashi</surname> <given-names>R.</given-names></name> <name><surname>Nittono</surname> <given-names>H.</given-names></name></person-group> (<year>2015</year>). <article-title>Music instruments that produce sounds with inaudible high-frequency components (in Japanese).</article-title> <source><italic>Stud. Hum. Sci.</italic></source> <volume>10</volume> <fpage>35</fpage>&#x2013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.15027/39146</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kuribayashi</surname> <given-names>R.</given-names></name> <name><surname>Yamamoto</surname> <given-names>R.</given-names></name> <name><surname>Nittono</surname> <given-names>H.</given-names></name></person-group> (<year>2014</year>). <article-title>High-resolution music with inaudible high-frequency components produces a lagged effect on human electroencephalographic activities.</article-title> <source><italic>Neuroreport</italic></source> <volume>25</volume> <fpage>651</fpage>&#x2013;<lpage>655</lpage>. <pub-id pub-id-type="doi">10.1097/wnr.0000000000000151</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Larsen</surname> <given-names>J. T.</given-names></name> <name><surname>Norris</surname> <given-names>C. J.</given-names></name> <name><surname>Cacioppo</surname> <given-names>J. T.</given-names></name></person-group> (<year>2003</year>). <article-title>Effects of positive and negative affect on electromyographic activity over zygomaticus major and corrugator supercilii.</article-title> <source><italic>Psychophysiology</italic></source> <volume>40</volume> <fpage>776</fpage>&#x2013;<lpage>785</lpage>. <pub-id pub-id-type="doi">10.1111/1469-8986.00078</pub-id></citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>LeDoux</surname> <given-names>J. E.</given-names></name></person-group> (<year>1993</year>). <article-title>Emotional memory systems in the brain.</article-title> <source><italic>Behav. Brain Res.</italic></source> <volume>58</volume> <fpage>69</fpage>&#x2013;<lpage>79</lpage>. <pub-id pub-id-type="doi">10.1016/0166-4328(93)90091-4</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lomas</surname> <given-names>T.</given-names></name> <name><surname>Ivtzan</surname> <given-names>I.</given-names></name> <name><surname>Fu</surname> <given-names>C. H.</given-names></name></person-group> (<year>2015</year>). <article-title>A systematic review of the neurophysiology of mindfulness on EEG oscillations.</article-title> <source><italic>Neurosci. Biobehav. Rev.</italic></source> <volume>57</volume> <fpage>401</fpage>&#x2013;<lpage>410</lpage>. <pub-id pub-id-type="doi">10.1016/j.neubiorev.2015.09.018</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lupien</surname> <given-names>S. J.</given-names></name> <name><surname>McEwen</surname> <given-names>B. S.</given-names></name> <name><surname>Gunnar</surname> <given-names>M. R.</given-names></name> <name><surname>Heim</surname> <given-names>C.</given-names></name></person-group> (<year>2009</year>). <article-title>Effects of stress throughout the lifespan on the brain, behaviour and cognition.</article-title> <source><italic>Nat. Rev. Neurosci.</italic></source> <volume>10</volume> <fpage>434</fpage>&#x2013;<lpage>445</lpage>. <pub-id pub-id-type="doi">10.1038/nrn2639</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ma</surname> <given-names>W.</given-names></name> <name><surname>Lai</surname> <given-names>Y.</given-names></name> <name><surname>Yuan</surname> <given-names>Y.</given-names></name> <name><surname>Wu</surname> <given-names>D.</given-names></name> <name><surname>Yao</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Electroencephalogram variations in the &#x03B1; band during tempo-specific perception.</article-title> <source><italic>Neuroreport</italic></source> <volume>23</volume> <fpage>125</fpage>&#x2013;<lpage>128</lpage>. <pub-id pub-id-type="doi">10.1097/WNR.0b013e32834e7eac</pub-id></citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Malliani</surname> <given-names>A.</given-names></name> <name><surname>Montano</surname> <given-names>N.</given-names></name></person-group> (<year>2002</year>). <article-title>Heart rate variability as a clinical tool.</article-title> <source><italic>Ital. Heart J.</italic></source> <volume>3</volume> <fpage>439</fpage>&#x2013;<lpage>445</lpage>.</citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Malliani</surname> <given-names>A. I.</given-names></name> <name><surname>Pagani</surname> <given-names>M.</given-names></name> <name><surname>Lombardi</surname> <given-names>F.</given-names></name> <name><surname>Cerutti</surname> <given-names>S.</given-names></name></person-group> (<year>1991</year>). <article-title>Cardiovascular neural regulation explored in the frequency domain.</article-title> <source><italic>Circulation</italic></source> <volume>84</volume> <fpage>482</fpage>&#x2013;<lpage>489</lpage>. <pub-id pub-id-type="doi">10.1161/01.CIR.84.2.482</pub-id></citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Muraoka</surname> <given-names>T.</given-names></name> <name><surname>Iwahara</surname> <given-names>M.</given-names></name> <name><surname>Yamada</surname> <given-names>Y.</given-names></name></person-group> (<year>1981</year>). <article-title>Examination of audio-bandwidth requirements for optimum sound signal transmission.</article-title> <source><italic>J. Audio Eng. Soc.</italic></source> <volume>29</volume> <fpage>2</fpage>&#x2013;<lpage>9</lpage>.</citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nishiguchi</surname> <given-names>T.</given-names></name> <name><surname>Hamasaki</surname> <given-names>K.</given-names></name> <name><surname>Ono</surname> <given-names>K.</given-names></name> <name><surname>Iwaki</surname> <given-names>M.</given-names></name> <name><surname>Ando</surname> <given-names>A.</given-names></name></person-group> (<year>2009</year>). <article-title>Perceptual discrimination of very high frequency components in wide frequency range musical sound.</article-title> <source><italic>Appl. Acoust.</italic></source> <volume>70</volume> <fpage>921</fpage>&#x2013;<lpage>934</lpage>. <pub-id pub-id-type="doi">10.1016/j.apacoust.2009.01.002</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Okamoto</surname> <given-names>Y.</given-names></name> <name><surname>Nakagawa</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>Effects of light wavelength on MEG ERD/ERS during a working memory task.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>104</volume> <fpage>10</fpage>&#x2013;<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2016.03.008</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oohashi</surname> <given-names>T.</given-names></name> <name><surname>Kawai</surname> <given-names>N.</given-names></name> <name><surname>Nishina</surname> <given-names>E.</given-names></name> <name><surname>Honda</surname> <given-names>M.</given-names></name> <name><surname>Yagi</surname> <given-names>R.</given-names></name> <name><surname>Nakamura</surname> <given-names>S.</given-names></name><etal/></person-group> (<year>2006</year>). <article-title>The role of biological system other than auditory air-conduction in the emergence of the hypersonic effect.</article-title> <source><italic>Brain Res.</italic></source> <volume>107</volume> <fpage>339</fpage>&#x2013;<lpage>347</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2005.12.096</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oohashi</surname> <given-names>T.</given-names></name> <name><surname>Nishina</surname> <given-names>E.</given-names></name> <name><surname>Honda</surname> <given-names>M.</given-names></name> <name><surname>Yonekura</surname> <given-names>Y.</given-names></name> <name><surname>Fuwamoto</surname> <given-names>Y.</given-names></name> <name><surname>Kawai</surname> <given-names>N.</given-names></name><etal/></person-group> (<year>2000</year>). <article-title>Inaudible high-frequency sounds affect brain activity: hypersonic effect.</article-title> <source><italic>J. Neurophysiol.</italic></source> <volume>83</volume> <fpage>3548</fpage>&#x2013;<lpage>3558</lpage>.</citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Parasuraman</surname> <given-names>R.</given-names></name></person-group> (<year>1983</year>). <article-title>&#x201C;Vigilance, arousal, and the brain,&#x201D; in</article-title> <source><italic>Physiological Correlates of Human Behavior</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Gale</surname> <given-names>A.</given-names></name> <name><surname>Edwards</surname> <given-names>J. A.</given-names></name></person-group> (<publisher-loc>London</publisher-loc>: <publisher-name>Academic Press</publisher-name>), <fpage>35</fpage>&#x2013;<lpage>55</lpage>.</citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pavlygina</surname> <given-names>R. A.</given-names></name> <name><surname>Sakharov</surname> <given-names>D. S.</given-names></name> <name><surname>Davydov</surname> <given-names>V. I.</given-names></name></person-group> (<year>2004</year>). <article-title>Spectral analysis of the human EEG during listening to musical compositions.</article-title> <source><italic>Hum. Physiol.</italic></source> <volume>30</volume> <fpage>54</fpage>&#x2013;<lpage>60</lpage>. <pub-id pub-id-type="doi">10.1023/B:HUMP.0000013765.64276.e6</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfurtscheller</surname> <given-names>G.</given-names></name> <name><surname>Lopes da Silva</surname> <given-names>F. H.</given-names></name></person-group> (<year>1999</year>). <article-title>Event-related EEG/ MEG synchronization and desynchronization: basic principles.</article-title> <source><italic>Clin. Neurophysiol.</italic></source> <volume>110</volume> <fpage>1842</fpage>&#x2013;<lpage>1857</lpage>. <pub-id pub-id-type="doi">10.1016/S1388-2457(99)00141-8</pub-id></citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pfurtscheller</surname> <given-names>G.</given-names></name> <name><surname>Woertz</surname> <given-names>M.</given-names></name> <name><surname>Supp</surname> <given-names>G.</given-names></name> <name><surname>Lopes da Silva</surname> <given-names>F. H.</given-names></name></person-group> (<year>2003</year>). <article-title>Early onset of post-movement beta electroencephalogram synchronization in the supplementary motor area during self-paced finger movement in man.</article-title> <source><italic>Neurosci. Lett.</italic></source> <volume>339</volume> <fpage>111</fpage>&#x2013;<lpage>114</lpage>. <pub-id pub-id-type="doi">10.1016/S0304-3940(02)01479-9</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Polich</surname> <given-names>J.</given-names></name></person-group> (<year>2007</year>). <article-title>Updating P300: an integrative theory of P3a and P3b.</article-title> <source><italic>Clin. Neurophysiol.</italic></source> <volume>118</volume> <fpage>2128</fpage>&#x2013;<lpage>2148</lpage>. <pub-id pub-id-type="doi">10.1016/j.clinph.2007.04.019</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Russell</surname> <given-names>J. A.</given-names></name> <name><surname>Weiss</surname> <given-names>A.</given-names></name> <name><surname>Mendelsohn</surname> <given-names>G. A.</given-names></name></person-group> (<year>1989</year>). <article-title>Affect grid &#x2013; a single-item scale of pleasure and arousal.</article-title> <source><italic>J. Pers. Soc. Psychol.</italic></source> <volume>57</volume> <fpage>493</fpage>&#x2013;<lpage>502</lpage>. <pub-id pub-id-type="doi">10.1037/0022-3514.57.3.493</pub-id></citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sanyal</surname> <given-names>S.</given-names></name> <name><surname>Banerjee</surname> <given-names>A.</given-names></name> <name><surname>Guhathakurta</surname> <given-names>T.</given-names></name> <name><surname>Sengupta</surname> <given-names>R.</given-names></name> <name><surname>Ghosh</surname> <given-names>D.</given-names></name> <name><surname>Ghose</surname> <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>&#x201C;EEG study on the neural patterns of brain with music stimuli: an evidence of Hysteresis?,&#x201D; in</article-title> <source><italic>Proceedings of the International Seminar on &#x2018;Creating and Teaching Music Patterns&#x2019;</italic></source>, (<publisher-loc>Kolkata</publisher-loc>: <publisher-name>Rabindra Bharati University</publisher-name>), <fpage>51</fpage>&#x2013;<lpage>61</lpage>.</citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schaefer</surname> <given-names>R. S.</given-names></name> <name><surname>Vlek</surname> <given-names>R. J.</given-names></name> <name><surname>Desain</surname> <given-names>P.</given-names></name></person-group> (<year>2011</year>). <article-title>Music perception and imagery in EEG: alpha band effects of task and stimulus.</article-title> <source><italic>Int. J. Psychophysiol.</italic></source> <volume>82</volume> <fpage>254</fpage>&#x2013;<lpage>259</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2011.09.007</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sebastiani</surname> <given-names>L.</given-names></name> <name><surname>Simoni</surname> <given-names>A.</given-names></name> <name><surname>Gemignani</surname> <given-names>A.</given-names></name> <name><surname>Ghelarducci</surname> <given-names>B.</given-names></name> <name><surname>Santarcangelo</surname> <given-names>E. L.</given-names></name></person-group> (<year>2003</year>). <article-title>Autonomic and EEG correlates of emotional imagery in subjects with different hypnotic susceptibility.</article-title> <source><italic>Brain Res. Bull.</italic></source> <volume>60</volume> <fpage>151</fpage>&#x2013;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.1016/S0361-9230(03)00025-X</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Suzuki</surname> <given-names>K.</given-names></name></person-group> (<year>2013</year>). <article-title>Hypersonic effect and performance of recognition tests (in Japanese).</article-title> <source><italic>Kagaku</italic></source> <volume>83</volume> <fpage>343</fpage>&#x2013;<lpage>345</lpage>.</citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tarvainen</surname> <given-names>M. P.</given-names></name> <name><surname>Niskanen</surname> <given-names>J.-P.</given-names></name> <name><surname>Lipponen</surname> <given-names>J. A.</given-names></name> <name><surname>Ranta-Aho</surname> <given-names>P. O.</given-names></name> <name><surname>Karjalainen</surname> <given-names>P. A.</given-names></name></person-group> (<year>2014</year>). <article-title>Kubios HRV&#x2013;heart rate variability analysis software.</article-title> <source><italic>Comput. Methods Programs Biomed.</italic></source> <volume>113</volume> <fpage>210</fpage>&#x2013;<lpage>220</lpage>. <pub-id pub-id-type="doi">10.1016/j.cmpb.2013.07.024</pub-id></citation></ref>
<ref id="B59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Terasaki</surname> <given-names>M.</given-names></name> <name><surname>Kishimoto</surname> <given-names>Y.</given-names></name> <name><surname>Koga</surname> <given-names>A.</given-names></name></person-group> (<year>1992</year>). <article-title>Construction of a multiple mood scale (in Japanese).</article-title> <source><italic>Shinrigaku Kenkyu</italic></source> <volume>62</volume> <fpage>350</fpage>&#x2013;<lpage>356</lpage>. <pub-id pub-id-type="doi">10.4992/jjpsy.62.350</pub-id></citation></ref>
<ref id="B60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tian</surname> <given-names>Y.</given-names></name> <name><surname>Ma</surname> <given-names>W.</given-names></name> <name><surname>Tian</surname> <given-names>C.</given-names></name> <name><surname>Xu</surname> <given-names>P.</given-names></name> <name><surname>Yao</surname> <given-names>D.</given-names></name></person-group> (<year>2013</year>). <article-title>Brain oscillations and electroencephalography scalp networks during tempo perception.</article-title> <source><italic>Neurosci. Bull.</italic></source> <volume>29</volume> <fpage>731</fpage>&#x2013;<lpage>736</lpage>. <pub-id pub-id-type="doi">10.1007/s12264-013-1352-9</pub-id></citation></ref>
<ref id="B61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vasey</surname> <given-names>M. W.</given-names></name> <name><surname>Thayer</surname> <given-names>J. F.</given-names></name></person-group> (<year>1987</year>). <article-title>The continuing problem of false positives in repeated measures ANOVA in psychophysiology: a multivariate solution.</article-title> <source><italic>Psychophysiology</italic></source> <volume>24</volume> <fpage>479</fpage>&#x2013;<lpage>486</lpage>. <pub-id pub-id-type="doi">10.1111/j.1469-8986.1987.tb00324.x</pub-id></citation></ref>
<ref id="B62"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Vogt</surname> <given-names>B. A.</given-names></name> <name><surname>Gabriel</surname> <given-names>M.</given-names></name></person-group> (<year>1993</year>). <source><italic>Neurobiology of Cingulate Cortex and Limbic Thalamus: A Comprehensive Handbook.</italic></source> <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Birkhauser</publisher-name>.</citation></ref>
<ref id="B63"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ward</surname> <given-names>L. M.</given-names></name></person-group> (<year>2003</year>). <article-title>Synchronous neural oscillations and cognitive processes.</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>7</volume> <fpage>553</fpage>&#x2013;<lpage>559</lpage>. <pub-id pub-id-type="doi">10.1016/j.tics.2003.10.012</pub-id></citation></ref>
<ref id="B64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Weisz</surname> <given-names>N.</given-names></name> <name><surname>Hartmann</surname> <given-names>T.</given-names></name> <name><surname>M&#x00FC;ller</surname> <given-names>N.</given-names></name> <name><surname>Lorenz</surname> <given-names>I.</given-names></name> <name><surname>Obleser</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>Alpha rhythms in audition: cognitive and clinical perspectives.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>2</volume>:<issue>73</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2011.00073</pub-id></citation></ref>
<ref id="B65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yagi</surname> <given-names>R.</given-names></name> <name><surname>Nishina</surname> <given-names>E.</given-names></name> <name><surname>Honda</surname> <given-names>M.</given-names></name> <name><surname>Oohashi</surname> <given-names>T.</given-names></name></person-group> (<year>2003a</year>). <article-title>Modulatory effect of inaudible high-frequency sounds on human acoustic perception.</article-title> <source><italic>Neurosci. Lett.</italic></source> <volume>351</volume> <fpage>191</fpage>&#x2013;<lpage>195</lpage>. <pub-id pub-id-type="doi">10.1016/j.neulet.2003.07.020</pub-id></citation></ref>
<ref id="B66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yagi</surname> <given-names>R.</given-names></name> <name><surname>Nishina</surname> <given-names>E.</given-names></name> <name><surname>Oohashi</surname> <given-names>T.</given-names></name></person-group> (<year>2003b</year>). <article-title>A method for behavioral evaluation of the &#x201C;hypersonic effect&#x201D;.</article-title> <source><italic>Acoust. Sci. Technol.</italic></source> <volume>24</volume> <fpage>197</fpage>&#x2013;<lpage>200</lpage>. <pub-id pub-id-type="doi">10.1250/ast.24.197</pub-id></citation></ref>
<ref id="B67"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oohashi</surname> <given-names>T.</given-names></name> <name><surname>Kawai</surname> <given-names>N.</given-names></name> <name><surname>Nishina</surname> <given-names>E.</given-names></name> <name><surname>Honda</surname> <given-names>M.</given-names></name> <name><surname>Yagi</surname> <given-names>R.</given-names></name> <name><surname>Nakamura</surname> <given-names>S.</given-names></name><etal/></person-group> (<year>2006</year>). <article-title>The role of biological system other than auditory air-conduction in the emergence of the hypersonic effect.</article-title> <source><italic>Brain Res.</italic></source> <volume>1073&#x2013;1074</volume>, <fpage>339</fpage>&#x2013;<lpage>347</lpage>. <pub-id pub-id-type="doi">10.1016/j.brainres.2005.12.096</pub-id></citation></ref>
</ref-list>
</back>
</article>