<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" article-type="research-article" dtd-version="2.3">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2022.933497</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The impact of background music on film audience&#x2019;s attentional processes: Electroencephalography alpha-rhythm and event-related potential analyses</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Kwon</surname>
<given-names>Young-Sung</given-names>
</name>
<xref rid="aff1" ref-type="aff"><sup>1</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1794917/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Lee</surname>
<given-names>Jonghyun</given-names>
</name>
<xref rid="aff2" ref-type="aff"><sup>2</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/2079112/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Lee</surname>
<given-names>Slgi (Sage)</given-names>
</name>
<xref rid="aff3" ref-type="aff"><sup>3</sup></xref>
<xref rid="c001" ref-type="corresp"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1794884/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Media and Communication, Dong-A University</institution>, <addr-line>Busan</addr-line>, <country>South Korea</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of English Language and Literature, College of Humanities, Seoul National University</institution>, <addr-line>Seoul</addr-line>, <country>South Korea</country></aff>
<aff id="aff3"><sup>3</sup><institution>Department of Media and Communication, Pusan National University</institution>, <addr-line>Busan</addr-line>, <country>South Korea</country></aff>
<author-notes>
<fn id="fn0001" fn-type="edited-by"><p>Edited by: Graham Frederick Welch, University College London, United Kingdom</p></fn>
<fn id="fn0002" fn-type="edited-by"><p>Reviewed by: Roman Mou&#x010D;ek, University of West Bohemia, Czechia; Hun Kim, Joongbu University, South Korea</p></fn>
<corresp id="c001">&#x002A;Correspondence: Slgi (Sage) Lee, <email>sg.lee@pusan.ac.kr</email></corresp>
<fn id="fn0003" fn-type="other"><p>This article was submitted to Performance Science, a section of the journal Frontiers in Psychology</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>17</day>
<month>11</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>13</volume>
<elocation-id>933497</elocation-id>
<history>
<date date-type="received">
<day>05</day>
<month>05</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>27</day>
<month>10</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Kwon, Lee and Lee.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Kwon, Lee and Lee</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Background music is an indispensable part of films and plays an important role in enhancing audiences&#x2019; attention to scenes. However, few studies have examined the cognitive effect of background music at the neurophysiological level. Using electroencephalography (EEG), the present study examines the effect of background music tempo on the viewer&#x2019;s attentional processes. Participants&#x2019; (<italic>N</italic>&#x2009;=&#x2009;24) EEG responses were recorded while they watched segments of action films in three conditions that varied in the presence and tempo of background music (i.e., no background music vs. slow-tempo music vs. fast-tempo music). These responses were analyzed using alpha-rhythm suppression and the event-related potential (ERP) P300, two brainwave indicators of attentional processes. The results suggest that participants&#x2019; attention levels increased when background music was present (compared to when background music was absent), but attention levels did not differ by tempo. The theoretical and practical implications of these findings are discussed.</p>
</abstract>
<kwd-group>
<kwd>background music</kwd>
<kwd>tempo</kwd>
<kwd>film music</kwd>
<kwd>attentional processes</kwd>
<kwd>EEG</kwd>
<kwd>ERP</kwd>
<kwd>alpha-rhythm suppression</kwd>
</kwd-group>
<contract-num rid="cn1">NRF-2020R1G1A1101384</contract-num>
<contract-sponsor id="cn1">National Research Foundation of Korea (NRF)<named-content content-type="fundref-id">10.13039/501100003725</named-content>
</contract-sponsor>
<counts>
<fig-count count="3"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="48"/>
<page-count count="9"/>
<word-count count="6339"/>
</counts>
</article-meta>
</front>
<body>
<sec id="sec1" sec-type="intro">
<title>Introduction</title>
<p>Sound is an important element of entertainment media. For instance, well-designed sound effects or background music in a film can positively influence audience engagement in that film. In films, sound consists of background music, sound effects, and characters&#x2019; dialog. In combination with visual components (i.e., video footage), sound contributes to forming the overall mood of a film and inducing emotional responses from the audience (<xref ref-type="bibr" rid="ref14">Holman, 2012</xref>). In particular, background music can influence viewers&#x2019; cognitive processes involved in perceiving and learning from visual components of the content (<xref ref-type="bibr" rid="ref17">K&#x00E4;mpfe et al., 2011</xref>; <xref ref-type="bibr" rid="ref7">Du et al., 2020</xref>). For instance, research suggests that people better recall visual information they acquire while being exposed to background music (<xref ref-type="bibr" rid="ref39">Proverbio et al., 2015</xref>; <xref ref-type="bibr" rid="ref31">Nguyen and Grahn, 2017</xref>). Background music can also help individuals become immersed in the plot of multimedia content (<xref ref-type="bibr" rid="ref47">Zhang and Fu, 2015</xref>) and induce motor resonance, a neurophysiological indication that viewers are immersed in the action performed by characters in films (<xref ref-type="bibr" rid="ref22">Kwon et al., 2021</xref>).</p>
<p>As background music is considered an important factor in audience engagement, sound designing&#x2014;creating and inserting various sound elements in media content&#x2014;has become an indispensable part of film production. Research in this area has examined how producers use various sound elements to draw and maintain viewers&#x2019; attention (e.g., <xref ref-type="bibr" rid="ref28">Mitchell, 2005</xref>; <xref ref-type="bibr" rid="ref14">Holman, 2012</xref>); however, relatively little is known about whether such use of sound actually elicits high levels of attention among the audience. Moreover, the few studies that have examined the effects of background music on viewers have predominantly relied on self-report data (<italic>cf.</italic> <xref ref-type="bibr" rid="ref17">K&#x00E4;mpfe et al., 2011</xref>; <xref ref-type="bibr" rid="ref47">Zhang and Fu, 2015</xref>; <xref ref-type="bibr" rid="ref24">Lee et al., 2020</xref>). However, self-report data are limited in terms of objectively capturing the subtle and subconscious changes in individuals&#x2019; attentional processes that occur in response to a series of visual and auditory stimuli (<xref ref-type="bibr" rid="ref22">Kwon et al., 2021</xref>). Instead, neurophysiological methodologies such as electroencephalography (EEG) are required to directly measure and quantify participants&#x2019; cognitive processes. Hence, the present study uses neurophysiological data from EEG to examine the impact of background music on viewers&#x2019; attentional processes.</p>
<p>Several studies to date have used EEG to explore the effect of background music on attentional processes (<xref ref-type="bibr" rid="ref001">Iwaki et al., 1997</xref>; <xref ref-type="bibr" rid="ref7">Du et al., 2020</xref>; <xref ref-type="bibr" rid="ref2">Ausin et al., 2021</xref>; <xref ref-type="bibr" rid="ref45">Uhm et al., 2021</xref>). Most of these studies have explored how background music affects listeners&#x2019; attention during various activities or situations, such as while studying, resting, or watching an advertisement. However, studies examining how background music affects the audience of <italic>video media</italic>, particularly film, are scant. It should also be noted that different types of background music&#x2014;depending on when or where they are used&#x2014;serve different functions and purposes. For instance, background music played during studying might be considered noise or &#x201C;background&#x201D; music, as the music is not directly relevant to the foreground activity (i.e., studying). In contrast, background music in film is pre-selected and is closely tied to the video. Background music inserted in advertisements also has a specific purpose: to induce the audience to remember the product and eventually purchase it, which is quite different from the purpose served by music used in films. Therefore, it is difficult to generalize the prior findings to the context of film background music. Furthermore, measuring the attentional processes of film audiences using EEG can offer useful information and a practical guide for optimizing sound design in film production.</p>
<p>The present study particularly focuses on the <italic>tempo</italic> of background music and examines whether different tempos (slow vs. fast) elicit different levels of attention among viewers. As a basic element of music&#x2014;along with tonality, range, intensity, and instrument&#x2014;tempo helps form the overall mood of a piece of music. Tempo is also known to influence the listener&#x2019;s behavior and affect; music with a fast tempo evokes positive-valence emotions such as happiness, whereas slow music triggers negative emotions such as sadness (<xref ref-type="bibr" rid="ref10">Gagnon and Peretz, 2003</xref>). Music tempo can also affect the pace of the activities listeners perform while they are exposed to the music (<xref ref-type="bibr" rid="ref17">K&#x00E4;mpfe et al., 2011</xref>).</p>
<p>Relatedly, the tempo of background music plays an important role in setting the ambience of a scene. In film production, music with different tempos is strategically used to produce different moods. For example, for scenes depicting high-tension activities, such as fighting or running, fast music is inserted to convey the tension of the activities, whereas slow music is often used when producers need to create a soft and relaxing ambience. This is based on the conventional notion that the audience will be more absorbed in the scene when the rhythms of activities portrayed and the background music are harmoniously matched. However, there is no empirical evidence, to our knowledge, to suggest that such a match in tempo would lead to a high level of attention among the audience.</p>
<p>Moreover, prior research offers mixed findings about how tempo can affect listeners&#x2019; cognitive activities. As noted, studies have found that tempo can determine listeners&#x2019; emotions and the pace of their actions. For instance, fast music can induce upbeat emotions (<xref ref-type="bibr" rid="ref26">Liu et al., 2018a</xref>,<xref ref-type="bibr" rid="ref25">b</xref>) and fast-paced behaviors (<xref ref-type="bibr" rid="ref17">K&#x00E4;mpfe et al., 2011</xref>). This indicates that a fast tempo can result in some degree of stimulation in listeners&#x2019; cognitive activities. Other studies, however, suggest that fast-beat background music is more detrimental than slow music to the cognitive processes involved in perceiving and evaluating visual information (<xref ref-type="bibr" rid="ref16">Kallinen, 2002</xref>). Such conflicting findings on tempo make it difficult to predict how background music with different tempos may affect viewers&#x2019; attentional processes toward films.</p>
<p>Based on these considerations, the present study explores whether (a) the presence of background music and (b) its tempo can enhance viewers&#x2019; attention to film scenes. Specifically, we ask the following questions: (a) Does the <italic>presence</italic> of background music lead to an increase in attentional processes toward film scenes? (b) How does the <italic>tempo</italic> (fast vs. slow) of background music influence viewers&#x2019; attentional processes toward film scenes?</p>
<p>To measure attentional processes, the present study used EEG. Through the use of small metal discs (electrodes) attached to an individual&#x2019;s scalp, EEG measures the electrical impulses generated by transmission between nerve cells when the individual is engaged in a cognitive activity. In this study, EEG was employed rather than other neurophysiological methodologies, such as magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and computed tomography (CT), because EEG is superior in measuring <italic>real-time</italic> brain activities, with less obtrusion and interference with subjects&#x2019; physical activity (<xref ref-type="bibr" rid="ref29">Mulert and Lemieux, 2009</xref>). EEG is known for its excellent temporal resolution&#x2014;capturing an instantaneous cognitive processing response within 0.001&#x2009;s&#x2014;and is considered a suitable method for analyzing information processing that takes place rapidly in real time, such as image recognition (<xref ref-type="bibr" rid="ref40">Sanei and Chambers, 2013</xref>; <xref ref-type="bibr" rid="ref6">Cohen, 2014</xref>). EEG is, therefore, appropriate for an experiment that requires recording and detecting immediate changes in participants&#x2019; cognitive processes as they watch a stream of visual (e.g., film footage) and auditory stimuli (e.g., background music; <xref ref-type="bibr" rid="ref21">Kwon and Lee, 2020</xref>).</p>
<p>Specifically, we used alpha-rhythm suppression and event-related potentials (ERP) as indices of the attentional process. Alpha-rhythm inhibition, measured in the 8&#x2013;13&#x2009;Hz band, is a known indicator of the attentional process and can be used to gauge how much attention the audience pays to an image (<xref ref-type="bibr" rid="ref21">Kwon and Lee, 2020</xref>). This study also used event-related potential (ERP) analysis, an EEG technique commonly used in speech-processing research in the field of cognitive science. An ERP is obtained by presenting a stimulus to a subject and measuring the time-locked brain activity at that moment. P300, one of the ERP components, was used as an index of participants&#x2019; attention level (<xref ref-type="bibr" rid="ref20">Kwon and Ha, 2019</xref>).</p>
</sec>
<sec id="sec2" sec-type="materials|methods">
<title>Materials and methods</title>
<sec id="sec3">
<title>Participants and procedure</title>
<p>A total of 24 right-handed participants (10 women, 14 men) were recruited from a university in Seoul, South Korea. The average age of the participants was 23.04&#x2009;years (<italic>SD</italic>&#x2009;=&#x2009;2.94). The sample size was determined by an a priori power analysis (significance level&#x2009;=&#x2009;0.05, power&#x2009;=&#x2009;0.95, effect size&#x2009;=&#x2009;0.3), informed by previous EEG studies conducted under similar conditions. The experiment was approved by the Institutional Review Board of one of the authors&#x2019; affiliated universities.</p>
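<p>For illustration, the reported parameters can be entered into an off-the-shelf power solver. The sketch below uses the one-way ANOVA power function from <italic>statsmodels</italic>, which assumes a between-subjects design; a dedicated repeated-measures calculation (e.g., in G*Power) would additionally account for the correlation among repeated measures, so the tool and its output here are illustrative rather than the procedure actually used.</p>
<preformat>
# A minimal sketch of an a priori power analysis with the parameters
# reported above (alpha = 0.05, power = 0.95, effect size f = 0.3).
# FTestAnovaPower assumes a between-subjects one-way ANOVA, so this is
# only an approximation of the repeated-measures design.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.3,  # Cohen's f
    alpha=0.05,       # significance level
    power=0.95,       # desired statistical power
    k_groups=3,       # three conditions: N-M, S-M, F-M
)
print(f"Approximate required total sample size: {n_total:.1f}")
</preformat>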
<p>The experiment was conducted in a sound-proof laboratory that blocks external electromagnetic waves and noise. Upon arrival, participants were ushered into the laboratory and seated in front of a desk with a 21-inch monitor on which the stimulus footage would be shown. Electrode gel was then applied to help the sensors adhere to the participants&#x2019; scalps. The 10-min-long stimulus footage was presented to each participant using PsychoPy3. Participants wore EEG gear throughout the experiment while they watched the stimulus footage. At the beginning of the experimental session, all participants were instructed to pay attention to each scene and to minimize eye blinking and body movement while watching the stimuli.</p>
</sec>
<sec id="sec4">
<title>Stimulus</title>
<p>The stimulus footage used in the experiment consisted of 45 action scenes (edited from 26 Korean and international action films) in which one character strikes another. These action (striking) scenes were selected for the following reasons: (1) action scenes involve large movements and were therefore expected to induce a high level of attention, especially at the moment of striking; (2) the clear display of the target behavior (striking) in these scenes creates a concrete point on which participants can fixate their attention. These features make action scenes suitable for measuring alpha-rhythm inhibition and event-related potentials (ERP), indicators related to the attentional process. The action scenes also included various motions (i.e., striking with fists, kicking, etc.), different numbers of characters involved in the actions (two, three, or more people), various framing shots (i.e., close-up, medium shot, and long shot), and various shooting techniques (i.e., time-lapse, high-speed shooting, etc.) to avoid bias that may arise from using only a particular motion or technique.</p>
<p>The sequence of these scenes was designed using a counter-balanced paradigm (<xref ref-type="bibr" rid="ref12">Goodwin, 2009</xref>; <xref ref-type="bibr" rid="ref4">Burns and Dobson, 2012</xref>; <xref ref-type="bibr" rid="ref43">Stangor, 2014</xref>). In a counter-balanced paradigm, each participant is exposed to all treatment conditions, with the conditions presented in different orders across participants. A counter-balanced paradigm thus prevents order effects, in which the order of stimulus presentation affects the dependent variable. The paradigm can also reduce bias that may occur due to participants&#x2019; individual differences. The counter-balanced paradigm is therefore commonly used in neurophysiological studies with two or more experimental conditions (e.g., <xref ref-type="bibr" rid="ref12">Goodwin, 2009</xref>).</p>
<p>Three versions (no background music, slow-tempo music, fast-tempo music) were made of each of the 45 scenes; therefore, a total of 135 scenes (45 scenes&#x2009;&#x00D7;&#x2009;3 versions) were produced. Three different sets of sequences were then created by randomly ordering the 135 scenes; each sequence set included 45 different scenes and was assigned to Group A, B, or C. Participants randomly assigned to one of the three groups were thus shown one of the sequences (the counter-balancing logic is illustrated in the sketch below). Each scene in a sequence was 8&#x2009;s long, and 5&#x2009;s-long black screens were shown between scenes to measure each participant&#x2019;s baseline (see <xref rid="fig1" ref-type="fig">Figure 1</xref>). The sequence of the scenes shown is presented in <xref rid="tab1" ref-type="table">Table 1</xref>.</p>
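<p>As an illustration of this counter-balancing logic, the sketch below builds three sequences in which each scene appears once per group and the condition attached to a scene rotates across groups, Latin-square style. The scene ordering and scene-to-condition pairings are hypothetical stand-ins, not the exact assignment shown in Table 1.</p>
<preformat>
# A hypothetical sketch of counter-balanced sequence generation:
# every scene appears once in each group's sequence, and the condition
# attached to a given serial position rotates across Groups A, B, and C.
import random

CONDITIONS = ["N-M", "S-M", "F-M"]  # no music, slow tempo, fast tempo
N_SCENES = 45

random.seed(42)                     # illustrative fixed ordering
scenes = list(range(1, N_SCENES + 1))
random.shuffle(scenes)

sequences = {}
for g, group in enumerate("ABC"):
    # Rotate the condition assigned to each serial position by group.
    sequences[group] = [(scene, CONDITIONS[(i + g) % 3])
                        for i, scene in enumerate(scenes)]

for group, seq in sequences.items():
    print(group, seq[:4], "...")
</preformat>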
<fig position="float" id="fig1">
<label>Figure 1</label>
<caption>
<p>Video composition for individual stimuli.</p>
</caption>
<graphic xlink:href="fpsyg-13-933497-g001.tif"/>
</fig>
<table-wrap position="float" id="tab1">
<label>Table 1</label>
<caption>
<p>The sequence of the counter-balanced stimulus presentation.</p>
</caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="top">Order</th>
<th align="center" valign="top"><bold>1</bold></th>
<th align="center" valign="top"><bold>2</bold></th>
<th align="center" valign="top"><bold>3</bold></th>
<th align="center" valign="top"><bold>4</bold></th>
<th align="center" valign="top"><bold>5</bold></th>
<th align="center" valign="top"><bold>6</bold></th>
<th align="center" valign="top"><bold>7</bold></th>
<th align="center" valign="top"><bold>8</bold></th>
<th align="center" valign="top"><bold>9</bold></th>
<th align="center" valign="top"><bold>10</bold></th>
<th align="center" valign="top"><bold>11</bold></th>
<th align="center" valign="top"><bold>12</bold></th>
<th align="center" valign="top"><bold>13</bold></th>
<th align="center" valign="top"><bold>14</bold></th>
<th align="center" valign="top"><bold>15</bold></th>
<th align="center" valign="top"><bold>16</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Group A</td>
<td align="center" valign="top">N-M (1)</td>
<td align="center" valign="top">S-M (16)</td>
<td align="center" valign="top">F-M (31)</td>
<td align="center" valign="top">F-M (32)</td>
<td align="center" valign="top">N-M (2)</td>
<td align="center" valign="top">S-M (17)</td>
<td align="center" valign="top">F-M (33)</td>
<td align="center" valign="top">F-M (34)</td>
<td align="center" valign="top">F-M (35)</td>
<td align="center" valign="top">S-M (18)</td>
<td align="center" valign="top">S-M (19)</td>
<td align="center" valign="top">N-M (3)</td>
<td align="center" valign="top">F-M (36)</td>
<td align="center" valign="top">F-M (37)</td>
<td align="center" valign="top">S-M (20)</td>
<td align="center" valign="top">N-M (4)</td>
</tr>
<tr>
<td align="left" valign="top">Group B</td>
<td align="center" valign="top">F-M (1)</td>
<td align="center" valign="top">N-M (16)</td>
<td align="center" valign="top">S-M (31)</td>
<td align="center" valign="top">S-M (32)</td>
<td align="center" valign="top">F-M (2)</td>
<td align="center" valign="top">N-M (17)</td>
<td align="center" valign="top">S-M (33)</td>
<td align="center" valign="top">F-M (3)</td>
<td align="center" valign="top">S-M (34)</td>
<td align="center" valign="top">S-M (35)</td>
<td align="center" valign="top">N-M (18)</td>
<td align="center" valign="top">F-M (4)</td>
<td align="center" valign="top">F-M (5)</td>
<td align="center" valign="top">S-M (36)</td>
<td align="center" valign="top">N-M (19)</td>
<td align="center" valign="top">F-M (6)</td>
</tr>
<tr>
<td align="left" valign="top">Group C</td>
<td align="center" valign="top">S-M (1)</td>
<td align="center" valign="top">F-M (16)</td>
<td align="center" valign="top">N-M (31)</td>
<td align="center" valign="top">S-M (2)</td>
<td align="center" valign="top">F-M (17)</td>
<td align="center" valign="top">F-M (18)</td>
<td align="center" valign="top">N-M (32)</td>
<td align="center" valign="top">S-M (3)</td>
<td align="center" valign="top">S-M (4)</td>
<td align="center" valign="top">N-M (33)</td>
<td align="center" valign="top">F-M (19)</td>
<td align="center" valign="top">S-M (5)</td>
<td align="center" valign="top">S-M (6)</td>
<td align="center" valign="top">N-M (34)</td>
<td align="center" valign="top">F-M (20)</td>
<td align="center" valign="top">S-M (7)</td>
</tr>
<tr>
<td/>
<td align="center" valign="top"><bold>17</bold></td>
<td align="center" valign="top"><bold>18</bold></td>
<td align="center" valign="top"><bold>19</bold></td>
<td align="center" valign="top"><bold>20</bold></td>
<td align="center" valign="top"><bold>21</bold></td>
<td align="center" valign="top"><bold>22</bold></td>
<td align="center" valign="top"><bold>23</bold></td>
<td align="center" valign="top"><bold>24</bold></td>
<td align="center" valign="top"><bold>25</bold></td>
<td align="center" valign="top"><bold>26</bold></td>
<td align="center" valign="top"><bold>27</bold></td>
<td align="center" valign="top"><bold>28</bold></td>
<td align="center" valign="top"><bold>29</bold></td>
<td align="center" valign="top"><bold>30</bold></td>
<td align="center" valign="top"><bold>31</bold></td>
<td align="center" valign="top"><bold>32</bold></td>
</tr>
<tr>
<td/>
<td align="center" valign="top">S-M (21)</td>
<td align="center" valign="top">S-M (22)</td>
<td align="center" valign="top">N-M (5)</td>
<td align="center" valign="top">F-M (38)</td>
<td align="center" valign="top">S-M (23)</td>
<td align="center" valign="top">F-M (39)</td>
<td align="center" valign="top">N-M (6)</td>
<td align="center" valign="top">F-M (40)</td>
<td align="center" valign="top">N-M (7)</td>
<td align="center" valign="top">S-M (24)</td>
<td align="center" valign="top">F-M (41)</td>
<td align="center" valign="top">F-M (42)</td>
<td align="center" valign="top">S-M (25)</td>
<td align="center" valign="top">N-M (8)</td>
<td align="center" valign="top">S-M (26)</td>
<td align="center" valign="top">F-M (43)</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">N-M (20)</td>
<td align="center" valign="top">S-M (37)</td>
<td align="center" valign="top">F-M (7)</td>
<td align="center" valign="top">F-M (8)</td>
<td align="center" valign="top">N-M (21)</td>
<td align="center" valign="top">S-M (38)</td>
<td align="center" valign="top">F-M (9)</td>
<td align="center" valign="top">S-M (39)</td>
<td align="center" valign="top">F-M (10)</td>
<td align="center" valign="top">N-M (22)</td>
<td align="center" valign="top">S-M (40)</td>
<td align="center" valign="top">F-M (11)</td>
<td align="center" valign="top">F-M (12)</td>
<td align="center" valign="top">N-M (23)</td>
<td align="center" valign="top">S-M (41)</td>
<td align="center" valign="top">F-M (13)</td>
</tr>
<tr>
<td/>
<td align="center" valign="top">F-M (21)</td>
<td align="center" valign="top">N-M (35)</td>
<td align="center" valign="top">F-M (22)</td>
<td align="center" valign="top">S-M (8)</td>
<td align="center" valign="top">F-M (23)</td>
<td align="center" valign="top">N-M (36)</td>
<td align="center" valign="top">F-M (24)</td>
<td align="center" valign="top">S-M (9)</td>
<td align="center" valign="top">F-M (25)</td>
<td align="center" valign="top">F-M (26)</td>
<td align="center" valign="top">N-M (37)</td>
<td align="center" valign="top">S-M (10)</td>
<td align="center" valign="top">F-M (27)</td>
<td align="center" valign="top">F-M (28)</td>
<td align="center" valign="top">N-M (38)</td>
<td align="center" valign="top">S-M (11)</td>
</tr>
<tr>
<td/>
<td align="center" valign="top"><bold>33</bold></td>
<td align="center" valign="top"><bold>34</bold></td>
<td align="center" valign="top"><bold>35</bold></td>
<td align="center" valign="top"><bold>36</bold></td>
<td align="center" valign="top"><bold>37</bold></td>
<td align="center" valign="top"><bold>38</bold></td>
<td align="center" valign="top"><bold>39</bold></td>
<td align="center" valign="top"><bold>40</bold></td>
<td align="center" valign="top"><bold>41</bold></td>
<td align="center" valign="top"><bold>42</bold></td>
<td align="center" valign="top"><bold>43</bold></td>
<td align="center" valign="top"><bold>44</bold></td>
<td align="center" valign="top"><bold>45</bold></td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="center" valign="top">N-M (9)</td>
<td align="center" valign="top">S-M (27)</td>
<td align="center" valign="top">S-M<break/>(28)</td>
<td align="center" valign="top">N-M (10)</td>
<td align="center" valign="top">S-M (29)</td>
<td align="center" valign="top">N-M (11)</td>
<td align="center" valign="top">F-M (45)</td>
<td align="center" valign="top">N-M (12)</td>
<td align="center" valign="top">N-M (13)</td>
<td align="center" valign="top">F-M (44)</td>
<td align="center" valign="top">N-M (14)</td>
<td align="center" valign="top">S-M (30)</td>
<td align="center" valign="top">N-M (15)</td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="center" valign="top">S-M (43)</td>
<td align="center" valign="top">N-M (30)</td>
<td align="center" valign="top">N-M (25)</td>
<td align="center" valign="top">F-M (15)</td>
<td align="center" valign="top">F-M (14)</td>
<td align="center" valign="top">N-M (26)</td>
<td align="center" valign="top">N-M (27)</td>
<td align="center" valign="top">S-M (44)</td>
<td align="center" valign="top">N-M (28)</td>
<td align="center" valign="top">N-M (24)</td>
<td align="center" valign="top">S-M (45)</td>
<td align="center" valign="top">N-M (29)</td>
<td align="center" valign="top">S-M (42)</td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td align="center" valign="top">N-M (39)</td>
<td align="center" valign="top">S-M (12)</td>
<td align="center" valign="top">N-M (40)</td>
<td align="center" valign="top">S-M<break/>(13)</td>
<td align="center" valign="top">N-M (41)</td>
<td align="center" valign="top">F-M (29)</td>
<td align="center" valign="top">N-M (42)</td>
<td align="center" valign="top">F-M (30)</td>
<td align="center" valign="top">N-M (43)</td>
<td align="center" valign="top">S-M (14)</td>
<td align="center" valign="top">N-M (44)</td>
<td align="center" valign="top">S-M (15)</td>
<td align="center" valign="top">N-M (45)</td>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>The number in parentheses&#x2009;=&#x2009;selected stimulus number. N-M&#x2009;=&#x2009;no music; F-M&#x2009;=&#x2009;fast-tempo music; S-M&#x2009;=&#x2009;slow-tempo music.</p>
</table-wrap-foot>
</table-wrap>
<p>To accentuate the moment at which the striking action occurs, the striking sound effect was uniformly inserted in all scenes. Any other sound or sound effect was removed in all conditions. The background music inserted in the slow- and fast-tempo conditions was composed using the same drum solo sound and differed only in tempo. The purpose of using a drum solo was to eliminate the influence of musical elements other than tempo, such as tonality and range. The background music was set at 144 beats per minute (BPM) in the fast-tempo condition and 48 BPM in the slow-tempo condition, classified in prior studies as <italic>allegro</italic> (fast, BPM&#x2009;=&#x2009;120&#x2009;~&#x2009;168) and <italic>largo</italic> (slow, BPM&#x2009;=&#x2009;40&#x2009;~&#x2009;60), respectively (<xref ref-type="bibr" rid="ref32">Palit and Aysia, 2015</xref>; <xref ref-type="bibr" rid="ref8">Fern&#x00E1;ndez-Sotos et al., 2016</xref>; <xref ref-type="bibr" rid="ref26">Liu et al., 2018a</xref>,<xref ref-type="bibr" rid="ref25">b</xref>).</p>
</sec>
<sec id="sec5">
<title>Electroencephalogram analysis</title>
<sec id="sec6">
<title>Electroencephalography recordings</title>
<p>The EEG was recorded from 32 Ag/AgCl electrodes mounted in an ActiCap (Brain Products GmbH, Germany), arranged following the 10/20 system, using BrainVision Recorder (BrainProducts GmbH). The activity of the alpha rhythms has traditionally been linked to the attentional process of the brain (<xref ref-type="bibr" rid="ref18">Klimesch, 2012</xref>), and overall suppression in alpha power has been linked to an increase in attention in general (<xref ref-type="bibr" rid="ref19">Klimesch et al., 1997</xref>; <xref ref-type="bibr" rid="ref41">Sauseng and Klimesch, 2008</xref>). Alpha-rhythm suppression originates in the occipital lobe, within the 8&#x2013;13&#x2009;<italic>Hz</italic> frequency band, and is most actively observed at the O1 and O2 electrodes (<xref ref-type="bibr" rid="ref3">Berger, 1929</xref>; <xref ref-type="bibr" rid="ref9">Fu et al., 2001</xref>; <xref ref-type="bibr" rid="ref34">Perry et al., 2011</xref>). The electrode sites O1 and O2, located over the occipital lobe, were therefore used for alpha-rhythm measurement in the present analysis. Each participant&#x2019;s EEG data were segmented separately for the three conditions, followed by baseline correction and averaging for each condition. By subtracting the baseline from each condition, three difference waves and their grand averages were obtained for each participant&#x2019;s data.</p>
<p>Along with alpha-rhythm suppression, the ERP component P300 was used to evaluate participants&#x2019; attention function. P300 appears when the cognitive model established for successfully responding to a specific task is updated (<xref ref-type="bibr" rid="ref37">Polich, 2007</xref>). Importantly, the brain engages attention-related cognitive functions in this process (<xref ref-type="bibr" rid="ref36">Polich, 2003</xref>; <xref ref-type="bibr" rid="ref44">Sur and Sinha, 2009</xref>). Thus, whether the subject paid attention to the corresponding experimental stimulus can be detected by measuring P300, and how much the subject concentrated on the stimulus can be explored through the ERP amplitude. P300 occurs as a positive deflection 200&#x2009;~&#x2009;400&#x2009;ms after the stimulus is presented (<xref ref-type="bibr" rid="ref13">Hillyard and Kutas, 1983</xref>; <xref ref-type="bibr" rid="ref35">Picton, 1992</xref>), and it is most actively observed at the Fz and Cz electrodes located over the frontal and central regions (<xref ref-type="bibr" rid="ref36">Polich, 2003</xref>, <xref ref-type="bibr" rid="ref37">2007</xref>; <xref ref-type="bibr" rid="ref38">Polich and Criado, 2006</xref>). The present analysis therefore focused on Fz and Cz in the 200&#x2013;400&#x2009;ms time window to measure P300.</p>
<p>Additional electrodes (vertical electrooculogram and horizontal electrooculogram) were attached next to and below the eyes to detect noise caused by eye activity, such as blinking. All electrode impedances were maintained below 10&#x2009;k&#x03A9;. Extracted signals were amplified through actiCHamp (BrainProducts GmbH) and digitized at a sampling rate of 500&#x2009;Hz.</p>
</sec>
<sec id="sec7">
<title>Pre-processing and statistical analysis</title>
<p>BrainVision Analyzer 2.0 (BrainProducts GmbH) was used for pre-processing. EEG data were off-line re-referenced to the averaged mastoids and filtered with a bandpass filter (a low cutoff of 0.5&#x2009;Hz and a high cutoff of 40&#x2009;Hz). Eye blinks and horizontal saccades were corrected semi-automatically using the Independent Component Analysis (ICA) procedure with the infomax algorithm implemented in BrainVision Analyzer. The data were segmented in the range of 0 to 1,000&#x2009;ms from the point at which a punch or kick touches the opponent. The baseline was corrected in the range of &#x2013;200 to 0&#x2009;ms. To extract ERPs, segments were averaged independently for each participant and condition and then grand-averaged.</p>
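<p>For readers who wish to reproduce an equivalent pipeline with open-source tools, a minimal sketch in MNE-Python is given below. The original analysis used BrainVision Analyzer 2.0, so the sketch is an approximation: the file name, the mastoid channel labels (TP9/TP10), the number of ICA components, and the excluded component index are hypothetical placeholders.</p>
<preformat>
# A minimal MNE-Python sketch approximating the pre-processing steps
# described above (re-reference, bandpass filter, infomax ICA, epoching
# with baseline correction, per-condition averaging).
import mne

raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)  # hypothetical file
raw.set_eeg_reference(["TP9", "TP10"])   # averaged mastoids (assumed channel labels)
raw.filter(l_freq=0.5, h_freq=40.0)      # bandpass 0.5-40 Hz

# Semi-automatic ocular-artifact correction with infomax ICA;
# the excluded component would be chosen by visual inspection.
ica = mne.preprocessing.ICA(n_components=20, method="infomax", random_state=0)
ica.fit(raw)
ica.exclude = [0]                        # e.g., a blink component
raw = ica.apply(raw)

# Segment 0-1,000 ms around each strike onset; baseline -200 to 0 ms.
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=1.0,
                    baseline=(-0.2, 0.0), preload=True)
evoked = epochs.average()                # averaged per condition, then grand-averaged
</preformat>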
<p>For the alpha-rhythm analysis, a time-frequency analysis <italic>via</italic> Morlet wavelet transformation in the frequency range of 5&#x2013;30&#x2009;Hz was conducted. BrainVision Analyzer 2.0 (BrainProducts GmbH) was used to extract the alpha rhythm from the EEG signal. Wavelet transformation is a frequency-analysis method that can concurrently represent time, power, and frequency, which makes it suitable for measuring long-lasting cognitive processes such as watching a video (<xref ref-type="bibr" rid="ref30">Muthukumaraswamy et al., 2004</xref>; <xref ref-type="bibr" rid="ref33">Perry and Bentin, 2010</xref>; <xref ref-type="bibr" rid="ref11">German-Sallo and Ciufudean, 2012</xref>; <xref ref-type="bibr" rid="ref21">Kwon and Lee, 2020</xref>). Through wavelet transformation, the energy value (squared amplitude) corresponding to the alpha rhythm was calculated across the groups and conditions. A one-way repeated-measures ANOVA using SPSS (Statistical Package for the Social Sciences) was then conducted to examine whether there were statistically significant differences among the conditions (slow-tempo background music, fast-tempo background music, and no background music).</p>
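<p>A sketch of this time-frequency step, again in MNE-Python with <italic>statsmodels</italic> standing in for SPSS, is given below. The container <italic>epochs_by_condition</italic> (one epochs object per condition and participant) is a hypothetical placeholder, and the alpha-band indexing assumes 1-Hz steps from 5 to 30&#x2009;Hz.</p>
<preformat>
# A sketch of the alpha-power computation and the repeated-measures
# ANOVA: Morlet wavelet power over 5-30 Hz at O1/O2, averaged within
# the 8-13 Hz alpha band, then compared across the three conditions.
import numpy as np
import pandas as pd
from mne.time_frequency import tfr_morlet
from statsmodels.stats.anova import AnovaRM

freqs = np.arange(5.0, 31.0, 1.0)   # 5-30 Hz in 1-Hz steps
alpha_band = slice(3, 9)            # indices of 8-13 Hz within freqs

rows = []
for subject, per_cond in epochs_by_condition.items():   # hypothetical dict
    for cond, epochs_cond in per_cond.items():          # "N-M", "S-M", "F-M"
        power = tfr_morlet(epochs_cond, freqs=freqs, n_cycles=freqs / 2.0,
                           return_itc=False, picks=["O1", "O2"])
        # Mean alpha power over O1/O2, 8-13 Hz, and the full segment.
        rows.append({"subject": subject, "condition": cond,
                     "alpha": power.data[:, alpha_band, :].mean()})

df = pd.DataFrame(rows)
print(AnovaRM(df, depvar="alpha", subject="subject",
              within=["condition"]).fit())
</preformat>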
<p>In the ERP analysis, the same time range used in the alpha-rhythm analysis was used to measure P300: from the point at which a punch or kick lands up to 300&#x2009;ms after the hit. The amplitude (&#x03BC;V) corresponding to P300 was calculated, and the resulting values were compared across the experimental conditions through a one-way repeated-measures ANOVA using SPSS.</p>
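<p>A sketch of the P300 measurement is given below: the mean amplitude at Fz and Cz is taken from each condition&#x2019;s averaged waveform in the 200&#x2013;400&#x2009;ms post-stimulus window named in the recordings subsection. The container <italic>evokeds</italic> (condition name to averaged waveform) is a hypothetical placeholder.</p>
<preformat>
# A sketch of P300 quantification: mean amplitude over Fz/Cz in the
# 200-400 ms window, computed from each condition's averaged waveform.
def p300_mean_amplitude(evoked):
    """Mean amplitude (volts) over Fz/Cz, 200-400 ms post-stimulus."""
    return (evoked.copy()
                  .pick(["Fz", "Cz"])
                  .crop(tmin=0.2, tmax=0.4)
                  .data.mean())

for cond, evoked in evokeds.items():     # "N-M", "S-M", "F-M"
    print(cond, f"{p300_mean_amplitude(evoked) * 1e6:.3f} uV")
</preformat>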
</sec>
</sec>
</sec>
<sec id="sec8" sec-type="results">
<title>Results</title>
<p>This study examined whether the presence of background music (absence vs. presence) leads to different levels of attentional processing among the audience and whether the tempo (slow vs. fast) of background music is associated with different levels of attention. First, a one-way repeated-measures ANOVA was conducted on the degree of alpha-rhythm inhibition across the three conditions: no background music (N-M), fast-tempo background music (F-M), and slow-tempo background music (S-M). The statistical analysis suggests that there is a significant difference among the conditions [<italic>F</italic> (2, 40)&#x2009;=&#x2009;5.745, <italic>p</italic>&#x2009;=&#x2009;0.006, <italic>&#x03B7;<sub>p</sub></italic><sup>2</sup>&#x2009;=&#x2009;0.223].</p>
<p>Bonferroni <italic>post-hoc</italic> analysis suggests that there is a significant difference (<italic>p</italic>&#x2009;&#x003C;&#x2009;0.05) between the no background music (<italic>M</italic>&#x2009;=&#x2009;&#x2212;0.037&#x2009;&#x03BC;V<sup>2</sup>, <italic>SD</italic>&#x2009;=&#x2009;0.022) and fast-tempo background music conditions (<italic>M</italic>&#x2009;=&#x2009;&#x2212;0.171&#x2009;&#x03BC;V<sup>2</sup>, <italic>SD</italic>&#x2009;=&#x2009;0.042). In the fast-tempo background music condition, the alpha rhythm was suppressed to a greater degree than in the no background music condition, suggesting that participants&#x2019; attentional processes were activated to a greater degree with fast background music. However, no statistically significant difference was found between the no background music and slow-tempo music conditions (<italic>M</italic>&#x2009;=&#x2009;&#x2212;0.072&#x2009;&#x03BC;V<sup>2</sup>, <italic>SD</italic>&#x2009;=&#x2009;0.021; <italic>p</italic>&#x2009;=&#x2009;0.11). There was also no significant difference between the fast- and slow-tempo background music conditions (<italic>p</italic>&#x2009;=&#x2009;0.16).</p>
<p>To examine whether the presence of background music leads to different levels of alpha-rhythm suppression, we combined the slow- and fast-tempo conditions (SF-M; <italic>M</italic>&#x2009;=&#x2009;&#x2212;0.121&#x2009;&#x03BC;V<sup>2</sup>, <italic>SD</italic>&#x2009;=&#x2009;0.024) and compared them with the no background music condition. A one-way repeated-measures ANOVA suggests that there is a significant difference between the average value of the slow and fast background music conditions and the value of the no background music condition [<italic>F</italic> (2, 40)&#x2009;=&#x2009;5.745, <italic>p</italic>&#x2009;=&#x2009;0.006, <italic>&#x03B7;<sub>p</sub></italic><sup>2</sup>&#x2009;=&#x2009;0.223]. This indicates that the conditions with background music were associated with a greater degree of alpha-rhythm suppression, and thus greater attentional processing, compared to the no background music condition. The alpha-rhythm suppression value for each condition is illustrated in <xref rid="fig2" ref-type="fig">Figure 2</xref>.</p>
<fig position="float" id="fig2">
<label>Figure 2</label>
<caption>
<p>Alpha-rhythm suppression level by condition (<sup>&#x002A;</sup>denotes a statistically significant difference).</p>
</caption>
<graphic xlink:href="fpsyg-13-933497-g002.tif"/>
</fig>
<p>Second, another one-way repeated-measures ANOVA was performed on P300 for the ERP analysis. The ANOVA revealed a statistically significant difference among the P300 levels of the three conditions [<italic>F</italic> (2, 40)&#x2009;=&#x2009;11.500, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001, <italic>&#x03B7;<sub>p</sub></italic><sup>2</sup>&#x2009;=&#x2009;0.365]. Bonferroni <italic>post-hoc</italic> analysis indicated a significant difference between the no background music condition (<italic>M</italic>&#x2009;=&#x2009;0.730&#x2009;&#x03BC;V<sup>2</sup>, <italic>SD</italic>&#x2009;=&#x2009;0.044) and the fast-tempo condition (<italic>M</italic>&#x2009;=&#x2009;1.015&#x2009;&#x03BC;V<sup>2</sup>, <italic>SD</italic>&#x2009;=&#x2009;0.041, <italic>p</italic>&#x2009;&#x003C;&#x2009;0.001). Additionally, a difference was observed between the no background music and slow-tempo conditions (<italic>M</italic>&#x2009;=&#x2009;0.895&#x2009;&#x03BC;V<sup>2</sup>, <italic>SD</italic>&#x2009;=&#x2009;0.039; <italic>p</italic>&#x2009;&#x003C;&#x2009;0.05). However, there was no significant difference between the slow- and fast-tempo background music conditions (<italic>p</italic>&#x2009;=&#x2009;0.189).</p>
<p>To examine whether P300 levels differ based on the presence of background music, another repeated-measures ANOVA was conducted comparing the average P300 value of the slow- and fast-tempo background music conditions with that of the no background music condition. The test suggests that the P300 level was significantly greater in the background music conditions (slow&#x2009;+&#x2009;fast tempo; <italic>M</italic>&#x2009;=&#x2009;0.955&#x2009;&#x03BC;V<sup>2</sup>, <italic>SD</italic>&#x2009;=&#x2009;0.027) than in the no background music condition, indicating that viewers were more attentive when background music was present than when it was absent. The P300 value for each condition is illustrated in <xref rid="fig3" ref-type="fig">Figure 3</xref>.</p>
<fig position="float" id="fig3">
<label>Figure 3</label>
<caption>
<p>Event-related potential (ERP) inhibition level by condition (<sup>&#x002A;</sup>, <sup>&#x002A;&#x002A;</sup>denote statistically significant differences).</p>
</caption>
<graphic xlink:href="fpsyg-13-933497-g003.tif"/>
</fig>
</sec>
<sec id="sec9" sec-type="discussions">
<title>Discussion</title>
<p>The present study examined the impact of background music on the attentional process of a film audience. In particular, it examined whether the presence and tempo of background music lead to different levels of attention. Two EEG analyses (alpha-rhythm suppression and ERP) were used, each relying on a different indicator of the attentional process.</p>
<p>The analyses suggest that the presence of background music had a positive effect on the audience&#x2019;s attentional process. The two analyses uniformly showed that greater levels of attentional processing were involved in the conditions with background music (compared to the condition without any background music). This implies that the audience can pay more attention to the visual imagery of the footage when background music is present. Previous studies examining the relationship between background music and attention have shown inconsistent results: background music enhanced attention in some cases (<xref ref-type="bibr" rid="ref1">Allan, 2006</xref>; <xref ref-type="bibr" rid="ref23">Lavack et al., 2008</xref>) but not in others (<xref ref-type="bibr" rid="ref46">Wakshlag et al., 1982</xref>; <xref ref-type="bibr" rid="ref5">Chou, 2010</xref>; <xref ref-type="bibr" rid="ref42">Shih et al., 2012</xref>).</p>
<p>From the perspective of limited capacity theory (<xref ref-type="bibr" rid="ref15">Kahneman, 1973</xref>), attention is a limited cognitive resource. Therefore, background music during a cognitive activity (e.g., reading) can consume one&#x2019;s cognitive resources and ultimately reduce one&#x2019;s attention to the task. Our findings, however, indicate that in a situation in which cognitively demanding tasks are not required, background music can increase attention levels rather than deplete the restricted attentional resources.</p>
<p>We think that this finding may be related to an elevation of the audience&#x2019;s arousal level. As noted earlier, background music can induce emotional responses from the audience by generating the overall mood of a film (<xref ref-type="bibr" rid="ref14">Holman, 2012</xref>). A prior study found that the state of being affectively aroused can influence cognitive processes, whereby attention to visual stimuli is enhanced (<xref ref-type="bibr" rid="ref27">McConnell and Shore, 2011</xref>). From this perspective, the mood changes produced by our background music may have heightened participants&#x2019; affective arousal levels and ultimately enhanced their attention to the stimulus scenes.</p>
<p>Our findings also indicate that the tempo did <italic>not</italic> influence the level of attention. There was no statistically significant difference between the slow- and fast-tempo background music conditions in either the degree of alpha-rhythm inhibition or P300. This null finding somewhat contradicts our prediction and previous findings suggesting that different tempos would engender different levels of attention. The null findings should, however, be considered with caution, as our results are based on a laboratory setting in which the entire viewing environment was artificially manipulated and all musical variables were controlled for. In the real world, the attentional process of an audience is not influenced solely by a few seconds of audio-visual stimulation but by various factors that form the overall viewing experience, such as dramaturgy and storytelling. Therefore, various possibilities should be considered carefully in addressing the effect of tempo.</p>
<p>As suggested by our findings, the tempo of background music may be an insignificant factor in the audience&#x2019;s attentional process. Musical factors other than tempo, such as pitch range and tonality, may play a more important role in inducing attention. Alternatively, tempo may not have a decisive effect by itself but may only work in interaction with other elements. For instance, tempo may influence the attentional process only in particular conditions, such as when a fast tempo is combined with a high pitch range or when the rhythm of the video and the speed of the tempo harmoniously match one another. Future studies should consider these possibilities when further examining the influence of tempo.</p>
<p>Additionally, the tempos of the background music used in our study might not have been perceived as fast or slow enough by the participants. The tempos used in the slow and fast background music conditions were <italic>largo</italic> and <italic>allegro</italic>, respectively, which are clearly distinguishable from one another and considered relatively slow and fast according to music theory. However, some participants may not have sufficiently perceived the difference in tempo, as such judgments can vary across individuals. Future studies may therefore need to consider participants&#x2019; individual differences in perceiving different tempos and use a more diverse set of tempos when examining the effect of tempo.</p>
<p>To examine the effect of background music, the present study used two analyses based on alpha-rhythm suppression and P300, respectively. Most of the findings from the two analyses were identical, offering more robust evidence regarding the impact of background music on the attentional process. One difference, however, was found between the analyses: while the ERP analysis revealed a statistically significant difference between the no-music and slow-tempo music conditions, the alpha-rhythm suppression analysis did not. This difference may be due to the different capabilities each analysis is known to offer. Both ERP and alpha-rhythm suppression analyses allow the measurement of a real-time cognitive response to a stimulus, yet ERP is considered the more sensitive tool, as the response is measured at its peak (with millisecond resolution after stimulus presentation), whereas in the alpha-rhythm suppression approach the measurement spans 1&#x2009;s and is averaged over that entire window. The difference in neural response elicited by the no-background-music and slow-music conditions may have been extremely small, and the ERP, which captures responses at a particular point in time, may have been sensitive enough (more so than alpha-rhythm suppression) to detect the difference between these two conditions.</p>
<p>Despite the growing importance of sound in visual media, few neurophysiological studies to date have examined the effect of sound on the audience. This study presents empirical data from an EEG experiment and offers a theoretical foundation for the effect of background music on the audience. More research that takes other factors into consideration is needed to provide a more accurate and richer discussion of this topic. Nevertheless, we hope that the findings of the present study serve as initial data offering a theoretical base for the effect of background music and further contribute to the production of audience-oriented media content.</p>
</sec>
<sec id="sec10" sec-type="data-availability">
<title>Data availability statement</title>
<p>The data that support the findings of this study are available from the corresponding author, SL, upon reasonable request.</p>
</sec>
<sec id="sec11">
<title>Ethics statement</title>
<p>The studies involving human participants were reviewed and approved by Young-Min Choi, Dong-A University. The participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="sec12">
<title>Author contributions</title>
<p>Y-SK conceived and designed the analysis, collected the data, performed the analysis, and wrote the paper. JL conducted the experiment and wrote the paper. SL conceived and designed the analysis and wrote, reviewed, and edited the paper. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec id="sec13" sec-type="funding-information">
<title>Funding</title>
<p>This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT; No. NRF-2020R1G1A1101384).</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="sec100" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="ref1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Allan</surname> <given-names>D.</given-names></name></person-group> (<year>2006</year>). <article-title>Effects of popular music in advertising on attention and memory</article-title>. <source>J. Advert. Res.</source> <volume>46</volume>, <fpage>434</fpage>&#x2013;<lpage>444</lpage>. doi: <pub-id pub-id-type="doi">10.2501/S0021849906060491</pub-id></citation></ref>
<ref id="ref2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ausin</surname> <given-names>J. M.</given-names></name> <name><surname>Bigne</surname> <given-names>E.</given-names></name> <name><surname>Mar&#x00ED;n</surname> <given-names>J.</given-names></name> <name><surname>Guixeres</surname> <given-names>J.</given-names></name> <name><surname>Alcaniz</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>The background music-content congruence of TV advertisements: a neurophysiological study</article-title>. <source>Eur. Res. Manag. Bus. Econ.</source> <volume>27</volume>:<fpage>100154</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.iedeen.2021.100154</pub-id></citation></ref>
<ref id="ref3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berger</surname> <given-names>H.</given-names></name></person-group> (<year>1929</year>). <article-title>&#x00DC;ber das elektroenkephalogramm des menschen</article-title>. <source>Arch. Psychiatr. Nervenkr.</source> <volume>87</volume>, <fpage>527</fpage>&#x2013;<lpage>570</lpage>. doi: <pub-id pub-id-type="doi">10.1007/BF01797193</pub-id></citation></ref>
<ref id="ref4"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Burns</surname> <given-names>R. B.</given-names></name> <name><surname>Dobson</surname> <given-names>C. B.</given-names></name></person-group> (<year>2012</year>). <source>Experimental Psychology: Research Methods and Statistics</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Springer Science &#x0026; Business Media</publisher-name>.</citation></ref>
<ref id="ref5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chou</surname> <given-names>P. T. M.</given-names></name></person-group> (<year>2010</year>). <article-title>Attention drainage effect: how background music effects concentration in Taiwanese college students</article-title>. <source>J. Scholarship Teach. Learn.</source> <volume>10</volume>, <fpage>36</fpage>&#x2013;<lpage>46</lpage>.</citation></ref>
<ref id="ref6"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Cohen</surname> <given-names>M. X.</given-names></name></person-group> (<year>2014</year>). <source>Analyzing Neural Time Series Data: Theory and Practice</source>. <publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>MIT Press</publisher-name>.</citation></ref>
<ref id="ref7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Du</surname> <given-names>M.</given-names></name> <name><surname>Jiang</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>Z.</given-names></name> <name><surname>Man</surname> <given-names>D.</given-names></name> <name><surname>Jiang</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>The effects of background music on neural responses during reading comprehension</article-title>. <source>Sci. Rep.</source> <volume>10</volume>, <fpage>1</fpage>&#x2013;<lpage>10</lpage>. doi: <pub-id pub-id-type="doi">10.1038/s41598-020-75623-3</pub-id></citation></ref>
<ref id="ref8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fern&#x00E1;ndez-Sotos</surname> <given-names>A.</given-names></name> <name><surname>Fern&#x00E1;ndez-Caballero</surname> <given-names>A.</given-names></name> <name><surname>Latorre</surname> <given-names>J. M.</given-names></name></person-group> (<year>2016</year>). <article-title>Influence of tempo and rhythmic unit in musical emotion regulation</article-title>. <source>Front. Comput. Neurosci.</source> <volume>10</volume>:<fpage>80</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fncom.2016.00080</pub-id></citation></ref>
<ref id="ref9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fu</surname> <given-names>K. M. G.</given-names></name> <name><surname>Foxe</surname> <given-names>J. J.</given-names></name> <name><surname>Murray</surname> <given-names>M. M.</given-names></name> <name><surname>Higgins</surname> <given-names>B. A.</given-names></name> <name><surname>Javitt</surname> <given-names>D. C.</given-names></name> <name><surname>Schroeder</surname> <given-names>C. E.</given-names></name></person-group> (<year>2001</year>). <article-title>Attention-dependent suppression of distracter visual input can be cross-modally cued as indexed by anticipatory parieto&#x2013;occipital alpha-band oscillations</article-title>. <source>Cogn. Brain Res.</source> <volume>12</volume>, <fpage>145</fpage>&#x2013;<lpage>152</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S0926-6410(01)00034-9</pub-id>, PMID: <pub-id pub-id-type="pmid">11489617</pub-id></citation></ref>
<ref id="ref10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gagnon</surname> <given-names>L.</given-names></name> <name><surname>Peretz</surname> <given-names>I.</given-names></name></person-group> (<year>2003</year>). <article-title>Mode and tempo relative contributions to &#x201C;happy-sad&#x201D; judgements in equitone melodies</article-title>. <source>Cognit. Emot.</source> <volume>17</volume>, <fpage>25</fpage>&#x2013;<lpage>40</lpage>. doi: <pub-id pub-id-type="doi">10.1080/02699930302279</pub-id>, PMID: <pub-id pub-id-type="pmid">29715736</pub-id></citation></ref>
<ref id="ref11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>German-Sallo</surname> <given-names>Z.</given-names></name> <name><surname>Ciufudean</surname> <given-names>C.</given-names></name></person-group> (<year>2012</year>). <article-title>Waveform-adapted wavelet denoising of ECG signals</article-title>. <source>Adv. Math. Computat. Methods</source> <fpage>172</fpage>&#x2013;<lpage>175</lpage>.</citation></ref>
<ref id="ref12"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Goodwin</surname> <given-names>C. J.</given-names></name></person-group> (<year>2009</year>). <source>Research in Psychology: Methods and Design</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>John Wiley &#x0026; Sons</publisher-name>.</citation></ref>
<ref id="ref13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hillyard</surname> <given-names>S. A.</given-names></name> <name><surname>Kutas</surname> <given-names>M.</given-names></name></person-group> (<year>1983</year>). <article-title>Electrophysiology of cognitive processing</article-title>. <source>Annu. Rev. Psychol.</source> <volume>34</volume>, <fpage>33</fpage>&#x2013;<lpage>61</lpage>. doi: <pub-id pub-id-type="doi">10.1146/annurev.ps.34.020183.000341</pub-id></citation></ref>
<ref id="ref14"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Holman</surname> <given-names>H.</given-names></name></person-group> (<year>2012</year>). <source>Sound for Film and Television</source>. <publisher-loc>Oxford</publisher-loc>: <publisher-name>Taylor &#x0026; Francis</publisher-name>.</citation></ref>
<ref id="ref001"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Iwaki</surname> <given-names>T.</given-names></name> <name><surname>Hayashi</surname> <given-names>M.</given-names></name> <name><surname>Hori</surname> <given-names>T.</given-names></name></person-group> (<year>1997</year>). <article-title>Changes in alpha band EEG activity in the frontal area after stimulation with music of different affective content</article-title>. <source>Percept. Mot. Skills</source> <volume>84</volume>, <fpage>515</fpage>&#x2013;526. doi: <pub-id pub-id-type="doi">10.2466/pms.1997.84.2.515</pub-id></citation></ref>
<ref id="ref15"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Kahneman</surname> <given-names>D.</given-names></name></person-group> (<year>1973</year>). <source>Attention and Effort</source>. <publisher-loc>Englewood Cliffs, NJ</publisher-loc>: <publisher-name>Prentice Hall</publisher-name>.</citation></ref>
<ref id="ref16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kallinen</surname> <given-names>K.</given-names></name></person-group> (<year>2002</year>). <article-title>Reading news from a pocket computer in a distracting environment: effects of the tempo of background music</article-title>. <source>Comput. Hum. Behav.</source> <volume>18</volume>, <fpage>537</fpage>&#x2013;<lpage>551</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S0747-5632(02)00005-5</pub-id></citation></ref>
<ref id="ref17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>K&#x00E4;mpfe</surname> <given-names>J.</given-names></name> <name><surname>Sedlmeier</surname> <given-names>P.</given-names></name> <name><surname>Renkewitz</surname> <given-names>F.</given-names></name></person-group> (<year>2011</year>). <article-title>The impact of background music on adult listeners: a meta-analysis</article-title>. <source>Psychol. Music</source> <volume>39</volume>, <fpage>424</fpage>&#x2013;<lpage>448</lpage>. doi: <pub-id pub-id-type="doi">10.1177/0305735610376261</pub-id></citation></ref>
<ref id="ref18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klimesch</surname> <given-names>W.</given-names></name></person-group> (<year>2012</year>). <article-title>Alpha-band oscillations, attention, and controlled access to stored information</article-title>. <source>Trends Cogn. Sci.</source> <volume>16</volume>, <fpage>606</fpage>&#x2013;<lpage>617</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.tics.2012.10.007</pub-id>, PMID: <pub-id pub-id-type="pmid">23141428</pub-id></citation></ref>
<ref id="ref19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klimesch</surname> <given-names>W.</given-names></name> <name><surname>Doppelmayr</surname> <given-names>M.</given-names></name> <name><surname>Pachinger</surname> <given-names>T.</given-names></name> <name><surname>Ripper</surname> <given-names>B.</given-names></name></person-group> (<year>1997</year>). <article-title>Brain oscillations and human memory: EEG correlates in the upper alpha and theta band</article-title>. <source>Neurosci. Lett.</source> <volume>238</volume>, <fpage>9</fpage>&#x2013;<lpage>12</lpage>. doi: <pub-id pub-id-type="doi">10.1016/S0304-3940(97)00771-4</pub-id>, PMID: <pub-id pub-id-type="pmid">9464642</pub-id></citation></ref>
<ref id="ref20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kwon</surname> <given-names>Y.</given-names></name> <name><surname>Ha</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Cognitive processing of digital video: focusing on the audio and video elements of sports broadcast. PREVIEW: the Korean journal of digital moving</article-title>. <source>Image</source> <volume>16</volume>, <fpage>7</fpage>&#x2013;<lpage>25</lpage>.</citation></ref>
<ref id="ref21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kwon</surname> <given-names>Y.</given-names></name> <name><surname>Lee</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Cognitive processing of sound effects in television sports boadcasting</article-title>. <source>J. Radio Audio Media</source> <volume>27</volume>, <fpage>93</fpage>&#x2013;<lpage>118</lpage>. doi: <pub-id pub-id-type="doi">10.1080/19376529.2018.1541899</pub-id></citation></ref>
<ref id="ref22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kwon</surname> <given-names>Y.</given-names></name> <name><surname>Lee</surname> <given-names>J.</given-names></name> <name><surname>Lee</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>The effect of background music on the viewer&#x2019;s empathy and immersion: EEG mu-rhythm analysis using wavelet transform. PREVIEW: the Korean journal of digital moving</article-title>. <source>Image</source> <volume>18</volume>, <fpage>7</fpage>&#x2013;<lpage>30</lpage>.</citation></ref>
<ref id="ref23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lavack</surname> <given-names>A. M.</given-names></name> <name><surname>Thakor</surname> <given-names>M. V.</given-names></name> <name><surname>Bottausci</surname> <given-names>I.</given-names></name></person-group> (<year>2008</year>). <article-title>Music-brand congruency in highand low-cognition radio advertising</article-title>. <source>Int. J. Advert.</source> <volume>27</volume>, <fpage>549</fpage>&#x2013;<lpage>568</lpage>. doi: <pub-id pub-id-type="doi">10.2501/S0265048708080141</pub-id></citation></ref>
<ref id="ref24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>E. K.</given-names></name> <name><surname>Lee</surname> <given-names>S. E.</given-names></name> <name><surname>Kwon</surname> <given-names>Y. S.</given-names></name></person-group> (<year>2020</year>). <article-title>The effect of lyrical and non-lyrical background music on different types of language processing-an ERP study</article-title>. <source>Korean J. Cogn. Sci.</source> <volume>31</volume>, <fpage>155</fpage>&#x2013;<lpage>178</lpage>. doi: <pub-id pub-id-type="doi">10.19066/COGSCI.2020.31.4.003</pub-id></citation></ref>
<ref id="ref25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>G.</given-names></name> <name><surname>Wei</surname> <given-names>D.</given-names></name> <name><surname>Li</surname> <given-names>Q.</given-names></name> <name><surname>Yuan</surname> <given-names>G.</given-names></name> <name><surname>Wu</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2018b</year>). <article-title>Effects of musical tempo on musicians&#x2019; and non-musicians&#x2019; emotional experience when listening to music</article-title>. <source>Front. Psychol.</source> <volume>9</volume>:<fpage>2118</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fpsyg.2018.02118</pub-id>, PMID: <pub-id pub-id-type="pmid">30483173</pub-id></citation></ref>
<ref id="ref26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>S.</given-names></name> <name><surname>Yu</surname> <given-names>N.</given-names></name> <name><surname>Peng</surname> <given-names>Y.</given-names></name> <name><surname>Wen</surname> <given-names>Y.</given-names></name> <name><surname>Tang</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2018a</year>). <article-title>Effects of musical tempo on musicians&#x2019; and non-musicians&#x2019; emotional experience when listening to music</article-title>. <source>Front. Psychol.</source> <volume>9</volume>:1. doi: <pub-id pub-id-type="doi">10.3389/fpsyt.2018.00001</pub-id>, PMID: <pub-id pub-id-type="pmid">29410632</pub-id></citation></ref>
<ref id="ref27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McConnell</surname> <given-names>M. M.</given-names></name> <name><surname>Shore</surname> <given-names>D. I.</given-names></name></person-group> (<year>2011</year>). <article-title>Upbeat and happy: arousal as an important factor in studying attention</article-title>. <source>Cognit. Emot.</source> <volume>25</volume>, <fpage>1184</fpage>&#x2013;<lpage>1195</lpage>. doi: <pub-id pub-id-type="doi">10.1080/02699931.2010.524396</pub-id>, PMID: <pub-id pub-id-type="pmid">22017613</pub-id></citation></ref>
<ref id="ref28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mitchell</surname> <given-names>W. J.</given-names></name></person-group> (<year>2005</year>). <article-title>There are no visual media</article-title>. <source>J. Vis. Cult.</source> <volume>4</volume>, <fpage>257</fpage>&#x2013;<lpage>266</lpage>. doi: <pub-id pub-id-type="doi">10.1177/1470412905054673</pub-id></citation></ref>
<ref id="ref29"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Mulert</surname> <given-names>C.</given-names></name> <name><surname>Lemieux</surname> <given-names>L.</given-names></name></person-group> (<year>2009</year>). <source>EEG&#x2013;fMRI: Physiological Basis, Technique, and Applications</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer Science &#x0026; Business Media</publisher-name>.</citation></ref>
<ref id="ref30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Muthukumaraswamy</surname> <given-names>S. D.</given-names></name> <name><surname>Johnson</surname> <given-names>B. W.</given-names></name> <name><surname>McNair</surname> <given-names>N. A.</given-names></name></person-group> (<year>2004</year>). <article-title>Mu rhythm modulation during observation of an object-directed grasp</article-title>. <source>Cogn. Brain Res.</source> <volume>19</volume>, <fpage>195</fpage>&#x2013;<lpage>201</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.cogbrainres.2003.12.001</pub-id>, PMID: <pub-id pub-id-type="pmid">15019715</pub-id></citation></ref>
<ref id="ref31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nguyen</surname> <given-names>T.</given-names></name> <name><surname>Grahn</surname> <given-names>J. A.</given-names></name></person-group> (<year>2017</year>). <article-title>Mind your music: the effects of music-induced mood and arousal across different memory tasks</article-title>. <source>Psychomusicol. Music Mind Brain</source> <volume>27</volume>, <fpage>81</fpage>&#x2013;<lpage>94</lpage>. doi: <pub-id pub-id-type="doi">10.1037/pmu0000178</pub-id></citation></ref>
<ref id="ref32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Palit</surname> <given-names>H. C.</given-names></name> <name><surname>Aysia</surname> <given-names>D. A. Y.</given-names></name></person-group> (<year>2015</year>). <article-title>The effect of pop musical tempo during post treadmill exercise recovery time</article-title>. <source>Procedia Manuf.</source> <volume>4</volume>, <fpage>17</fpage>&#x2013;<lpage>22</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.promfg.2015.11.009</pub-id></citation></ref>
<ref id="ref33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perry</surname> <given-names>A.</given-names></name> <name><surname>Bentin</surname> <given-names>S.</given-names></name></person-group> (<year>2010</year>). <article-title>Does focusing on hand-grasping intentions modulate electroencephalogram &#x03BC; and &#x03B1; suppressions?</article-title> <source>Neuroreport</source> <volume>21</volume>, <fpage>1050</fpage>&#x2013;<lpage>1054</lpage>. doi: <pub-id pub-id-type="doi">10.1097/WNR.0b013e32833fcb71</pub-id>, PMID: <pub-id pub-id-type="pmid">20838261</pub-id></citation></ref>
<ref id="ref34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Perry</surname> <given-names>A.</given-names></name> <name><surname>Stein</surname> <given-names>L.</given-names></name> <name><surname>Bentin</surname> <given-names>S.</given-names></name></person-group> (<year>2011</year>). <article-title>Motor and attentional mechanisms involved in social interaction&#x2014;evidence from mu and alpha EEG suppression</article-title>. <source>NeuroImage</source> <volume>58</volume>, <fpage>895</fpage>&#x2013;<lpage>904</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neuroimage.2011.06.060</pub-id>, PMID: <pub-id pub-id-type="pmid">21742042</pub-id></citation></ref>
<ref id="ref35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Picton</surname> <given-names>T. W.</given-names></name></person-group> (<year>1992</year>). <article-title>The P300 wave of the human event-related potential</article-title>. <source>J. Clin. Neurophysiol.</source> <volume>9</volume>, <fpage>456</fpage>&#x2013;<lpage>479</lpage>. doi: <pub-id pub-id-type="doi">10.1097/00004691-199210000-00002</pub-id></citation></ref>
<ref id="ref36"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Polich</surname> <given-names>J.</given-names></name></person-group> (<year>2003</year>). &#x201C;<article-title>Theoretical overview of P3a and P3b,</article-title>&#x201D; in <source>Detection of Change</source> (<publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>83</fpage>&#x2013;<lpage>98</lpage>.</citation></ref>
<ref id="ref37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Polich</surname> <given-names>J.</given-names></name></person-group> (<year>2007</year>). <article-title>Updating P300: an integrative theory of P3a and P3b</article-title>. <source>Clin. Neurophysiol.</source> <volume>118</volume>, <fpage>2128</fpage>&#x2013;<lpage>2148</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.clinph.2007.04.019</pub-id>, PMID: <pub-id pub-id-type="pmid">17573239</pub-id></citation></ref>
<ref id="ref38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Polich</surname> <given-names>J.</given-names></name> <name><surname>Criado</surname> <given-names>J. R.</given-names></name></person-group> (<year>2006</year>). <article-title>Neuropsychology and neuropharmacology of P3a and P3b</article-title>. <source>Int. J. Psychophysiol.</source> <volume>60</volume>, <fpage>172</fpage>&#x2013;<lpage>185</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.ijpsycho.2005.12.012</pub-id>, PMID: <pub-id pub-id-type="pmid">16510201</pub-id></citation></ref>
<ref id="ref39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Proverbio</surname> <given-names>A.</given-names></name> <name><surname>Nasi</surname> <given-names>V. L.</given-names></name> <name><surname>Arcari</surname> <given-names>L. A.</given-names></name> <name><surname>Benedetto</surname> <given-names>F.</given-names></name> <name><surname>Guardamagna</surname> <given-names>M.</given-names></name> <name><surname>Gazzola</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>The effect of background music on episodic memory and autonomic responses: listening to emotionally touching music enhances facial memory capacity</article-title>. <source>Sci. Rep.</source> <volume>5</volume>:<fpage>15219</fpage>. doi: <pub-id pub-id-type="doi">10.1038/srep15219</pub-id>, PMID: <pub-id pub-id-type="pmid">26469712</pub-id></citation></ref>
<ref id="ref40"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Sanei</surname> <given-names>S.</given-names></name> <name><surname>Chambers</surname> <given-names>J. A.</given-names></name></person-group> (<year>2013</year>). <source>EEG Signal Processing</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>John Wiley &#x0026; Sons</publisher-name>.</citation></ref>
<ref id="ref41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sauseng</surname> <given-names>P.</given-names></name> <name><surname>Klimesch</surname> <given-names>W.</given-names></name></person-group> (<year>2008</year>). <article-title>What does phase information of oscillatory brain activity tell us about cognitive processes?</article-title> <source>Neurosci. Biobehav. Rev.</source> <volume>32</volume>, <fpage>1001</fpage>&#x2013;<lpage>1013</lpage>. doi: <pub-id pub-id-type="doi">10.1016/j.neubiorev.2008.03.014</pub-id>, PMID: <pub-id pub-id-type="pmid">18499256</pub-id></citation></ref>
<ref id="ref42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shih</surname> <given-names>Y. N.</given-names></name> <name><surname>Huang</surname> <given-names>R. H.</given-names></name> <name><surname>Chiang</surname> <given-names>H. Y.</given-names></name></person-group> (<year>2012</year>). <article-title>Background music: effects on attention performance</article-title>. <source>Work</source> <volume>42</volume>, <fpage>573</fpage>&#x2013;<lpage>578</lpage>. doi: <pub-id pub-id-type="doi">10.3233/WOR-2012-1410</pub-id></citation></ref>
<ref id="ref43"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Stangor</surname> <given-names>C.</given-names></name></person-group> (<year>2014</year>). <source>Research Methods for the Behavioral Sciences</source>. <publisher-loc>Boston, MA</publisher-loc>: <publisher-name>Cengage Learning</publisher-name>.</citation></ref>
<ref id="ref44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sur</surname> <given-names>S.</given-names></name> <name><surname>Sinha</surname> <given-names>V. K.</given-names></name></person-group> (<year>2009</year>). <article-title>Event-related potential: an overview</article-title>. <source>Ind. Psychiatry J.</source> <volume>18</volume>, <fpage>70</fpage>&#x2013;<lpage>73</lpage>. doi: <pub-id pub-id-type="doi">10.4103/0972-6748.57865</pub-id>, PMID: <pub-id pub-id-type="pmid">21234168</pub-id></citation></ref>
<ref id="ref45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Uhm</surname> <given-names>J. P.</given-names></name> <name><surname>Lee</surname> <given-names>H. W.</given-names></name> <name><surname>Han</surname> <given-names>J. W.</given-names></name> <name><surname>Kim</surname> <given-names>D. K.</given-names></name></person-group> (<year>2021</year>). <article-title>Effect of background music and hierarchy-of-effects in watching women&#x2019;s running shoes advertisements</article-title>. <source>Int. J. Sports Mark. Spons.</source> <volume>23</volume>, <fpage>41</fpage>&#x2013;<lpage>58</lpage>. doi: <pub-id pub-id-type="doi">10.1108/IJSMS-09-2020-0159</pub-id></citation></ref>
<ref id="ref46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wakshlag</surname> <given-names>J. J.</given-names></name> <name><surname>Reitz</surname> <given-names>R.</given-names></name> <name><surname>Zillmann</surname> <given-names>D.</given-names></name></person-group> (<year>1982</year>). <article-title>Selective exposure to and acquisition of information from educational television programs as a function of appeal and tempo of background music</article-title>. <source>J. Educ. Psychol.</source> <volume>74</volume>, <fpage>666</fpage>&#x2013;<lpage>677</lpage>. doi: <pub-id pub-id-type="doi">10.1037/0022-0663.74.5.666</pub-id></citation></ref>
<ref id="ref47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Fu</surname> <given-names>X.</given-names></name></person-group> (<year>2015</year>). <article-title>The influence of background music of video games on immersion</article-title>. <source>J. Psychol. Psychother.</source> <volume>5</volume>:<fpage>4</fpage>. doi: <pub-id pub-id-type="doi">10.4172/2161-0487.1000191</pub-id></citation></ref>
</ref-list>
</back>
</article>