<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Psychol.</journal-id>
<journal-title>Frontiers in Psychology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Psychol.</abbrev-journal-title>
<issn pub-type="epub">1664-1078</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fpsyg.2022.780657</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Psychology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>The Impact of Different Types of Auditory Warnings on Working Memory</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Lei</surname> <given-names>Zhaoli</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/1485319/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ma</surname> <given-names>Shu</given-names></name>
</contrib>
<contrib contrib-type="author">
<name><surname>Li</surname> <given-names>Hongting</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/694702/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Yang</surname> <given-names>Zhen</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1487491/overview"/>
</contrib>
</contrib-group>
<aff><institution>Department of Psychology, Zhejiang Sci-Tech University</institution>, <addr-line>Hangzhou</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Jerker R&#x00F6;nnberg, Link&#x00F6;ping University, Sweden</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Nicole Dargue, Griffith University, Australia; Patrik S&#x00F6;rqvist, University of G&#x00E4;vle, Sweden</p></fn>
<corresp id="c001">&#x002A;Correspondence: Zhen Yang, <email>yangzhen@zstu.edu.cn</email></corresp>
<fn fn-type="other" id="fn004"><p>This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Psychology</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>25</day>
<month>02</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>13</volume>
<elocation-id>780657</elocation-id>
<history>
<date date-type="received">
<day>14</day>
<month>10</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>12</day>
<month>01</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Lei, Ma, Li and Yang.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Lei, Ma, Li and Yang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Auditory warnings have been shown to interfere with verbal working memory. However, the impact of different types of auditory warnings on working memory tasks requires further research. This study investigated how different kinds of auditory warnings interfered with verbal and spatial working memory. Experiment 1 tested the potential interference of auditory warnings with verbal working memory. Experiment 2 tested the potential interference of auditory warnings with spatial working memory. Both experiments used a 3 &#x00D7; 3 mixed design: auditory warning type (auditory icons, earcons, or spearcons) was a between-groups factor, and task condition (no-warning, identify-warning, or ignore-warning) was a within-groups factor. In Experiment 1, earcons and spearcons, but not auditory icons, worsened performance on the verbal serial recall task in the identify-warning condition compared with the no-warning and ignore-warning conditions. In Experiment 2, only identifying earcons worsened performance on the location recall task compared with performance without auditory warnings or when auditory warnings were ignored. Results are discussed from the perspective of working memory resource interference, and their practical implications for the selection and design of auditory warning signals are considered.</p>
</abstract>
<kwd-group>
<kwd>auditory warnings</kwd>
<kwd>auditory icons</kwd>
<kwd>earcons</kwd>
<kwd>spearcons</kwd>
<kwd>working memory</kwd>
<kwd>interference</kwd>
</kwd-group>
<contract-num rid="cn001">31900768</contract-num>
<contract-sponsor id="cn001">National Natural Science Foundation of China<named-content content-type="fundref-id">10.13039/501100001809</named-content></contract-sponsor>
<counts>
<fig-count count="8"/>
<table-count count="2"/>
<equation-count count="0"/>
<ref-count count="71"/>
<page-count count="15"/>
<word-count count="11615"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>Introduction</title>
<p>Auditory warnings include speech and non-speech sounds. Speech auditory warnings are mainly used to convey content information and are widely applied in multimedia interfaces, telephone communication systems, vehicle systems, medical settings, and assistive technology for blind or low-vision users. However, their use is limited by poor confidentiality and slow processing, as listeners must hear the full sentence to understand its meaning. By contrast, non-speech sounds are preferred when privacy is a concern or when specific speech prompts are not required (<xref ref-type="bibr" rid="B35">Isherwood and McKeown, 2017</xref>). Compared with speech auditory warnings, non-speech auditory warnings offer better confidentiality, independence from language, and wide applicability across countries and dialects.</p>
<sec id="S1.SS1">
<title>Auditory Icons, Earcons, and Spearcons (Speech-Based Earcons)</title>
<p>Common representations of non-speech interfaces mainly include auditory icons and earcons. Auditory icons are sounds used to represent their associated events or attributes in daily life (<xref ref-type="bibr" rid="B28">Gaver, 1989</xref>); that is, they convey computer operations or events by imitating familiar sounds from real-world events. They are usually relatively brief and icon-like (<xref ref-type="bibr" rid="B44">Larsson and Niemand, 2015</xref>; <xref ref-type="bibr" rid="B1">Amer and Johnson, 2018</xref>). For example, the sound of a breaking plate is used to represent deleting a file, and the sound of a dot-matrix printer or typewriter signifies a printing operation.</p>
<p>Earcons are short, abstract, non-verbal auditory signals of a musical nature used to provide information and feedback on computer operations or interactions (<xref ref-type="bibr" rid="B15">Blattner et al., 1989</xref>; <xref ref-type="bibr" rid="B17">Brewster et al., 1993</xref>; <xref ref-type="bibr" rid="B2">Amer et al., 2013</xref>; <xref ref-type="bibr" rid="B44">Larsson and Niemand, 2015</xref>). For example, the rising &#x201C;login&#x201D; melody and the descending &#x201C;logout&#x201D; melody in the Windows operating system are formed by different combinations of high and low tones. Earcons can be mapped to any object, operation, or interaction event, and they can be designed as a series of mappings that represent hierarchical structure by manipulating parameters such as timbre and pitch (<xref ref-type="bibr" rid="B27">Garzonis et al., 2009</xref>).</p>
<p>To compensate for the weaknesses of traditional non-speech auditory cues, researchers developed spearcons, a compromise between short non-speech stimuli and full speech stimuli. These signals are short, time-compressed spoken words or phrases that are sped up even to the point where they are no longer considered speech. Spearcons can directly and quickly convey their meaning and relevant information to the listener (<xref ref-type="bibr" rid="B58">Petocz et al., 2008</xref>; <xref ref-type="bibr" rid="B67">Walker et al., 2013</xref>; <xref ref-type="bibr" rid="B36">Jeon, 2015</xref>), have good learnability, and can remarkably improve the efficiency and accuracy of menu navigation search (<xref ref-type="bibr" rid="B57">Palladino and Walker, 2007</xref>; <xref ref-type="bibr" rid="B22">Dingler et al., 2008</xref>; <xref ref-type="bibr" rid="B67">Walker et al., 2013</xref>). Compared with earcons, spearcons provide a direct mapping between sounds and menu items and can flexibly cover more content domains, thus offering better flexibility and generalizability. Therefore, spearcons have been studied and applied in fields such as patient monitoring alarms and menu navigation (<xref ref-type="bibr" rid="B67">Walker et al., 2013</xref>; <xref ref-type="bibr" rid="B46">Li et al., 2017</xref>; <xref ref-type="bibr" rid="B61">Sanderson et al., 2019</xref>).</p>
</sec>
<sec id="S1.SS2">
<title>Potential Hazards of Auditory Warnings and Their Impact on Working Memory</title>
<p>Auditory warnings have become ubiquitous in daily work environments. Although they improve the efficiency of human-computer interaction, their potential hazards must be considered. First, the environments in which auditory warnings are used may demand a high degree of concentration from the operator. Despite conveying important information, these signals are not necessarily urgent. In addition, not every warning sound is important or urgent for every operator in the same environment. Given that sound signals are omnidirectional and cannot be shut out, people are easily distracted and drawn to sounds that are not relevant or meaningful at the moment, even when they try to focus on something important (<xref ref-type="bibr" rid="B11">Banbury et al., 2001</xref>; <xref ref-type="bibr" rid="B68">Watson et al., 2004</xref>). When an alarm sounds, an operator for whom it is irrelevant may be engaged in a cognitively demanding task, such as driving, intensive care, or surgery. Once attracted to the sound, the operator may lose focus on the important task, creating potential hazards. For example, an auditory warning of a relatively minor event has been found to lead to errors in the entry of coordinates in navigation or weapon delivery systems, with potentially serious consequences (<xref ref-type="bibr" rid="B11">Banbury et al., 2001</xref>). Furthermore, <xref ref-type="bibr" rid="B43">Lacherez et al. (2016)</xref> noted that auditory warning identification may compete with other cognitive processes for working memory resources and result in poor performance on other tasks. Many situations relying on auditory display assistance involve users with deficits in performing dual tasks, such as patients with Parkinson&#x2019;s disease (<xref ref-type="bibr" rid="B3">Ashburn et al., 2001</xref>) or head injuries (<xref ref-type="bibr" rid="B29">Hart et al., 2002</xref>; <xref ref-type="bibr" rid="B30">Hein et al., 2005</xref>). <xref ref-type="bibr" rid="B3">Ashburn et al. (2001)</xref> found that patients with Parkinson&#x2019;s disease who are prone to falls also perform poorly in dual tasks. Hence, auditory warnings and auxiliary systems must aim to control their specific cognitive demands (<xref ref-type="bibr" rid="B3">Ashburn et al., 2001</xref>) to avoid placing additional burdens on users or reducing system usability.</p>
<p>Cognitive tasks, which usually rely on a person&#x2019;s working memory, are involved in an increasing number of human activities. The impact of auditory warnings on an operator&#x2019;s task performance is mainly concentrated on working memory. Working memory refers to a memory system with limited capacity for the temporary processing and storage of information (<xref ref-type="bibr" rid="B6">Baddeley, 2003</xref>). It plays an important role in many complex cognitive activities. Many theoretical models have attempted to explain this memory system. One widely held model is Baddeley&#x2019;s Working Memory Model (<xref ref-type="bibr" rid="B7">Baddeley and Hitch, 1974</xref>), which suggests that working memory consists of a visuo-spatial sketchpad, a phonological loop, and a central executive. Later research added an episodic buffer, forming the four-component model of the working memory system (<xref ref-type="bibr" rid="B4">Baddeley, 2000b</xref>).</p>
<p>Auditory information may interfere with working memory in a complex task environment. For example, sounds with changing acoustic patterns interfere with serial recall performance. Irrelevant sounds (i.e., sounds that need not be attended) can also interfere with the current task (<xref ref-type="bibr" rid="B11">Banbury et al., 2001</xref>; <xref ref-type="bibr" rid="B50">Macken and Jones, 2003</xref>; <xref ref-type="bibr" rid="B34">Hughes et al., 2007</xref>; <xref ref-type="bibr" rid="B51">Macken et al., 2009</xref>), a phenomenon called the &#x201C;Irrelevant Sound Effect (ISE).&#x201D; Using the ISE paradigm, researchers found that report accuracy decreased by 30&#x2013;50% when unrelated narrative statements were played during a serial recall task (<xref ref-type="bibr" rid="B26">Ellermeier and Zimmer, 1997</xref>). An experimental analysis of the effect of external cockpit sounds on crew performance showed that, compared with quiet or ambient aircraft noise, the presence of external background sounds disrupted the memory of longitude and latitude information by up to 60% (<xref ref-type="bibr" rid="B10">Banbury and Jones, 1999</xref>). Serial recall is also hampered by various non-speech sounds, including pure tones (e.g., <xref ref-type="bibr" rid="B42">Klatte et al., 1995</xref>; <xref ref-type="bibr" rid="B55">Neath et al., 1998</xref>) and music streams (e.g., <xref ref-type="bibr" rid="B56">Nittono, 1997</xref>). Moreover, the interference of sound may be stable and difficult to habituate to (<xref ref-type="bibr" rid="B38">Jones et al., 1997</xref>; <xref ref-type="bibr" rid="B65">Tremblay and Jones, 1998</xref>); even when prolonged exposure did lead to some degree of habituation, relatively short quiet periods could drive rapid dishabituation (<xref ref-type="bibr" rid="B9">Banbury and Berry, 1998</xref>).</p>
<p>Further research revealed that the perception and identification of learned auditory warnings can also interfere with working memory. However, learned melody and rhythm auditory warnings interfered only when participants attempted to identify them, whereas learned non-word phrases interfered even when ignored (<xref ref-type="bibr" rid="B43">Lacherez et al., 2016</xref>). Given their different characteristics, we speculated that different kinds of auditory warnings may interfere with working memory differently. The alarm sounds used in previous studies were either earcons (e.g., rhythm and melody) or spoken non-word phrases; the impact of auditory icons and spearcons on working memory has not been determined. The spearcon is a hybrid auditory display between speech and non-speech (<xref ref-type="bibr" rid="B36">Jeon, 2015</xref>) and appears to have both verbal and non-verbal attributes. Researchers have found that concurrent verbal tasks had a negative impact on the identification of spearcons (<xref ref-type="bibr" rid="B21">Davidson et al., 2019</xref>), and identifying learned spearcons may interfere with speech-based working memory tasks (<xref ref-type="bibr" rid="B71">Wolters et al., 2012</xref>). However, the impact of ignoring spearcons and auditory icons on working memory has not been explored, and no research has compared the interference of different kinds of auditory warnings with working memory.</p>
</sec>
<sec id="S1.SS3">
<title>Relevant Theoretical Models: Impact of Auditory Warnings on the Different Domains of Working Memory and the Mechanism</title>
<p>It is widely accepted that the working memory system is divided into verbal and spatial working memory. Most previous studies have focused on the impact of auditory warnings on verbal working memory. However, the influence of auditory warnings on spatial working memory, and whether they interfere differently with the two domains, deserves further exploration.</p>
<p>Due to the forced-hearing nature of sound signals, a warning sound tends to attract people&#x2019;s attention. When an alarm sounds, some operators in the workplace may need to ignore it, but it may still distract them or interfere with working memory. In the cognitive behavioral tradition, studies on the mechanism by which sound interferes with working memory performance have mainly focused on how a working memory task is disrupted by unrelated sounds that change acoustically (i.e., the changing-state effect) (<xref ref-type="bibr" rid="B39">Jones et al., 1992</xref>; <xref ref-type="bibr" rid="B45">Lecompte, 1995</xref>) and on the physiological and behavioral distraction caused by an auditory event that deviates in some way from recent auditory input (i.e., the deviation effect) (<xref ref-type="bibr" rid="B19">Cowan, 1995</xref>; <xref ref-type="bibr" rid="B64">Titova and N&#x00E4;&#x00E4;t&#x00E4;nen, 2001</xref>). The duplex-mechanism account holds that sound can cause unwanted auditory distraction either by interfering specifically with the processes involved in the focal task (interference-by-process) or by diverting attention away from the focal task regardless of the type of processing involved (attentional capture) (<xref ref-type="bibr" rid="B34">Hughes et al., 2007</xref>; <xref ref-type="bibr" rid="B32">Hughes, 2014</xref>). In this view, the changing-state effect can be better explained by recourse to interference-by-process, and the deviation effect may be attributed to attentional capture. In other cases, indeed most cases, operators must identify warnings and decide mentally whether to take corresponding action. Distraction may thus be a problem both when ignoring and when identifying warnings. Whether distraction (or attention switching) and the process of identifying warnings affect ongoing tasks involving verbal and spatial working memory may be related to resource limitations and interference.</p>
<p>Multiple Resource Theory (MRT) proposes four important categorical and dichotomous dimensions that account for variance in time-sharing performance. Each dimension has two discrete &#x201C;levels,&#x201D; each defining a separate but limited resource. The four dimensions are processing stages (perception and cognition vs. selection and response), perceptual modalities (auditory vs. visual), visual channels (focal vs. ambient), and processing codes (spatial vs. verbal) (<xref ref-type="bibr" rid="B69">Wickens, 2002</xref>). MRT predicts that resource interference occurs when two tasks draw on resources in the same domain, worsening performance compared with tasks that use resources in different domains. For example, the interference between two tasks that both require verbal perception is greater than that between one task requiring spatial perception and another requiring verbal perception. Notably, regardless of whether one or two tasks are performed, MRT applies only in the region where multiple tasks impose overload, not in the residual-capacity region; for example, it can predict the size of dual-task decrements once overload has been reached (<xref ref-type="bibr" rid="B70">Wickens, 2008</xref>).</p>
<p>Similarly, the multi-resource model of working memory also involves the domain-specific assumptions about limited resources: working memory consists of multiple domain-specific subsystems, and each subsystem has its own resource pool (e.g., <xref ref-type="bibr" rid="B8">Baddeley and Logie, 1999</xref>). The nature of resources is domain-specific, that is, specific resources support verbal or visuospatial activities. Therefore, interference occurs when the two tasks involve information belonging to the same domain, and no (or minimal) interference occurs when the tasks involve information belonging to different domains. Verbal working memory is more likely to be interfered with by verbal tasks than by spatial tasks, and spatial working memory is more susceptible to interference from spatial tasks than from verbal tasks (<xref ref-type="bibr" rid="B66">Vergauwe et al., 2010</xref>).</p>
<p>In addition, another assumption about limited resources is that a general limited resource pool supports various cognitive activities (e.g., <xref ref-type="bibr" rid="B25">Egeth and Kahneman, 1975</xref>; <xref ref-type="bibr" rid="B12">Barrouillet et al., 2004</xref>). This pool of resources is often called attention. Verbal and spatial activities are assumed to compete for a common pool of domain-general limited resources, resulting in interference between the two activities (<xref ref-type="bibr" rid="B66">Vergauwe et al., 2010</xref>). <xref ref-type="bibr" rid="B10">Banbury and Jones (1999)</xref> found that speech interfered with visuospatial task performance despite being ignored. Studies have further confirmed that verbal and spatial activities interfere with each other under dual-task conditions, indicating the existence of a domain-general resource underlying both verbal and spatial processing (<xref ref-type="bibr" rid="B66">Vergauwe et al., 2010</xref>; <xref ref-type="bibr" rid="B54">Morey et al., 2018</xref>). Similarly, mobile phone use impaired driving safety regardless of whether the phone was hand-held or hands-free (<xref ref-type="bibr" rid="B63">Strayer and Johnston, 2001</xref>), suggesting that processing sound information interferes with spatial tasks at least to some extent. However, most of the concurrent tasks in previous studies were verbal (e.g., speech or text). The impact of non-speech auditory displays on spatial working memory remains to be clarified.</p>
<p>This study attempted to explore and explain the impact of auditory warnings on working memory and its mechanism based on the duplex-mechanism account and the related resource theories. Based on the review of relevant literature, how identifying and ignoring three types of auditory warnings (auditory icons, earcons, and spearcons) affects performance on a verbal serial recall task (i.e., verbal working memory), and whether there are differences among them, has not been determined. We investigated these questions in Experiment 1. We hypothesized that identifying warnings would influence recall performance more than ignoring them, and that different types of auditory warnings would worsen recall performance to different degrees. In Experiment 2, we further explored whether performance on a location recall task (i.e., spatial working memory) was similarly affected by the three types of auditory warnings. Based on the related theories, we hypothesized that auditory warnings would worsen location recall performance and that the three types of auditory warnings would affect location recall differently. Overall, this study evaluated the impact of different types of auditory warnings on the performance of verbal and spatial working memory tasks. The findings may draw attention to the potential problems of using auditory warnings in related environments, especially those that impose a high working memory load, and may caution against the possible overuse of auditory warnings in such environments. In addition, the impact of the three types of auditory warnings (auditory icons, earcons, and spearcons) on working memory was investigated to provide useful guidelines for the selection and design of auditory warning signals. Finally, the differences in the degree to which auditory warnings interfere with verbal and spatial working memory were also analyzed.</p>
</sec>
</sec>
<sec id="S2">
<title>Experiment 1</title>
<p>Experiment 1 aimed to test the impact of different types of auditory warnings on verbal working memory.</p>
<sec id="S2.SS1">
<title>Materials and Methods</title>
<sec id="S2.SS1.SSS1">
<title>Participants</title>
<p>Seventy-two participants aged 17&#x2013;25 years (<italic>M</italic> = 19.33, <italic>SD</italic> = 2.02), including 37 females and 35 males, completed the study. The sample size was determined using G&#x002A;Power with a type I error rate of &#x03B1; = 0.05, statistical power (1&#x2212;&#x03B2;) of 0.80, and a medium effect size (<italic>f</italic> = 0.25); the analysis indicated that 69 participants would be required to detect a medium effect. Because the Latin square counterbalancing of task conditions required a sample size that was a multiple of 6, a total of 72 participants were recruited. All participants were recruited from Zhejiang Sci-Tech University and were paid CNY20 (US&#x0024;3) as compensation for their time. They were randomly divided into three groups of 24, and all reported normal (or corrected-to-normal) vision and normal hearing.</p>
</sec>
<sec id="S2.SS1.SSS2">
<title>Apparatus and Materials</title>
<p>The experimental program was written and run in E-Prime 3.0 and presented on a 13.3-inch laptop monitor. All sounds were presented through a Sennheiser HD206 stereo headset, with the volume set at a level comfortable for the participants (approximately 30&#x2013;36%). Serial recall tasks were used during the testing period, and the participants were instructed to simulate the monitoring of chemical reactions (see <xref ref-type="bibr" rid="B43">Lacherez et al., 2016</xref>, for a similar recall task).</p>
<p>Auditory warnings were grouped into auditory icons, earcons, and spearcons, with four warnings of each type. The warnings were between 903 and 1,078 milliseconds long. The auditory icon material was taken from the <ext-link ext-link-type="uri" xlink:href="https://ear0.com">ear0.com</ext-link> website and used after cropping, noise reduction, and fade-in/fade-out processing. The four auditory icons were as follows: the sound of water being poured into a gradually filling cup represented the concentration imbalance warning; the sound of a ship horn represented the volume imbalance warning; the sound of boiling water represented the temperature imbalance warning; and the sound of bursting glass represented the pressure imbalance warning.</p>
<p>Rhythm alarms from a previous study (<xref ref-type="bibr" rid="B43">Lacherez et al., 2016</xref>) were used as the earcons. The rhythm alarms were composed of four tones of the same note value, and the four alarms varied in length and in arrangement. These rhythm alarms were cropped and compressed without changing the pitch, using GoldWave 6.41, to keep their length within the range of 903&#x2013;1,078 milliseconds.</p>
<p>The spearcons in this study were generated by compressing text-to-speech (TTS) phrases. The TTS items were linearly time-compressed to between 30 and 40% of their original length while maintaining the original pitch. Eighteen volunteers were recruited to complete a questionnaire on the semantic recognizability of the spearcons, and 88.89% of them judged that the final spearcons could not be recognized as specific speech. We therefore regarded these spearcons as satisfying the definition that &#x201C;the spoken phrases are sped up even to the point where they are no longer considered speech.&#x201D;</p>
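For readers who want to prototype similar stimuli, the pitch-preserving time-compression step can be sketched in code. The snippet below is our own minimal overlap-add (OLA) sketch, not the authors' procedure (they used GoldWave 6.41); the frame and hop sizes are illustrative assumptions, and plain OLA is cruder than the algorithms production audio tools use.

```python
# Minimal overlap-add (OLA) time compression sketch. Frames are read from
# the input faster than they are written to the output, so duration shrinks
# while the content of each frame (and hence its pitch) is unchanged.
import numpy as np

def ola_compress(signal, rate, frame=1024, hop_out=256):
    """Shorten `signal` to roughly 1/rate of its length without resampling
    (and thus without shifting pitch). rate > 1 compresses."""
    hop_in = int(round(hop_out * rate))      # analysis hop > synthesis hop
    window = np.hanning(frame)
    n_frames = max(1, (len(signal) - frame) // hop_in + 1)
    out = np.zeros(n_frames * hop_out + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        chunk = signal[i * hop_in : i * hop_in + frame]
        if len(chunk) < frame:
            break
        out[i * hop_out : i * hop_out + frame] += chunk * window
        norm[i * hop_out : i * hop_out + frame] += window
    norm[norm < 1e-8] = 1.0                  # avoid divide-by-zero at edges
    return out / norm

# Example: compress a 1-second tone toward ~35% of its length, mirroring
# the 30-40% compression applied to the TTS originals.
sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
spearcon = ola_compress(tone, rate=1 / 0.35)
```

Because the synthesis hop is about 35% of the analysis hop, the output duration approaches 35% of the input for long signals (short signals retain a fixed window tail, so the ratio is slightly larger). Unsynchronized OLA can introduce phasing artifacts that phase-vocoder or WSOLA methods avoid.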
</sec>
<sec id="S2.SS1.SSS3">
<title>Design</title>
<p>In Experiment 1, a 3 (auditory warning type) &#x00D7; 3 (task condition) mixed design was used. Participants were randomly assigned to one of three experimental groups: the auditory icon, earcon, or spearcon group. Participants in each group heard only their assigned auditory warning type in the identify-warning and ignore-warning conditions. They completed the serial recall task once in the no-warning condition, once in the identify-warning condition, and once in the ignore-warning condition. The dependent variables were serial recall accuracy (a response was recorded as correct only when all eight digits were recalled in the order presented) and warning identification accuracy (available only in the identify-warning condition).</p>
<sec id="S2.SS1.SSS3.Px1">
<title>No-Warning Condition</title>
<p>In this condition, the participants performed the serial recall task without any auditory warnings and were shown standard instructions on the screen prior to the test. After two practice trials, they began the formal recall task and completed 24 serial recall trials. Each trial consisted of eight digits presented in a random order without repetition, with each digit appearing on screen for 800 milliseconds. The participants were required to remember all eight digits in the order of appearance. At the end of the eight-digit presentation, a blank screen was shown for 2 s before the response box appeared, and the participants then recalled and typed their response into the box. A response was scored as correct only when all eight digits were reproduced in the order presented. After entering all the digits, the participants were cued for the beginning of the next trial.</p>
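The strict all-or-none scoring rule can be expressed compactly. The helper below is a hypothetical illustration (the function name and data layout are ours, not from the authors' E-Prime program):

```python
# Hypothetical illustration of the strict serial-recall scoring rule:
# a trial counts as correct only when all eight digits are reproduced
# in the exact order presented.
def score_serial_recall(presented, recalled):
    """Return 1 for an exact, in-order match; otherwise 0."""
    return int(list(presented) == list(recalled))

presented = [8, 3, 1, 4, 7, 5, 9, 2]   # eight digits, no repetition
assert score_serial_recall(presented, [8, 3, 1, 4, 7, 5, 9, 2]) == 1
# A single transposition makes the whole trial incorrect:
assert score_serial_recall(presented, [8, 3, 1, 4, 7, 5, 2, 9]) == 0
```

Under this rule, partial credit is never given: recalling seven of eight digits correctly scores the same as recalling none.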
</sec>
<sec id="S2.SS1.SSS3.Px2">
<title>Identify-Warning Condition</title>
<p>In this condition, the participants conducted two practice trials and then completed 24 serial recall trials after being presented with standard instructions. The eight digits were presented in the same way as in the no-warning condition; however, one of the auditory warnings learned during the learning period was interspersed once in each trial. The warning appeared randomly between the first and second, the third and fourth, the fifth and sixth, or the seventh and eighth digits. An identical 2-s blank screen followed the eight-digit presentation. After the participants entered their serial recall response, an identification screen appeared on which they identified the auditory warning by pressing the corresponding key, as they had done in the learning period, and then proceeded to the next trial.</p>
</sec>
<sec id="S2.SS1.SSS3.Px3">
<title>Ignore-Warning Condition</title>
<p>The ignore-warning condition was identical to the identify-warning condition, except that the participants were told to ignore the presented auditory warning and were not required to identify or respond to it at the end.</p>
</sec>
</sec>
<sec id="S2.SS1.SSS4">
<title>Procedure</title>
<p>Prior to the experiment, relevant demographic information was collected from the participants, and the structure of the experiment was briefly described. Participants read the necessary instructions presented on the screen and completed the experiment individually in a quiet laboratory. They first completed a learning period to master a set of four warnings (auditory icons, earcons, or spearcons, depending on group); these warnings were then presented while the participants were engaged in the serial recall task during the formal testing period.</p>
<sec id="S2.SS1.SSS4.Px1">
<title>Learning Period</title>
<p>Participants in each group underwent a learning period to learn an association between each auditory warning and a response. Four distinct warnings were used for each group. Participants were instructed to monitor a chemical reaction, and each auditory warning represented an imbalance in either the concentration, volume, temperature, or pressure of the reaction. This design aimed to avoid any association with existing warnings familiar to the participants and to create a generic semantic association with an arbitrary quantity (<xref ref-type="bibr" rid="B43">Lacherez et al., 2016</xref>). In the initial phase of the learning period, each auditory warning and its related parameter were presented together three times for 1,200 ms each. Participants then underwent a testing phase in which the warnings were presented individually in random order. They were asked to identify each warning by pressing the key assigned to the parameter it represented (four stickers on the keys indicated the parameters: F for concentration, J for temperature, V for volume, and N for pressure). Participants were given feedback on the accuracy of their responses. Each auditory warning was presented three times, and participants were considered to have learned the warnings, ending the testing phase, once they answered all 12 trials correctly; otherwise, they repeated the testing until they reached 100% accuracy.</p>
</sec>
<sec id="S2.SS1.SSS4.Px2">
<title>Testing Period</title>
<p>Each participant completed the digit serial recall task once in every condition. The order of the three task conditions was counterbalanced according to a Latin square design. Participants completed two practice trials before each task to ensure that they understood its procedure. At the conclusion of each block of 24 trials, the participants were invited to take a short break (approximately 2 min) before proceeding to the next phase.</p>
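<p>The counterbalancing of three conditions can be illustrated with a Latin-square-style construction; for an odd number of conditions, a balanced scheme uses the cyclic orders plus their mirror images, which is why group sizes are multiples of 6. The sketch below is illustrative and is not the authors&#x2019; exact assignment procedure.</p>

```python
conditions = ["no-warning", "identify-warning", "ignore-warning"]

def latin_square_orders(conds):
    """Cyclic Latin square: each condition appears once in every row
    and once in every serial position across the first n rows."""
    n = len(conds)
    rows = [[conds[(i + j) % n] for j in range(n)] for i in range(n)]
    # For an odd number of conditions, adding the mirror-image rows
    # also balances first-order carryover effects, giving 6 orders
    # for three conditions.
    return rows + [list(reversed(r)) for r in rows]

orders = latin_square_orders(conditions)
# Participant k (0-indexed) would receive order k % 6.
```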
<p>The duration of the entire experiment was approximately 40 min.</p>
</sec>
</sec>
</sec>
<sec id="S2.SS2">
<title>Results</title>
<p>Mauchly&#x2019;s test of sphericity was examined, and the Greenhouse-Geisser correction was used where necessary. Descriptive statistics of the mean serial recall accuracy of auditory icon, earcon, and spearcon groups in different task conditions are shown in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p><italic>Experiment 1:</italic> Mean serial recall accuracy (%) in the no-warning, identify-warning, and ignore-warning conditions for the auditory icon, earcon and spearcon groups.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="center">No-warning <italic>M</italic> (<italic>SD</italic>)</td>
<td valign="top" align="center">Identify-warning <italic>M</italic> (<italic>SD</italic>)</td>
<td valign="top" align="center">Ignore-warning <italic>M</italic> (<italic>SD</italic>)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Auditory icon</td>
<td valign="top" align="center">86.46 (12.96)</td>
<td valign="top" align="center">87.15 (14.84)</td>
<td valign="top" align="center">90.80 (7.37)</td>
</tr>
<tr>
<td valign="top" align="left">Earcon</td>
<td valign="top" align="center">88.20 (8.66)</td>
<td valign="top" align="center">69.97 (22.66)</td>
<td valign="top" align="center">88.19 (10.55)</td>
</tr>
<tr>
<td valign="top" align="left">Spearcon</td>
<td valign="top" align="center">85.42 (12.59)</td>
<td valign="top" align="center">66.84 (21.01)</td>
<td valign="top" align="center">84.20 (12.41)</td>
</tr>
<tr>
<td valign="top" align="left">Total</td>
<td valign="top" align="center">86.69 (11.46)</td>
<td valign="top" align="center">74.65 (21.48)</td>
<td valign="top" align="center">87.73 (10.53)</td>
</tr>
</tbody>
</table></table-wrap>
<p>A one-way ANOVA was performed on the no-warning baseline scores to confirm comparability among the three groups. The results confirmed that there was no significant difference among the three warning-type groups, <italic>F</italic>(2, 69) = 0.354, <italic>p</italic> = 0.703, partial &#x03B7;<sup>2</sup> = 0.01.</p>
<p>A 3 &#x00D7; 3 mixed-design factorial ANOVA showed that both task condition and warning type significantly affected serial recall accuracy. There was a significant main effect of task condition, <italic>F</italic>(2, 138) = 34.23, <italic>p</italic> &#x003C; 0.001, partial &#x03B7;<sup>2</sup> = 0.332, and a significant main effect of warning type, <italic>F</italic>(2, 69) = 3.922, <italic>p</italic> = 0.024, partial &#x03B7;<sup>2</sup> = 0.102. In addition to the main effects, a significant two-way interaction was found between task condition and warning type (see <xref ref-type="fig" rid="F1">Figure 1</xref>), <italic>F</italic>(4, 138) = 7.092, <italic>p</italic> &#x003C; 0.001, partial &#x03B7;<sup>2</sup> = 0.171.</p>
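<p>As a consistency check, partial eta squared can be recovered from each reported <italic>F</italic> statistic and its degrees of freedom via partial &#x03B7;<sup>2</sup> = <italic>F</italic>&#x00B7;df1/(<italic>F</italic>&#x00B7;df1 + df2); the sketch below reproduces the effect sizes reported above.</p>

```python
def partial_eta_squared(f, df1, df2):
    """partial eta^2 = SS_effect / (SS_effect + SS_error)
    = F * df1 / (F * df1 + df2) for a reported F statistic."""
    return f * df1 / (f * df1 + df2)

# Effects reported for the 3 x 3 mixed-design ANOVA above
eta_condition = partial_eta_squared(34.23, 2, 138)    # ~0.332
eta_warning = partial_eta_squared(3.922, 2, 69)       # ~0.102
eta_interaction = partial_eta_squared(7.092, 4, 138)  # ~0.171
```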
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p><italic>Experiment 1:</italic> Mean serial recall accuracy in the no-warning, identify-warning, and ignore-warning conditions for the auditory icon, earcon and spearcon groups.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-780657-g001.tif"/>
</fig>
<p>Simple effects were further analyzed using <italic>post hoc</italic> tests with Bonferroni correction. The results revealed that for the auditory icon group, no significant difference in mean serial recall accuracy was found among the three conditions. For the earcon and spearcon groups, the mean serial recall accuracy in the identify-warning condition was significantly lower than that in the other two conditions (<italic>p</italic> &#x003C; 0.001), but no significant difference was observed between the no-warning and ignore-warning conditions. In the identify-warning condition, the mean serial recall accuracy of the auditory icon group was significantly higher than that of the earcon and spearcon groups (<italic>p</italic> &#x003C; 0.05). In the ignore-warning condition, there was no significant difference in serial recall accuracy among the three groups.</p>
<p>The one-way ANOVA results revealed significant differences in the mean identification accuracy of the three groups, <italic>F</italic>(2, 69) = 25.311, <italic>p</italic> &#x003C; 0.001, partial &#x03B7;<sup>2</sup> = 0.423. The mean identification accuracy for auditory icons was significantly higher than that for earcons and spearcons (<italic>p</italic> &#x003C; 0.05), and that for spearcons was significantly higher than that for earcons (<italic>p</italic> &#x003C; 0.001), as shown in <xref ref-type="fig" rid="F2">Figure 2</xref>.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p><italic>Experiment 1:</italic> Mean warning identification accuracy in the identify-warning condition for the auditory icon, earcon and spearcon groups.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-780657-g002.tif"/>
</fig>
<p>Spearman correlation analyses were conducted between learnability (indexed by the number of practice rounds needed to reach 100% accuracy, where fewer rounds indicated better learnability) and both the mean serial recall accuracy and the warning identification accuracy, to confirm whether the results of this experiment were related to learnability. No significant correlation was found between learnability and serial recall accuracy when the auditory warning was to be identified (<italic>r</italic> = 0.214, <italic>p</italic> &#x003E; 0.05) or ignored (<italic>r</italic> = &#x2212;0.015, <italic>p</italic> &#x003E; 0.05). However, a significant correlation was observed between learnability and mean warning identification accuracy (<italic>r</italic> = 0.451, <italic>p</italic> &#x003C; 0.001).</p>
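<p>The learnability analysis uses Spearman&#x2019;s rank correlation. A minimal tie-free sketch of the statistic follows; the data are hypothetical and perfectly monotone purely for illustration, and do not reproduce the correlations reported above.</p>

```python
def spearman_rho(x, y):
    """Spearman's rank correlation for tie-free data, via the classic
    formula rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(x)
    rx = {v: i + 1 for i, v in enumerate(sorted(x))}  # ranks of x
    ry = {v: i + 1 for i, v in enumerate(sorted(y))}  # ranks of y
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical data: practice rounds to criterion (fewer = better
# learnability) vs. warning identification accuracy (%)
practice_rounds = [1, 2, 3, 4, 5, 6, 7, 8]
identification_acc = [98, 96, 95, 92, 90, 85, 80, 74]

rho = spearman_rho(practice_rounds, identification_acc)  # -1.0
```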
</sec>
<sec id="S2.SS3">
<title>Discussion</title>
<p>The results of Experiment 1 showed that in the identify-warning condition, earcon and spearcon identification worsened performance on the serial recall task. These results were consistent with previous studies: identifying spearcons may interfere with verbal working memory tasks (<xref ref-type="bibr" rid="B71">Wolters et al., 2012</xref>), and the perception and identification of learned earcons (rhythms) interfere with working memory (<xref ref-type="bibr" rid="B43">Lacherez et al., 2016</xref>). However, auditory icon identification did not interfere with performance on the serial recall task. The identification of auditory icons may require less working memory than the identification of earcons and spearcons. Participants may have identified the auditory icon warnings within the residual capacity of their available resources, preserving high accuracy on the serial recall task (<xref ref-type="bibr" rid="B70">Wickens, 2008</xref>).</p>
<p>Auditory icon warnings had the highest warning identification accuracy among the three groups. This finding may be related to the use of sounds from real, everyday events in auditory icons, which makes these signals strongly representative and easy to learn. Moreover, our present study found that warning identification accuracy was related to the number of practice rounds (fewer rounds indicated higher accuracy). Therefore, we speculated that warning identification accuracy may be related to learnability. However, an impact on identification performance from resource competition with the concurrent verbal serial recall task could not be ruled out. Furthermore, whether spatial working memory is similarly affected when the participants identify or ignore the three types of auditory warnings remained unclear. These issues are addressed in Experiment 2.</p>
</sec>
</sec>
<sec id="S3">
<title>Experiment 2</title>
<p>Experiment 2 aimed to test the impact of different types of auditory warnings on spatial working memory.</p>
<sec id="S3.SS1">
<title>Materials and Methods</title>
<sec id="S3.SS1.SSS1">
<title>Participants</title>
<p>Seventy-two participants aged 17&#x2013;25 years (<italic>M</italic> = 20.01, <italic>SD</italic> = 2.20), including 29 females and 43 males, completed this study. The number of participants was determined using G&#x002A;Power, with parameter settings identical to those of Experiment 1. Considering that the Latin square design of task conditions required a number of participants that was a multiple of 6, a total of 72 participants were recruited. All participants were recruited from Zhejiang Sci-Tech University and were paid CNY20 (US&#x0024;3) as compensation for their time. Participants were randomly divided into three groups of 24 members, and all individuals reported normal (or corrected-to-normal) vision and normal hearing.</p>
</sec>
<sec id="S3.SS1.SSS2">
<title>Apparatus and Materials</title>
<p>The apparatus and materials in Experiment 2 were generally similar to those in Experiment 1, except that the verbal tasks were replaced with spatial tasks.</p>
</sec>
<sec id="S3.SS1.SSS3">
<title>Design</title>
<p>A 3 (auditory warning type) &#x00D7; 3 (task condition) mixed design was used in Experiment 2. The participants in each of the auditory icon, earcon, and spearcon groups heard only their assigned auditory warning type. They completed the red square location recall task (see <xref ref-type="bibr" rid="B66">Vergauwe et al., 2010</xref>, for a similar recall task) once in every condition (no-warning, identify-warning, and ignore-warning). The dependent variables were red square location recall accuracy (a response was recorded as correct when all five red square locations were recalled in the order presented) and warning identification accuracy (available only in the identify-warning condition).</p>
<sec id="S3.SS1.SSS3.Px1">
<title>No-Warning Condition</title>
<p>In this condition, participants performed the red square location recall task without any auditory warnings. They were shown standard instructions on the screen before beginning the testing. Participants conducted two practice trials and then completed 24 location recall trials. Each trial consisted of a 4 &#x00D7; 4 matrix. Five red squares appeared randomly at different positions in the matrix for 800 ms each (see <xref ref-type="fig" rid="F3">Figure 3</xref>). Participants were required to remember all five positions of the red squares in order. At the conclusion of the five-square presentation, a blank screen was shown for 2 s before the response box appeared. The participants then recalled the positions and selected the locations in an empty 4 &#x00D7; 4 matrix by clicking the mouse. After clicking the &#x201C;submit&#x201D; button, they were cued for the beginning of the next trial.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p><italic>Experiment 2:</italic> The demonstration of red squares in location recall task (one of the random orders).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-780657-g003.tif"/>
</fig>
</sec>
<sec id="S3.SS1.SSS3.Px2">
<title>Identify-Warning Condition</title>
<p>In this condition, participants conducted two practice trials and then completed 24 location recall trials after being presented with standard instructions. The five red squares were presented in a 4 &#x00D7; 4 matrix in an identical way to the no-warning condition; however, one of the auditory warnings that the participants had learned in the learning period was played once during the presentation. The auditory warning appeared randomly between the first and second matrices, the second and third matrices, the third and fourth matrices, or the fourth and fifth matrices. A blank screen also appeared for 2 s at the end of the five-square presentation. After the location recall task was completed and submitted, an identification screen appeared in which the participants were instructed to identify the auditory warning by pressing a specific key on the keyboard and then proceed to the next trial.</p>
</sec>
<sec id="S3.SS1.SSS3.Px3">
<title>Ignore-Warning Condition</title>
<p>The ignore-warning condition was identical to the identify-warning condition except that the participants were told to ignore the presented auditory warning and were not required to identify and respond to it at the end.</p>
</sec>
</sec>
<sec id="S3.SS1.SSS4">
<title>Procedure</title>
<p>The procedure of Experiment 2 was similar to that of Experiment 1. Each participant completed the learning of their named auditory warning type before starting the formal testing stage. The duration of the entire experiment was approximately 35 min.</p>
</sec>
</sec>
<sec id="S3.SS2">
<title>Results</title>
<p>Mauchly&#x2019;s test of sphericity was examined, and the Greenhouse-Geisser correction was used where necessary. Descriptive statistics of the mean red square location recall accuracy of each group in different task conditions are shown in <xref ref-type="table" rid="T2">Table 2</xref>.</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>Mean location recall accuracy (%) in the no-warning, identify-warning, and ignore-warning conditions for the auditory icon, earcon and spearcon groups.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left"></td>
<td valign="top" align="center">No-warning <italic>M</italic> (<italic>SD</italic>)</td>
<td valign="top" align="center">Identify-warning <italic>M</italic> (<italic>SD</italic>)</td>
<td valign="top" align="center">Ignore-warning <italic>M</italic> (<italic>SD</italic>)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Auditory icon</td>
<td valign="top" align="center">73.78 (14.56)</td>
<td valign="top" align="center">72.40 (17.28)</td>
<td valign="top" align="center">73.96 (19.08)</td>
</tr>
<tr>
<td valign="top" align="left">Earcon</td>
<td valign="top" align="center">78.30 (15.73)</td>
<td valign="top" align="center">62.33 (22.20)</td>
<td valign="top" align="center">76.56 (15.92)</td>
</tr>
<tr>
<td valign="top" align="left">Spearcon</td>
<td valign="top" align="center">70.31 (17.26)</td>
<td valign="top" align="center">64.58 (18.06)</td>
<td valign="top" align="center">71.35 (16.95)</td>
</tr>
<tr>
<td valign="top" align="left">Total</td>
<td valign="top" align="center">74.13 (16.01)</td>
<td valign="top" align="center">66.44 (19.52)</td>
<td valign="top" align="center">73.96 (17.25)</td>
</tr>
</tbody>
</table></table-wrap>
<p>A 3 &#x00D7; 3 mixed-design factorial ANOVA examined the effects of task condition and auditory warning type on red square location recall accuracy. The results revealed a main effect of task condition, <italic>F</italic>(2, 138) = 11.631, <italic>p</italic> &#x003C; 0.001, partial &#x03B7;<sup>2</sup> = 0.144. In addition to the main effect, a significant two-way interaction was found between task condition and auditory warning type (see <xref ref-type="fig" rid="F4">Figure 4</xref>), <italic>F</italic>(4, 138) = 3.302, <italic>p</italic> = 0.016, partial &#x03B7;<sup>2</sup> = 0.087.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p><italic>Experiment 2:</italic> Mean location recall accuracy in the no-warning, identify-warning, and ignore-warning conditions for the auditory icon, earcon, and spearcon groups.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-780657-g004.tif"/>
</fig>
<p>Simple effects were further analyzed using <italic>post hoc</italic> tests with Bonferroni correction. The results indicated that for the auditory icon and spearcon groups, the mean location recall accuracy did not differ across the three task conditions. However, for the earcon group, the mean location recall accuracy in the identify-warning condition was significantly lower than that in the other two conditions (<italic>p</italic> &#x003C; 0.001). In the identify-warning condition, the mean location recall accuracy followed the order auditory icon group &#x003E; spearcon group &#x003E; earcon group; however, the differences were not significant. In the ignore-warning condition, the mean location recall accuracy of the three groups did not differ significantly either.</p>
<p>One-way ANOVA results indicated significant differences in the mean identification accuracy of the three groups, <italic>F</italic> (2, 69) = 35.701, <italic>p</italic> &#x003C; 0.001, partial &#x03B7;<sup>2</sup> = 0.509. The identification accuracy of earcons was significantly lower than those of auditory icons and spearcons (<italic>p</italic> &#x003C; 0.001), but no significant difference was observed between auditory icons and spearcons as shown in <xref ref-type="fig" rid="F5">Figure 5</xref>.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p><italic>Experiment 2:</italic> Mean warning identification accuracy in the identify-warning condition for the auditory icon, earcon, and spearcon groups.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-780657-g005.tif"/>
</fig>
<p>Spearman correlation analysis was conducted to further confirm whether the results were related to the learnability of auditory warnings. No significant correlation was found between learnability and mean location recall accuracy when the warnings were identified (<italic>r</italic> = 0.141, <italic>p</italic> &#x003E; 0.05) or ignored (<italic>r</italic> = &#x2212;0.082, <italic>p</italic> &#x003E; 0.05). However, the mean warning identification accuracy was found to be significantly correlated with the learnability (<italic>r</italic> = 0.559, <italic>p</italic> &#x003C; 0.001), which was consistent with the finding in Experiment 1.</p>
</sec>
<sec id="S3.SS3">
<title>Discussion</title>
<p>The results of Experiment 2 showed that auditory icon identification did not significantly interfere with the location recall task and had the highest identification accuracy among the three types of warnings. By contrast, earcon identification significantly interfered with the location recall task and had the lowest identification accuracy. These results were consistent with Experiment 1. The identification of earcons may require more working memory than that of auditory icons and spearcons. Participants may not have been able to identify the earcons within the residual capacity of their available resources, leading to competition with the location recall task for limited resources and resulting in low accuracy for the latter (<xref ref-type="bibr" rid="B70">Wickens, 2008</xref>).</p>
<p>The results of warning identification accuracy in Experiment 2 were consistent with those in Experiment 1: identification accuracy was highest for auditory icons, followed by spearcons and then earcons. The results of both experiments were combined for a rough comparison. The findings showed that the overall performance of warning identification in the concurrent verbal task was worse than that in the concurrent spatial task, especially for the spearcons (see <xref ref-type="fig" rid="F6">Figure 6</xref>). In the concurrent verbal task, the identification accuracy of auditory icons was significantly higher than that of spearcons, whereas no significant difference between auditory icons and spearcons was found in the concurrent spatial task. The correlation between learnability and warning identification performance for verbal tasks was lower than that for spatial tasks. This indicated that, compared with the concurrent spatial task, the accuracy of warning identification under the concurrent verbal task was weakly affected by learnability but greatly influenced by the verbal task itself. A recent study found that concurrent verbal tasks reduce the ability of participants to identify spearcons (<xref ref-type="bibr" rid="B21">Davidson et al., 2019</xref>), which is consistent with the present results. Furthermore, the impact of identifying warnings on working memory was roughly analyzed by comparing the difference in recall accuracy between the ignore-warning and identify-warning conditions. Although identifying auditory warnings (e.g., earcons) may interfere with spatial tasks, warnings consistently had a greater impact on verbal tasks, especially spearcons (see <xref ref-type="fig" rid="F7">Figure 7</xref>). Therefore, we further speculated that warning identification had a greater impact on the overall performance of the verbal working memory task than on that of the spatial working memory task. Meanwhile, the verbal working memory task had a greater impact on the overall performance of warning identification than the spatial working memory task did. The largest variation for spearcons might be related to their speech features. Early studies suggested that non-speech sounds did not interfere with working memory (<xref ref-type="bibr" rid="B60">Salam&#x00E9; and Baddeley, 1987</xref>), though subsequent work revealed that this depended on other properties of the auditory information.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption><p>Mean warning identification accuracy for the auditory icon, earcon, and spearcon groups in the concurrent verbal and spatial working memory task.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-780657-g006.tif"/>
</fig>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption><p>Mean recall accuracy difference (<italic>M</italic><sub>acc for ignore-warning</sub> &#x2212; <italic>M</italic><sub>acc for identify-warning</sub>) in the ignore-warning and identify-warning conditions for the auditory icon, earcon, and spearcon groups in the verbal and spatial working memory task.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fpsyg-13-780657-g007.tif"/>
</fig>
<p>However, it is important to acknowledge that some of the comparisons and discussions here were made across two experiments. Further verification is needed, given possible failures of random assignment and the influence of uncontrolled variables.</p>
</sec>
</sec>
<sec id="S4">
<title>General Discussion</title>
<sec id="S4.SS1">
<title>Different Impacts of Three Types of Auditory Warnings on Working Memory</title>
<p>The impact of different types of auditory warnings on verbal and spatial working memory was examined. The current findings showed that identifying auditory icon warnings did not interfere with verbal or spatial working memory; however, identifying earcon warnings worsened participants&#x2019; performance on both verbal and spatial recall tasks, whereas identifying spearcon warnings affected only the verbal recall task. These results showed that identifying different types of auditory warnings has different effects on verbal and spatial working memory.</p>
<p>Auditory icon warnings did not interfere with verbal or spatial working memory, whether the warnings were ignored or identified. Several usability studies indicated that auditory icons have better intuitiveness, learnability, and memorability than earcons (<xref ref-type="bibr" rid="B27">Garzonis et al., 2009</xref>; <xref ref-type="bibr" rid="B35">Isherwood and McKeown, 2017</xref>; <xref ref-type="bibr" rid="B1">Amer and Johnson, 2018</xref>). Our current work also indicated that auditory icons had good learnability. Participants may have completed the required series of actions (switching attention, analyzing the acoustical input, and then mapping the sound onto a linguistic token) with fewer working memory resources. Hence, the auditory icons did not interfere with performance on the working memory tasks, because the identification of auditory icons may have placed less demand on the participants&#x2019; working memory than that of earcons and spearcons. The participants may have identified the auditory icon warnings within the residual capacity of their available resources, thus preserving high accuracy on the recall tasks (<xref ref-type="bibr" rid="B70">Wickens, 2008</xref>).</p>
<p>Identifying earcon warnings significantly affected performance on both verbal and spatial working memory tasks. Earcons are synthetic sounds and are not directly related to the objects, events, or concepts they represent (<xref ref-type="bibr" rid="B16">Bonebright and Nees, 2007</xref>; <xref ref-type="bibr" rid="B49">Ludovico and Presti, 2016</xref>). The abstractness caused by the lack of semantic connection between earcons and the events they represent makes them more difficult for users to understand and remember. The participants&#x2019; efforts to remember the earcon warnings, or to map the earcon sounds to the warning semantics, took up additional resources, resulting in interference with working memory. Furthermore, the observed effects of earcon identification on spatial working memory tasks support the existence of a domain-general resource shared by verbal and visuospatial mental processes (<xref ref-type="bibr" rid="B25">Egeth and Kahneman, 1975</xref>; <xref ref-type="bibr" rid="B12">Barrouillet et al., 2004</xref>). The present study verified that verbal and visuospatial activities share a common domain-general resource pool to a certain extent. The generality of these results was strengthened by other findings. For example, visual recall performance (memory for colored disks) was interfered with by simultaneous non-visual activities, such as tone-pitch recognition (<xref ref-type="bibr" rid="B62">Stevanovski and Jolicoeur, 2007</xref>). Increasing the cognitive load of concurrent spatial processing tasks reduced the performance of verbal recall tasks (<xref ref-type="bibr" rid="B59">Portrat et al., 2009</xref>; <xref ref-type="bibr" rid="B66">Vergauwe et al., 2010</xref>).</p>
<p>Identifying spearcon warnings interfered only with verbal working memory. Given that accuracy in the no-warning and ignore-warning conditions was worse for the spatial recall task than for the verbal recall task, the overall difficulty of, or the resources required to complete, the location recall task might be higher than those of the serial recall task. However, the results showed that identifying spearcon warnings affected performance only on the serial recall task, not on the location recall task. This finding indicated that the general resources occupied by spearcon identification were insufficient to seriously impair spatial task performance, but spearcon identification might have caused significant domain-specific interference with verbal working memory. Given that spearcons are a hybrid auditory display between speech and non-speech (<xref ref-type="bibr" rid="B36">Jeon, 2015</xref>), this phenomenon may be related to their speech characteristics. These results are consistent with the prediction of the working memory model (<xref ref-type="bibr" rid="B8">Baddeley and Logie, 1999</xref>): when combined with verbal activities, performance on verbal memory tasks is worse than that on non-verbal memory tasks (<xref ref-type="bibr" rid="B48">Logie et al., 1990</xref>; <xref ref-type="bibr" rid="B53">Meiser and Klauer, 1999</xref>; <xref ref-type="bibr" rid="B13">Bayliss et al., 2003</xref>). One possible reason is that, in the articulatory control process of the phonological loop, the participants might have &#x201C;converted&#x201D; the written digits into speech codes through subvocalization, allowing them to access the phonological store during the presentation of the digit stimuli and thus causing interference.</p>
<p>The above findings are also consistent with multiple-resource theory (<xref ref-type="bibr" rid="B69">Wickens, 2002</xref>). The perception modalities of the concurrent tasks differed. The spatial location recall task used the visual modality. In the serial recall task, the participants might have memorized the digits by articulatory rehearsal, employing both the visual and auditory modalities. Thus, the serial recall task competed with the identification of auditory warnings for the same limited pool of auditory modality resources (<xref ref-type="bibr" rid="B69">Wickens, 2002</xref>). Furthermore, the serial recall task required an additional stage of phonological processing to convert the visually presented text into speech codes, thereby increasing the verbal working memory load. Meanwhile, the identification of auditory warnings, which involved mentally mapping sounds to specific linguistic tags, also imposed a verbal working memory load. Hence, the two tasks competed for the same resources, reducing the resources available for simultaneous processing of the serial recall task. The domain-specific interference of identifying auditory warnings with verbal working memory may also be explained by other research findings. <xref ref-type="bibr" rid="B20">Cowan and Morey (2007)</xref> found that the domain-specific effect of working memory was more significant in the encoding stage than in the maintenance stage: verbal tasks possibly interfered with both the encoding and maintenance of verbal information, whereas visuospatial tasks interfered only with maintenance. Additionally, verbal information has been found to be maintained by two independent mechanisms, attentional refreshing and articulatory rehearsal (<xref ref-type="bibr" rid="B31">Hudjetz and Oberauer, 2007</xref>; <xref ref-type="bibr" rid="B18">Camos et al., 2009</xref>). <xref ref-type="bibr" rid="B66">Vergauwe et al. 
(2010)</xref> suggested that this phenomenon occurred because both mechanisms were interfered with by verbal processing, whereas the spatial task interfered only with attentional refreshing. Nevertheless, further research is needed for verification.</p>
</sec>
<sec id="S4.SS2">
<title>Are Auditory Warnings Ignorable?</title>
<p>An interesting finding here was that none of the three types of warnings interfered with the recall tasks when participants ignored them. This finding seems inconsistent with the irrelevant sound effect (<xref ref-type="bibr" rid="B50">Macken and Jones, 2003</xref>; <xref ref-type="bibr" rid="B34">Hughes et al., 2007</xref>; <xref ref-type="bibr" rid="B51">Macken et al., 2009</xref>). However, recent research has shown that verbal recall tasks are disrupted only by irrelevant speech, not by the presence of music or noise, which may be explained by a functional dissociation between working memory for phonological and non-phonological auditory items (<xref ref-type="bibr" rid="B41">Kattner and Meinhardt, 2020</xref>). Researchers have also found that melodic and rhythmic alarms interfered with verbal tasks only when the alarm had to be identified (<xref ref-type="bibr" rid="B43">Lacherez et al., 2016</xref>). This may be related to attentional capture. Participants have to divide (or switch) their attention between the two tasks when identifying an auditory warning, which, together with the process of identifying the warning, occupies limited general resources and leads to interference. Conversely, no such processes may occur when the warning is ignored, so recall task performance remains unaffected. <xref ref-type="bibr" rid="B43">Lacherez et al. (2016)</xref> found that when the auditory warning was ignored, melodic and rhythmic warnings did not affect the recall task, whereas a spoken non-word phrase warning did. These findings indicate that the effect in the ignore-warning condition may depend on the warning type. Therefore, the current results cannot be attributed entirely to attentional capture and may need to be interpreted from a deeper resource perspective. In fact, attention may be regarded as a general-purpose pool of limited resources (<xref ref-type="bibr" rid="B66">Vergauwe et al., 2010</xref>). 
Thus, the impact of a warning on working memory may ultimately come down to resource occupancy and interference; once the occupied resources reach a threshold, the performance of concurrent tasks may be affected (<xref ref-type="bibr" rid="B70">Wickens, 2008</xref>). Ignoring the three types of warnings did not affect working memory, which may be because the resources occupied by the act of ignoring the warnings did not reach the interference threshold. Additionally, previous studies have found that music containing many rhythm or pitch variations is more disturbing than music with many legato passages (<xref ref-type="bibr" rid="B42">Klatte et al., 1995</xref>). This finding is consistent with the changing-state hypothesis (<xref ref-type="bibr" rid="B39">Jones et al., 1992</xref>; <xref ref-type="bibr" rid="B45">Lecompte, 1995</xref>), which holds that speech impairs working memory task performance mainly because the irrelevant sound stream changes from one stimulus entity to the next. <xref ref-type="bibr" rid="B11">Banbury et al. (2001)</xref> suggested that acoustic change is the main cause of interference, modulated by the perceptual organization of sound (e.g., streaming), and that repeated sounds, tones, or speech do not cause interference (<xref ref-type="bibr" rid="B11">Banbury et al., 2001</xref>). The duplex-mechanism account holds that changing-state stimuli do not capture attention; rather, the pre-attentive and obligatory processing of the order of the changing stimuli (warning sounds) conflicts with the serial rehearsal of the to-be-remembered stimuli (serial recall tasks) (e.g., <xref ref-type="bibr" rid="B40">Jones et al., 1996</xref>; <xref ref-type="bibr" rid="B33">Hughes and Jones, 2005</xref>). The current results might also be related to the length of the warning sound materials: in this work, the sounds lasted approximately 1 s, which was relatively short and carried little acoustic variability. 
Consequently, the order cues generated by the changing-state stimuli, which conflict with order processing in the concurrent task, were weak. A longer sound can contain more acoustic changes. Future research should use warning sounds of different lengths to determine whether the impact of irrelevant warning sounds (i.e., ignored warnings) on working memory depends on sound duration.</p>
</sec>
<sec id="S4.SS3">
<title>Practical Implications</title>
<p>The results of the present study provide important preliminary evidence that the perception and identification of learned auditory warnings (earcons and spearcons) interfere with working memory, at least in laboratory tasks. However, we must recognize that the attention-capturing property of warnings, and its potential to interfere with ongoing processing, is the flip side of what is often held to be auditory warnings&#x2019; greatest asset (<xref ref-type="bibr" rid="B47">Ljungberg and Parmentier, 2012</xref>). One might think that people want auditory warnings to break in on other tasks. Nevertheless, our research demonstrates that this may carry additional costs for individuals who need to hold information in memory. Listeners may not realize that listening to and identifying warnings may cause them to forget or overlook details that could be important to their current work. Therefore, work environments that use multiple auditory warnings should take into account the mental load required for the execution of duties and how it might be affected by such distractions. The findings suggest that designers should pay attention not only to the effectiveness of auditory warnings but also to their potential costs, especially given the possible overuse of auditory warnings in high-workload environments.</p>
<p>The three types of auditory warnings did not interfere with working memory when participants ignored them. This finding appears encouraging because it suggests that learned warning sounds have at most a negligible effect when the listener is instructed to ignore them; only the effort of identification causes interference. Either familiarity with the warnings does not lead to involuntary or obligatory processing, or the resources occupied while ignoring the warnings do not reach the threshold for interfering with concurrent tasks. Operators engaged in a high-priority task may thus be able to prioritize their work over warning identification when they are willing to disregard the auditory warnings. Alternatively, operators can set the priority of their work higher than that of identifying auditory warnings, thereby reducing the potential problems of auditory warnings to some extent.</p>
<p>The identification of auditory icon warnings did not interfere with either verbal or spatial working memory, and auditory icons had the highest identification accuracy among the three types of warnings. Extensive work on the development of new auditory warnings for the medical device safety standard IEC 60601-1-8 has demonstrated in many different respects (audibility, learnability, localizability, etc.) that auditory icons work well as auditory warnings in simulated clinical settings (<xref ref-type="bibr" rid="B23">Edworthy et al., 2017</xref>, <xref ref-type="bibr" rid="B24">2018</xref>; <xref ref-type="bibr" rid="B14">Bennett et al., 2019</xref>). Anesthesia providers identified auditory icon alarms more accurately and quickly than standard earcon alarms and reported lower fatigue and task load when using auditory icon alarms (<xref ref-type="bibr" rid="B52">McNeer et al., 2018</xref>). Therefore, considering the potential impact of identifying auditory warnings on working memory, auditory icons may be a good choice for auditory warnings.</p>
<p>It is worth noting that identifying earcon warnings produced the largest interference with working memory, and that earcons had the lowest identification accuracy among the three types of warnings. The relationship between an earcon and its meaning is not based on environmental experience; users need to learn how earcons relate to events or concepts (<xref ref-type="bibr" rid="B1">Amer and Johnson, 2018</xref>). Studies have found that earcons are inferior to spearcons in learnability and identification accuracy (<xref ref-type="bibr" rid="B57">Palladino and Walker, 2007</xref>; <xref ref-type="bibr" rid="B22">Dingler et al., 2008</xref>; <xref ref-type="bibr" rid="B67">Walker et al., 2013</xref>) and inferior to auditory icons in intuitiveness, learnability, and memorability (<xref ref-type="bibr" rid="B27">Garzonis et al., 2009</xref>; <xref ref-type="bibr" rid="B35">Isherwood and McKeown, 2017</xref>; <xref ref-type="bibr" rid="B1">Amer and Johnson, 2018</xref>). Therefore, it might be necessary to avoid using earcons as auditory warning signals, especially in high-load environments.</p>
<p>Identifying spearcon warnings interfered with verbal working memory, but not with spatial working memory. Therefore, spearcons may be an appropriate choice for warning signals in environments involving spatial working memory tasks. However, given the domain-specific interference of identifying spearcons on verbal working memory tasks, it may be necessary to avoid using spearcons as warning signals in environments involving verbal tasks. Although many other factors must be considered, the current results provide useful guidelines for the selection and design of auditory warnings.</p>
</sec>
<sec id="S4.SS4">
<title>Limitations and Further Research</title>
<p>Many processes are involved in warning identification. Before identifying the presented warning, participants need to capture the entire warning sequence in working memory and possibly need to mentally replay the warning to consolidate it in memory. Some of the issues raised by <xref ref-type="bibr" rid="B43">Lacherez et al. (2016)</xref> also apply to the present study. The observed interference could be caused by auditory or phonological interference, or by analyzing the acoustic input (decoding the sound) and mapping the sound to linguistic tags (warning names). During response selection, participants were asked to identify the warning and press a specific key; this response might have affected recall performance on the next trial (<xref ref-type="bibr" rid="B43">Lacherez et al., 2016</xref>). According to the working memory model (<xref ref-type="bibr" rid="B5">Baddeley, 2000a</xref>), interference occurs at the encoding stage, that is, during item presentation rather than during maintenance. Sounds affect information storage in the phonological store: visual stimuli (memorization items) are recoded into phonological form and held in the phonological store, and auditory phonemes that enter the phonological store automatically become confused with those converted from the visual stimuli, resulting in interference. In contrast, the object-oriented episodic record model (<xref ref-type="bibr" rid="B37">Jones and Macken, 1995</xref>) emphasizes that sounds weaken serial recall performance by disrupting order information and serial rehearsal, and it holds that sounds can cause interference during both the presentation and the maintenance of memorization items. Therefore, many questions remain concerning the precise locus of interference. 
In our ongoing work, we are considering these theoretical hypotheses and systematically manipulating the timing of warning sounds within the relevant research paradigms to further elucidate the locus of interference.</p>
<p>Novel sounds (often called &#x201C;deviant sounds&#x201D;) capture attention, and capturing attention is a defining property of auditory warnings. In the present study, participants heard a set of four learned warning sounds in each condition, presented in random order. A learned auditory warning, once associated with a piece of information, may be more difficult to ignore than a seemingly random pattern; however, participants&#x2019; repeated exposure to the warning sounds in the current study may have reduced their element of surprise, making the sounds easier to ignore and reducing the deviation effect (<xref ref-type="bibr" rid="B19">Cowan, 1995</xref>; <xref ref-type="bibr" rid="B64">Titova and N&#x00E4;&#x00E4;t&#x00E4;nen, 2001</xref>), thereby leaving performance unaffected. Nevertheless, whether the warning sounds in our experiments were indeed easy to ignore requires further verification. Future research could take the role of attentional capture into account and use deviant sounds as auditory warnings to preserve the surprising attribute of warning sounds, thereby further verifying whether various types of warning sounds are ignorable.</p>
<p>The present research has some other limitations. First, the comparison focused on the impact of three types of auditory warnings (auditory icons, earcons, and spearcons) on working memory. In practical applications, auditory warnings may consist of semantically meaningful speech, which may cause some higher-order interference but might be easily recognized. Spearcons are a compromise between non-speech stimuli and full-speech stimuli (<xref ref-type="bibr" rid="B71">Wolters et al., 2012</xref>) and may require more processing than full-speech auditory warnings, which are more easily recognized. To comprehensively clarify the effects of various auditory warnings on working memory, future work should employ a warning-identification task using semantically related full-speech warnings to determine whether speech warnings interfere with verbal and spatial recall tasks in the same way as spearcons.</p>
<p>Second, it is important to acknowledge that we discuss the difference in the interference of auditory warnings for verbal and spatial working memory using data obtained across experiments. As mentioned in the discussion of Experiment 2, further verification is required due to the possible failure of random assignment and the influence of some uncontrollable factors. The current results are insufficient to conclude that auditory warnings create more interference with verbal working memory than with spatial working memory. The impact of auditory warnings on verbal vs. spatial recall tasks should be compared within one experiment in future work.</p>
<p>Third, the participants recruited in this study were college students. Given that working memory ability is related to age and that operators in working environments span various age groups, the results should be further verified in other age groups in future studies.</p>
</sec>
</sec>
<sec id="S5" sec-type="conclusion">
<title>Conclusion</title>
<p>The purpose of this research was to investigate the impact of different types of auditory warnings on the performance of recall tasks involving verbal and spatial working memory. The results indicated that identifying auditory icon warnings did not interfere with either verbal or spatial recall tasks; identifying earcon warnings worsened participants&#x2019; performance on both verbal and spatial recall tasks; and identifying spearcon warnings affected verbal recall tasks only. These findings raise concerns about the potential problems of using auditory warnings in working environments and provide useful guidelines for the selection and design of auditory warning signals. Further research is required to address the limitations of the present study: to elucidate the locus of interference, to incorporate attention-capturing attributes and additional warning types so that warning sounds are more ecologically valid, and to extend the comparative investigation to a more comprehensive scope.</p>
</sec>
<sec id="S6" sec-type="data-availability">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="S7">
<title>Ethics Statement</title>
<p>The studies involving human participants were reviewed and approved by the Institutional Review Boards of Zhejiang Sci-Tech University. The participants provided their written informed consent to participate in this study.</p>
</sec>
<sec id="S8">
<title>Author Contributions</title>
<p>ZY and SM contributed to conception and design of the experiments. ZL recruited the participants and conducted the experiments. ZL and ZY performed the statistical analysis and wrote the manuscript. SM, ZY, and HL supervised the whole study. All authors contributed to manuscript revision, discussion, and approved the submitted version.</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="pudiscl1" sec-type="disclaimer">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<sec id="S9" sec-type="funding-information">
<title>Funding</title>
<p>This work was supported by the National Natural Science Foundation of China under Grants 31900768, T2192930, and T2192031, and by the Youth Innovation Special Project of the Basic Scientific Research Foundation of Zhejiang Sci-Tech University under Grant 2020Q046.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amer</surname> <given-names>T. S.</given-names></name> <name><surname>Johnson</surname> <given-names>T. L.</given-names></name></person-group> (<year>2018</year>). <article-title>Earcons versus auditory icons in communicating computing events: learning and user preference.</article-title> <source><italic>Int. J. Technol. Hum. Interact.</italic></source> <volume>14</volume> <fpage>95</fpage>&#x2013;<lpage>109</lpage>. <pub-id pub-id-type="doi">10.4018/IJTHI.2018100106</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Amer</surname> <given-names>T. S.</given-names></name> <name><surname>Johnson</surname> <given-names>T. L.</given-names></name> <name><surname>Maris</surname> <given-names>J. M. B.</given-names></name> <name><surname>Neal</surname> <given-names>G. L.</given-names></name></person-group> (<year>2013</year>). <article-title>The perceived hazard of earcons in information technology exception messages: the effect of musical dissonance.</article-title> <source><italic>Interact. Comput.</italic></source> <volume>25</volume> <fpage>48</fpage>&#x2013;<lpage>59</lpage>. <pub-id pub-id-type="doi">10.1093/iwc/iws005</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ashburn</surname> <given-names>A.</given-names></name> <name><surname>Stack</surname> <given-names>E.</given-names></name> <name><surname>Pickering</surname> <given-names>R. M.</given-names></name> <name><surname>Ward</surname> <given-names>C. D.</given-names></name></person-group> (<year>2001</year>). <article-title>A community-dwelling sample of people with Parkinson&#x2019;s disease: characteristics of fallers and non-fallers.</article-title> <source><italic>Age Ageing</italic></source> <volume>30</volume> <fpage>47</fpage>&#x2013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1093/ageing/30.1.47</pub-id> <pub-id pub-id-type="pmid">11322672</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baddeley</surname> <given-names>A.</given-names></name></person-group> (<year>2000b</year>). <article-title>The phonological loop and the irrelevant speech effect: some comments on Neath (2000).</article-title> <source><italic>Psychon. Bull. Rev.</italic></source> <volume>7</volume> <fpage>544</fpage>&#x2013;<lpage>549</lpage>. <pub-id pub-id-type="doi">10.3758/BF03214369</pub-id> <pub-id pub-id-type="pmid">11082863</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baddeley</surname> <given-names>A.</given-names></name></person-group> (<year>2000a</year>). <article-title>The episodic buffer: a new component of working memory?</article-title> <source><italic>Trends Cogn. Sci.</italic></source> <volume>4</volume> <fpage>417</fpage>&#x2013;<lpage>423</lpage>. <pub-id pub-id-type="doi">10.1016/S1364-6613(00)01538-2</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baddeley</surname> <given-names>A.</given-names></name></person-group> (<year>2003</year>). <article-title>Working memory: looking back and looking forward.</article-title> <source><italic>Nat. Rev. Neurosci.</italic></source> <volume>4</volume> <fpage>829</fpage>&#x2013;<lpage>839</lpage>. <pub-id pub-id-type="doi">10.1038/nrn1201</pub-id> <pub-id pub-id-type="pmid">14523382</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baddeley</surname> <given-names>A. D.</given-names></name> <name><surname>Hitch</surname> <given-names>G.</given-names></name></person-group> (<year>1974</year>). &#x201C;<article-title>Working memory</article-title>,&#x201D; in <source><italic>The Psychology of Learning and Motivation</italic></source>, <role>ed.</role> <person-group person-group-type="editor"><name><surname>Bower</surname> <given-names>G. H.</given-names></name></person-group> (<publisher-loc>London</publisher-loc>: <publisher-name>Academic Press</publisher-name>), <fpage>47</fpage>&#x2013;<lpage>89</lpage>.</citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Baddeley</surname> <given-names>A. D.</given-names></name> <name><surname>Logie</surname> <given-names>R. H.</given-names></name></person-group> (<year>1999</year>). &#x201C;<article-title>Working memory: the multiple-component model</article-title>,&#x201D; in <source><italic>Models of Working Memory: Mechanisms of Active Maintenance and Executive Control</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Miyake</surname> <given-names>A.</given-names></name> <name><surname>Shah</surname> <given-names>P.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>), <fpage>28</fpage>&#x2013;<lpage>61</lpage>.</citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Banbury</surname> <given-names>S.</given-names></name> <name><surname>Berry</surname> <given-names>D. C.</given-names></name></person-group> (<year>1998</year>). <article-title>Disruption of office-related tasks by speech and office noise.</article-title> <source><italic>Br. J. Psychol.</italic></source> <volume>89</volume> <fpage>499</fpage>&#x2013;<lpage>517</lpage>. <pub-id pub-id-type="doi">10.1111/j.2044-8295.1998.tb02699.x</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Banbury</surname> <given-names>S. P.</given-names></name> <name><surname>Jones</surname> <given-names>D. M.</given-names></name></person-group> (<year>1999</year>). &#x201C;<article-title>&#x2018;Irrelevant sound effect&#x2019;: the effects of ex-traneous sounds on aircrew performance</article-title>,&#x201D; in <source><italic>Transportation Systems, Medical Ergonomics and Training</italic></source>, <role>ed.</role> <person-group person-group-type="editor"><name><surname>Harris</surname> <given-names>D.</given-names></name></person-group> (<publisher-loc>Aldershot</publisher-loc>: <publisher-name>Ashgate Press</publisher-name>), <fpage>199</fpage>&#x2013;<lpage>206</lpage>.</citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Banbury</surname> <given-names>S. R.</given-names></name> <name><surname>Macken</surname> <given-names>W. J.</given-names></name> <name><surname>Tremblay</surname> <given-names>S.</given-names></name></person-group> (<year>2001</year>). <article-title>Auditory distraction and short-term memory: phenomena and practical implications.</article-title> <source><italic>Hum. Factors</italic></source> <volume>43</volume> <fpage>12</fpage>&#x2013;<lpage>29</lpage>. <pub-id pub-id-type="doi">10.1518/001872001775992462</pub-id> <pub-id pub-id-type="pmid">11474757</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barrouillet</surname> <given-names>P.</given-names></name> <name><surname>Bernardin</surname> <given-names>S.</given-names></name> <name><surname>Camos</surname> <given-names>V.</given-names></name></person-group> (<year>2004</year>). <article-title>Time constraints and resource sharing in adults&#x2019; working memory spans.</article-title> <source><italic>J. Exp. Psychol. Gen.</italic></source> <volume>133</volume> <fpage>83</fpage>&#x2013;<lpage>100</lpage>. <pub-id pub-id-type="doi">10.1037/0096-3445.133.1.83</pub-id> <pub-id pub-id-type="pmid">14979753</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bayliss</surname> <given-names>D. M.</given-names></name> <name><surname>Jarrold</surname> <given-names>C.</given-names></name> <name><surname>Gunn</surname> <given-names>D. M.</given-names></name> <name><surname>Baddeley</surname> <given-names>A. D.</given-names></name></person-group> (<year>2003</year>). <article-title>The complexities of complex span: explaining individual differences in working memory in children and adults.</article-title> <source><italic>J. Exp. Psychol. Gen.</italic></source> <volume>132</volume> <fpage>71</fpage>&#x2013;<lpage>92</lpage>. <pub-id pub-id-type="doi">10.1037/0096-3445.132.1.71</pub-id> <pub-id pub-id-type="pmid">12656298</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bennett</surname> <given-names>C.</given-names></name> <name><surname>Dudaryk</surname> <given-names>R.</given-names></name> <name><surname>Crenshaw</surname> <given-names>N.</given-names></name> <name><surname>Edworthy</surname> <given-names>J.</given-names></name> <name><surname>McNeer</surname> <given-names>R.</given-names></name></person-group> (<year>2019</year>). <article-title>Recommendation of new medical alarms based on audibility, identifiability, and detectability in a randomized, simulation-based study.</article-title> <source><italic>Crit. Care Med.</italic></source> <volume>47</volume> <fpage>1050</fpage>&#x2013;<lpage>1057</lpage>. <pub-id pub-id-type="doi">10.1097/CCM.0000000000003802</pub-id> <pub-id pub-id-type="pmid">31135498</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blattner</surname> <given-names>M.</given-names></name> <name><surname>Sumikawa</surname> <given-names>D.</given-names></name> <name><surname>Greenberg</surname> <given-names>R.</given-names></name></person-group> (<year>1989</year>). <article-title>Earcons and icons: their structure and common design principles.</article-title> <source><italic>ACM Sigchi Bull.</italic></source> <volume>21</volume> <fpage>123</fpage>&#x2013;<lpage>124</lpage>. <pub-id pub-id-type="doi">10.1145/67880.1046599</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bonebright</surname> <given-names>T. L.</given-names></name> <name><surname>Nees</surname> <given-names>M. A.</given-names></name></person-group> (<year>2007</year>). &#x201C;<article-title>Memory for auditory icons and earcons with localization cues</article-title>,&#x201D; in <source><italic>Proceedings of the 13th International Conference on Auditory Display</italic></source>, <publisher-loc>Montr&#x00E9;al, QC</publisher-loc>, <fpage>419</fpage>&#x2013;<lpage>422</lpage>.</citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brewster</surname> <given-names>S.</given-names></name> <name><surname>Wright</surname> <given-names>P.</given-names></name> <name><surname>Edwards</surname> <given-names>A.</given-names></name></person-group> (<year>1993</year>). &#x201C;<article-title>An evaluation of earcons for use in auditory human-computer interfaces</article-title>,&#x201D; in <source><italic>Proceedings of the CHI&#x2019;93</italic></source>, <publisher-loc>Amsterdam</publisher-loc>, <fpage>222</fpage>&#x2013;<lpage>227</lpage>. <pub-id pub-id-type="doi">10.1145/169059.169179</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Camos</surname> <given-names>V.</given-names></name> <name><surname>Lagner</surname> <given-names>P.</given-names></name> <name><surname>Barrouillet</surname> <given-names>P.</given-names></name></person-group> (<year>2009</year>). <article-title>Two maintenance mechanisms of verbal information in working memory.</article-title> <source><italic>J. Mem. Lang.</italic></source> <volume>61</volume> <fpage>457</fpage>&#x2013;<lpage>469</lpage>. <pub-id pub-id-type="doi">10.1016/j.jml.2009.06.002</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cowan</surname> <given-names>N.</given-names></name></person-group> (<year>1995</year>). <source><italic>Attention and Memory: An Integrated Framework.</italic></source> <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Oxford University Press</publisher-name>.</citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cowan</surname> <given-names>N.</given-names></name> <name><surname>Morey</surname> <given-names>C.</given-names></name></person-group> (<year>2007</year>). <article-title>How can dual-task working memory retention limits be investigated?</article-title> <source><italic>Psychol. Sci.</italic></source> <volume>18</volume>, <fpage>686</fpage>&#x2013;<lpage>688</lpage>. <pub-id pub-id-type="doi">10.1111/j.1467-9280.2007.01960.x</pub-id> <pub-id pub-id-type="pmid">17680938</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davidson</surname> <given-names>T.</given-names></name> <name><surname>Ryu</surname> <given-names>Y. J.</given-names></name> <name><surname>Brecknell</surname> <given-names>B.</given-names></name> <name><surname>Loeb</surname> <given-names>R.</given-names></name> <name><surname>Sanderson</surname> <given-names>P.</given-names></name></person-group> (<year>2019</year>). <article-title>The impact of concurrent linguistic tasks on participants&#x2019; identification of spearcons.</article-title> <source><italic>Appl. Ergon.</italic></source> <volume>81</volume>:<issue>102895</issue>. <pub-id pub-id-type="doi">10.1016/j.apergo.2019.102895</pub-id> <pub-id pub-id-type="pmid">31422275</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dingler</surname> <given-names>T.</given-names></name> <name><surname>Lindsay</surname> <given-names>J.</given-names></name> <name><surname>Walker</surname> <given-names>B. N.</given-names></name></person-group> (<year>2008</year>). &#x201C;<article-title>Learnability of sound cues for environmental features: auditory icons, earcons, spearcons, and speech</article-title>,&#x201D; in <source><italic>Proceedings of the 14th International Conference on Auditory Display</italic></source>, <publisher-loc>Paris</publisher-loc>.</citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Edworthy</surname> <given-names>J.</given-names></name> <name><surname>Reid</surname> <given-names>S.</given-names></name> <name><surname>McDougall</surname> <given-names>S.</given-names></name> <name><surname>Edworthy</surname> <given-names>J.</given-names></name> <name><surname>Hall</surname> <given-names>S.</given-names></name> <name><surname>Bennett</surname> <given-names>D.</given-names></name><etal/></person-group> (<year>2017</year>). <article-title>The recognizability and localizability of auditory alarms: setting global medical device standards.</article-title> <source><italic>Hum. Factors</italic></source> <volume>59</volume> <fpage>1108</fpage>&#x2013;<lpage>1127</lpage>. <pub-id pub-id-type="doi">10.1177/0018720817712004</pub-id> <pub-id pub-id-type="pmid">28574734</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Edworthy</surname> <given-names>J.</given-names></name> <name><surname>Reid</surname> <given-names>S.</given-names></name> <name><surname>Peel</surname> <given-names>K.</given-names></name> <name><surname>Lock</surname> <given-names>S.</given-names></name> <name><surname>Williams</surname> <given-names>J.</given-names></name> <name><surname>Newbury</surname> <given-names>C.</given-names></name><etal/></person-group> (<year>2018</year>). <article-title>The impact of workload on the ability to localize audible alarms.</article-title> <source><italic>Appl. Ergon.</italic></source> <volume>72</volume> <fpage>88</fpage>&#x2013;<lpage>93</lpage>. <pub-id pub-id-type="doi">10.1016/j.apergo.2018.05.006</pub-id> <pub-id pub-id-type="pmid">29885730</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Egeth</surname> <given-names>H.</given-names></name> <name><surname>Kahneman</surname> <given-names>D.</given-names></name></person-group> (<year>1975</year>). <article-title>Attention and effort.</article-title> <source><italic>Am. J. Psychol.</italic></source> <volume>88</volume>:<issue>339</issue>. <pub-id pub-id-type="doi">10.2307/1421603</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ellermeier</surname> <given-names>W.</given-names></name> <name><surname>Zimmer</surname> <given-names>K.</given-names></name></person-group> (<year>1997</year>). <article-title>Individual differences in susceptibility to the &#x201C;irrelevant speech effect&#x201D;.</article-title> <source><italic>J. Acoust. Soc. Am.</italic></source> <volume>102</volume> <fpage>2191</fpage>&#x2013;<lpage>2199</lpage>. <pub-id pub-id-type="doi">10.1121/1.419596</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Garzonis</surname> <given-names>S.</given-names></name> <name><surname>Jones</surname> <given-names>S.</given-names></name> <name><surname>Jay</surname> <given-names>T.</given-names></name> <name><surname>O&#x2019;Neill</surname> <given-names>E.</given-names></name></person-group> (<year>2009</year>). &#x201C;<article-title>Auditory icon and earcon mobile service notifications: intuitiveness, learnability, memorability and preference</article-title>,&#x201D; in <source><italic>Proceedings of the 27th International Conference on Human Factors in Computing System, Boston, MA</italic></source>, <role>eds</role> <person-group person-group-type="editor"><name><surname>Greenberg</surname> <given-names>S.</given-names></name> <name><surname>Hudson</surname> <given-names>S. E.</given-names></name> <name><surname>Hinckley</surname> <given-names>K.</given-names></name> <name><surname>Ringel Morris</surname> <given-names>M.</given-names></name> <name><surname>Olsen</surname> <given-names>D. R.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>). <pub-id pub-id-type="doi">10.1145/1518701.1518932</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gaver</surname> <given-names>W. W.</given-names></name></person-group> (<year>1989</year>). <article-title>The SonicFinder: an interface that uses auditory icons.</article-title> <source><italic>Hum. Comput. Interact.</italic></source> <volume>4</volume> <fpage>67</fpage>&#x2013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1207/s15327051hci0401_3</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hart</surname> <given-names>T.</given-names></name> <name><surname>Hawkey</surname> <given-names>K.</given-names></name> <name><surname>Whyte</surname> <given-names>J.</given-names></name></person-group> (<year>2002</year>). <article-title>Use of a portable voice organizer to remember therapy goals in traumatic brain injury rehabilitation: a within-subjects trial.</article-title> <source><italic>J. Head Trauma Rehabil.</italic></source> <volume>17</volume> <fpage>556</fpage>&#x2013;<lpage>570</lpage>. <pub-id pub-id-type="doi">10.1097/00001199-200212000-00007</pub-id> <pub-id pub-id-type="pmid">12802246</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hein</surname> <given-names>G.</given-names></name> <name><surname>Schubert</surname> <given-names>T.</given-names></name> <name><surname>Cramon</surname> <given-names>D.</given-names></name></person-group> (<year>2005</year>). <article-title>Closed head injury and perceptual processing in dual-task situations.</article-title> <source><italic>Exp. Brain Res.</italic></source> <volume>160</volume> <fpage>223</fpage>&#x2013;<lpage>234</lpage>. <pub-id pub-id-type="doi">10.1007/s00221-004-2006-y</pub-id> <pub-id pub-id-type="pmid">15338087</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hudjetz</surname> <given-names>A.</given-names></name> <name><surname>Oberauer</surname> <given-names>K.</given-names></name></person-group> (<year>2007</year>). <article-title>The effects of processing time and processing rate on forgetting in working memory: testing four models of the complex span paradigm.</article-title> <source><italic>Mem. Cognit.</italic></source> <volume>35</volume> <fpage>1675</fpage>&#x2013;<lpage>1684</lpage>. <pub-id pub-id-type="doi">10.3758/BF03193501</pub-id> <pub-id pub-id-type="pmid">18062545</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hughes</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>Auditory distraction: a duplex-mechanism account.</article-title> <source><italic>PsyCh J.</italic></source> <volume>3</volume> <fpage>30</fpage>&#x2013;<lpage>41</lpage>. <pub-id pub-id-type="doi">10.1002/pchj.44</pub-id> <pub-id pub-id-type="pmid">26271638</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hughes</surname> <given-names>R. W.</given-names></name> <name><surname>Jones</surname> <given-names>D. M.</given-names></name></person-group> (<year>2005</year>). <article-title>The impact of order incongruence between a task-irrelevant auditory sequence and a task-relevant visual sequence.</article-title> <source><italic>J. Exp. Psychol. Hum. Percept. Perform.</italic></source> <volume>31</volume> <fpage>316</fpage>&#x2013;<lpage>327</lpage>. <pub-id pub-id-type="doi">10.1037/0096-1523.31.2.316</pub-id> <pub-id pub-id-type="pmid">15826233</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hughes</surname> <given-names>R. W.</given-names></name> <name><surname>Vachon</surname> <given-names>F.</given-names></name> <name><surname>Jones</surname> <given-names>D. M.</given-names></name></person-group> (<year>2007</year>). <article-title>Disruption of short-term memory by changing and deviant sounds: support for a duplex-mechanism account of auditory distraction.</article-title> <source><italic>J. Exp. Psychol. Learn. Mem. Cogn.</italic></source> <volume>33</volume> <fpage>1050</fpage>&#x2013;<lpage>1061</lpage>. <pub-id pub-id-type="doi">10.1037/0278-7393.33.6.1050</pub-id> <pub-id pub-id-type="pmid">17983312</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Isherwood</surname> <given-names>S. J.</given-names></name> <name><surname>McKeown</surname> <given-names>D.</given-names></name></person-group> (<year>2017</year>). <article-title>Semantic congruency of auditory warnings.</article-title> <source><italic>Ergonomics</italic></source> <volume>60</volume> <fpage>1014</fpage>&#x2013;<lpage>1023</lpage>. <pub-id pub-id-type="doi">10.1080/00140139.2016.1237677</pub-id> <pub-id pub-id-type="pmid">27650392</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jeon</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). &#x201C;<article-title>An exploration of semiotics of new auditory displays: a comparative analysis with visual displays</article-title>,&#x201D; in <source><italic>Proceedings of the 21st International Conference on Auditory Display (ICAD-2015)</italic></source>, <publisher-loc>Graz</publisher-loc>.</citation></ref>
<ref id="B37"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>D.</given-names></name> <name><surname>Macken</surname> <given-names>W.</given-names></name></person-group> (<year>1995</year>). <article-title>Phonological similarity in the irrelevant speech effect: within- or between-stream similarity?</article-title> <source><italic>J. Exp. Psychol. Learn. Mem. Cogn.</italic></source> <volume>21</volume> <fpage>103</fpage>&#x2013;<lpage>115</lpage>. <pub-id pub-id-type="doi">10.1037/0278-7393.21.1.103</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>D.</given-names></name> <name><surname>Macken</surname> <given-names>W.</given-names></name> <name><surname>Mosdell</surname> <given-names>N.</given-names></name></person-group> (<year>1997</year>). <article-title>The role of habituation in the disruption of recall performance by irrelevant sound.</article-title> <source><italic>Br. J. Psychol.</italic></source> <volume>88</volume> <fpage>549</fpage>&#x2013;<lpage>564</lpage>. <pub-id pub-id-type="doi">10.1111/j.2044-8295.1997.tb02657.x</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>D.</given-names></name> <name><surname>Madden</surname> <given-names>C.</given-names></name> <name><surname>Miles</surname> <given-names>C.</given-names></name></person-group> (<year>1992</year>). <article-title>Privileged access by irrelevant speech to short-term memory: the role of changing state.</article-title> <source><italic>Q. J. Exp. Psychol. A</italic></source> <volume>44</volume> <fpage>645</fpage>&#x2013;<lpage>669</lpage>. <pub-id pub-id-type="doi">10.1080/14640749208401304</pub-id> <pub-id pub-id-type="pmid">1615168</pub-id></citation></ref>
<ref id="B40"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jones</surname> <given-names>D. M.</given-names></name> <name><surname>Beaman</surname> <given-names>C. P.</given-names></name> <name><surname>Macken</surname> <given-names>W. J.</given-names></name></person-group> (<year>1996</year>). &#x201C;<article-title>The object-oriented episodic record model</article-title>,&#x201D; in <source><italic>Models of Short-Term Memory</italic></source>, <role>ed.</role> <person-group person-group-type="editor"><name><surname>Gathercole</surname> <given-names>S. E.</given-names></name></person-group> (<publisher-loc>London</publisher-loc>: <publisher-name>Psychology Press</publisher-name>), <fpage>209</fpage>&#x2013;<lpage>238</lpage>.</citation></ref>
<ref id="B41"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kattner</surname> <given-names>F.</given-names></name> <name><surname>Meinhardt</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>Dissociating the disruptive effects of irrelevant music and speech on serial recall of tonal and verbal sequences.</article-title> <source><italic>Front. Psychol.</italic></source> <volume>11</volume>:<issue>346</issue>. <pub-id pub-id-type="doi">10.3389/fpsyg.2020.00346</pub-id> <pub-id pub-id-type="pmid">32194487</pub-id></citation></ref>
<ref id="B42"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Klatte</surname> <given-names>M.</given-names></name> <name><surname>Kilcher</surname> <given-names>H.</given-names></name> <name><surname>Hellbr&#x00FC;ck</surname> <given-names>J.</given-names></name></person-group> (<year>1995</year>). <article-title>Wirkungen der zeitlichen Struktur von Hintergrundschall auf das Arbeitsged&#x00E4;chtnis und ihre theoretischen und praktischen Implikationen.</article-title> <source><italic>Z. Exp. Psychol.</italic></source> <volume>42</volume> <fpage>517</fpage>&#x2013;<lpage>544</lpage>.</citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lacherez</surname> <given-names>P.</given-names></name> <name><surname>Donaldson</surname> <given-names>L.</given-names></name> <name><surname>Burt</surname> <given-names>J. S.</given-names></name></person-group> (<year>2016</year>). <article-title>Do learned alarm sounds interfere with working memory?</article-title> <source><italic>Hum. Factors</italic></source> <volume>58</volume> <fpage>1044</fpage>&#x2013;<lpage>1051</lpage>. <pub-id pub-id-type="doi">10.1177/0018720816662733</pub-id> <pub-id pub-id-type="pmid">27576466</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Larsson</surname> <given-names>P.</given-names></name> <name><surname>Niemand</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>Using sound to reduce visual distraction from in-vehicle human-machine interfaces.</article-title> <source><italic>Traffic Inj. Prev.</italic></source> <volume>16</volume> <fpage>S25</fpage>&#x2013;<lpage>S30</lpage>. <pub-id pub-id-type="doi">10.1080/15389588.2015.1020111</pub-id> <pub-id pub-id-type="pmid">26027972</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>LeCompte</surname> <given-names>D. C.</given-names></name></person-group> (<year>1995</year>). <article-title>An irrelevant speech effect with repeated and continuous background speech.</article-title> <source><italic>Psychon. Bull. Rev.</italic></source> <volume>2</volume> <fpage>391</fpage>&#x2013;<lpage>397</lpage>. <pub-id pub-id-type="doi">10.3758/BF03210978</pub-id> <pub-id pub-id-type="pmid">24203721</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>S. Y. W.</given-names></name> <name><surname>Tang</surname> <given-names>T.-L.</given-names></name> <name><surname>Hickling</surname> <given-names>A.</given-names></name> <name><surname>Yau</surname> <given-names>S.</given-names></name> <name><surname>Brecknell</surname> <given-names>B.</given-names></name> <name><surname>Sanderson</surname> <given-names>P. M.</given-names></name></person-group> (<year>2017</year>). <article-title>Spearcons for patient monitoring: laboratory investigation comparing earcons and spearcons.</article-title> <source><italic>Hum. Factors</italic></source> <volume>59</volume> <fpage>765</fpage>&#x2013;<lpage>781</lpage>. <pub-id pub-id-type="doi">10.1177/0018720817697536</pub-id> <pub-id pub-id-type="pmid">28570832</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ljungberg</surname> <given-names>J. K.</given-names></name> <name><surname>Parmentier</surname> <given-names>F.</given-names></name></person-group> (<year>2012</year>). <article-title>The impact of intonation and valence on objective and subjective attention capture by auditory alarms.</article-title> <source><italic>Hum. Factors</italic></source> <volume>54</volume> <fpage>826</fpage>&#x2013;<lpage>837</lpage>. <pub-id pub-id-type="doi">10.1177/0018720812438613</pub-id> <pub-id pub-id-type="pmid">23156626</pub-id></citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Logie</surname> <given-names>R. H.</given-names></name> <name><surname>Zucco</surname> <given-names>G. M.</given-names></name> <name><surname>Baddeley</surname> <given-names>A. D.</given-names></name></person-group> (<year>1990</year>). <article-title>Interference with visual short-term memory.</article-title> <source><italic>Acta Psychol.</italic></source> <volume>75</volume> <fpage>55</fpage>&#x2013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.1016/0001-6918(90)90066-O</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ludovico</surname> <given-names>L. A.</given-names></name> <name><surname>Presti</surname> <given-names>G.</given-names></name></person-group> (<year>2016</year>). <article-title>The sonification space: a reference system for sonification tasks.</article-title> <source><italic>Int. J. Hum. Comput. Stud.</italic></source> <volume>85</volume> <fpage>72</fpage>&#x2013;<lpage>77</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijhcs.2015.08.008</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Macken</surname> <given-names>W. J.</given-names></name> <name><surname>Jones</surname> <given-names>D. M.</given-names></name></person-group> (<year>2003</year>). <article-title>Reification of phonological storage.</article-title> <source><italic>Q. J. Exp. Psychol. A</italic></source> <volume>56</volume> <fpage>1279</fpage>&#x2013;<lpage>1288</lpage>. <pub-id pub-id-type="doi">10.1080/02724980245000052</pub-id> <pub-id pub-id-type="pmid">14578084</pub-id></citation></ref>
<ref id="B51"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Macken</surname> <given-names>W. J.</given-names></name> <name><surname>Phelps</surname> <given-names>F. G.</given-names></name> <name><surname>Jones</surname> <given-names>D. M.</given-names></name></person-group> (<year>2009</year>). <article-title>What causes auditory distraction?</article-title> <source><italic>Psychon. Bull. Rev.</italic></source> <volume>16</volume> <fpage>139</fpage>&#x2013;<lpage>144</lpage>. <pub-id pub-id-type="doi">10.3758/PBR.16.1.139</pub-id> <pub-id pub-id-type="pmid">19145024</pub-id></citation></ref>
<ref id="B52"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McNeer</surname> <given-names>R. R.</given-names></name> <name><surname>Horn</surname> <given-names>D. B.</given-names></name> <name><surname>Bennett</surname> <given-names>C. L.</given-names></name> <name><surname>Edworthy</surname> <given-names>J. R.</given-names></name> <name><surname>Dudaryk</surname> <given-names>R.</given-names></name></person-group> (<year>2018</year>). <article-title>Auditory icon alarms are more accurately and quickly identified than current standard melodic alarms in a simulated clinical setting.</article-title> <source><italic>Anesthesiology</italic></source> <volume>129</volume> <fpage>58</fpage>&#x2013;<lpage>66</lpage>. <pub-id pub-id-type="doi">10.1097/ALN.0000000000002234</pub-id> <pub-id pub-id-type="pmid">29698253</pub-id></citation></ref>
<ref id="B53"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meiser</surname> <given-names>T.</given-names></name> <name><surname>Klauer</surname> <given-names>K.</given-names></name></person-group> (<year>1999</year>). <article-title>Working memory and changing-state hypothesis.</article-title> <source><italic>J. Exp. Psychol. Learn. Mem. Cogn.</italic></source> <volume>25</volume> <fpage>1272</fpage>&#x2013;<lpage>1299</lpage>. <pub-id pub-id-type="doi">10.1037/0278-7393.25.5.1272</pub-id></citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Morey</surname> <given-names>C. C.</given-names></name> <name><surname>Hadley</surname> <given-names>L. V.</given-names></name> <name><surname>Buttelmann</surname> <given-names>F.</given-names></name> <name><surname>Koenen</surname> <given-names>T.</given-names></name> <name><surname>Meaney</surname> <given-names>J.-A.</given-names></name> <name><surname>Auyeung</surname> <given-names>B.</given-names></name><etal/></person-group> (<year>2018</year>). <article-title>The effects of verbal and spatial memory load on children&#x2019;s processing speed.</article-title> <source><italic>Ann. N. Y. Acad. Sci.</italic></source> <volume>1424</volume> <fpage>161</fpage>&#x2013;<lpage>174</lpage>. <pub-id pub-id-type="doi">10.1111/nyas.13653</pub-id> <pub-id pub-id-type="pmid">29707802</pub-id></citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Neath</surname> <given-names>I.</given-names></name> <name><surname>Surprenant</surname> <given-names>A.</given-names></name> <name><surname>LeCompte</surname> <given-names>D.</given-names></name></person-group> (<year>1998</year>). <article-title>Irrelevant speech eliminates the word length effect.</article-title> <source><italic>Mem. Cognit.</italic></source> <volume>26</volume> <fpage>343</fpage>&#x2013;<lpage>354</lpage>. <pub-id pub-id-type="doi">10.3758/BF03201145</pub-id> <pub-id pub-id-type="pmid">9584441</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nittono</surname> <given-names>H.</given-names></name></person-group> (<year>1997</year>). <article-title>Background instrumental music and serial recall.</article-title> <source><italic>Percept. Mot. Skills</italic></source> <volume>84</volume> <fpage>1307</fpage>&#x2013;<lpage>1313</lpage>. <pub-id pub-id-type="doi">10.2466/pms.1997.84.3c.1307</pub-id> <pub-id pub-id-type="pmid">9229452</pub-id></citation></ref>
<ref id="B57"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Palladino</surname> <given-names>D. K.</given-names></name> <name><surname>Walker</surname> <given-names>B. N.</given-names></name></person-group> (<year>2007</year>). &#x201C;<article-title>Learning rates for auditory menus enhanced with spearcons versus earcons</article-title>,&#x201D; in <source><italic>Proceedings of the 13th International Conference on Auditory Display (ICAD-2007)</italic></source>, <publisher-loc>Montreal, QC</publisher-loc>, <fpage>274</fpage>&#x2013;<lpage>279</lpage>.</citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Petocz</surname> <given-names>A.</given-names></name> <name><surname>Keller</surname> <given-names>P. E.</given-names></name> <name><surname>Stevens</surname> <given-names>C. J.</given-names></name></person-group> (<year>2008</year>). <article-title>Auditory warnings, signal-referent relations, and natural indicators: re-thinking theory and application.</article-title> <source><italic>J. Exp. Psychol. Appl.</italic></source> <volume>14</volume> <fpage>165</fpage>&#x2013;<lpage>178</lpage>. <pub-id pub-id-type="doi">10.1037/1076-898X.14.2.165</pub-id> <pub-id pub-id-type="pmid">18590372</pub-id></citation></ref>
<ref id="B59"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Portrat</surname> <given-names>S.</given-names></name> <name><surname>Camos</surname> <given-names>V.</given-names></name> <name><surname>Barrouillet</surname> <given-names>P.</given-names></name></person-group> (<year>2009</year>). <article-title>Working memory in children: a time-constrained functioning similar to adults.</article-title> <source><italic>J. Exp. Child Psychol.</italic></source> <volume>102</volume> <fpage>368</fpage>&#x2013;<lpage>374</lpage>. <pub-id pub-id-type="doi">10.1016/j.jecp.2008.05.005</pub-id> <pub-id pub-id-type="pmid">18632113</pub-id></citation></ref>
<ref id="B60"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Salam&#x00E9;</surname> <given-names>P.</given-names></name> <name><surname>Baddeley</surname> <given-names>A.</given-names></name></person-group> (<year>1987</year>). <article-title>Noise, unattended speech and short-term memory.</article-title> <source><italic>Ergonomics</italic></source> <volume>30</volume> <fpage>1185</fpage>&#x2013;<lpage>1194</lpage>. <pub-id pub-id-type="doi">10.1080/00140138708966007</pub-id> <pub-id pub-id-type="pmid">3691472</pub-id></citation></ref>
<ref id="B61"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sanderson</surname> <given-names>P. M.</given-names></name> <name><surname>Brecknell</surname> <given-names>B.</given-names></name> <name><surname>Leong</surname> <given-names>S.</given-names></name> <name><surname>Klueber</surname> <given-names>S.</given-names></name> <name><surname>Wolf</surname> <given-names>E.</given-names></name> <name><surname>Hickling</surname> <given-names>A.</given-names></name><etal/></person-group> (<year>2019</year>). <article-title>Monitoring vital signs with time-compressed speech.</article-title> <source><italic>J. Exp. Psychol. Appl.</italic></source> <volume>25</volume> <fpage>647</fpage>&#x2013;<lpage>673</lpage>. <pub-id pub-id-type="doi">10.1037/xap0000217</pub-id> <pub-id pub-id-type="pmid">30883150</pub-id></citation></ref>
<ref id="B62"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stevanovski</surname> <given-names>B.</given-names></name> <name><surname>Jolicoeur</surname> <given-names>P.</given-names></name></person-group> (<year>2007</year>). <article-title>Visual short-term memory: central capacity limitations in short-term consolidation.</article-title> <source><italic>Vis. Cogn.</italic></source> <volume>15</volume> <fpage>532</fpage>&#x2013;<lpage>563</lpage>. <pub-id pub-id-type="doi">10.1080/13506280600871917</pub-id></citation></ref>
<ref id="B63"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Strayer</surname> <given-names>D. L.</given-names></name> <name><surname>Johnston</surname> <given-names>W. A.</given-names></name></person-group> (<year>2001</year>). <article-title>Driven to distraction: dual-task studies of simulated driving and conversing on a cellular telephone.</article-title> <source><italic>Psychol. Sci.</italic></source> <volume>12</volume> <fpage>462</fpage>&#x2013;<lpage>466</lpage>. <pub-id pub-id-type="doi">10.1111/1467-9280.00386</pub-id> <pub-id pub-id-type="pmid">11760132</pub-id></citation></ref>
<ref id="B64"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Titova</surname> <given-names>N.</given-names></name> <name><surname>N&#x00E4;&#x00E4;t&#x00E4;nen</surname> <given-names>R.</given-names></name></person-group> (<year>2001</year>). <article-title>Preattentive voice discrimination by the human brain as indexed by the mismatch negativity.</article-title> <source><italic>Neurosci. Lett.</italic></source> <volume>308</volume> <fpage>63</fpage>&#x2013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1016/s0304-3940(01)01970-x</pub-id></citation></ref>
<ref id="B65"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tremblay</surname> <given-names>S.</given-names></name> <name><surname>Jones</surname> <given-names>D. M.</given-names></name></person-group> (<year>1998</year>). <article-title>Role of habituation in the irrelevant sound effect: evidence from the effects of token set size and rate of transition.</article-title> <source><italic>J. Exp. Psychol. Learn. Mem. Cogn.</italic></source> <volume>24</volume> <fpage>659</fpage>&#x2013;<lpage>671</lpage>. <pub-id pub-id-type="doi">10.1037/0278-7393.24.3.659</pub-id></citation></ref>
<ref id="B66"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vergauwe</surname> <given-names>E.</given-names></name> <name><surname>Barrouillet</surname> <given-names>P.</given-names></name> <name><surname>Camos</surname> <given-names>V.</given-names></name></person-group> (<year>2010</year>). <article-title>Do mental processes share a domain-general resource?</article-title> <source><italic>Psychol. Sci.</italic></source> <volume>21</volume> <fpage>384</fpage>&#x2013;<lpage>390</lpage>. <pub-id pub-id-type="doi">10.1177/0956797610361340</pub-id> <pub-id pub-id-type="pmid">20424075</pub-id></citation></ref>
<ref id="B67"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Walker</surname> <given-names>B. N.</given-names></name> <name><surname>Lindsay</surname> <given-names>J.</given-names></name> <name><surname>Nance</surname> <given-names>A.</given-names></name> <name><surname>Nakano</surname> <given-names>Y.</given-names></name> <name><surname>Palladino</surname> <given-names>D. K.</given-names></name> <name><surname>Dingler</surname> <given-names>T.</given-names></name><etal/></person-group> (<year>2013</year>). <article-title>Spearcons (Speech-Based Earcons) improve navigation performance in advanced auditory menus.</article-title> <source><italic>Hum. Factors</italic></source> <volume>55</volume> <fpage>157</fpage>&#x2013;<lpage>182</lpage>. <pub-id pub-id-type="doi">10.1177/0018720812450587</pub-id> <pub-id pub-id-type="pmid">23516800</pub-id></citation></ref>
<ref id="B68"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Watson</surname> <given-names>M.</given-names></name> <name><surname>Sanderson</surname> <given-names>P.</given-names></name> <name><surname>Russell</surname> <given-names>W. J.</given-names></name></person-group> (<year>2004</year>). <article-title>Tailoring reveals information requirements: the case of anaesthesia alarms.</article-title> <source><italic>Interact. Comput.</italic></source> <volume>16</volume> <fpage>271</fpage>&#x2013;<lpage>293</lpage>. <pub-id pub-id-type="doi">10.1016/j.intcom.2003.12.002</pub-id></citation></ref>
<ref id="B69"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wickens</surname> <given-names>C. D.</given-names></name></person-group> (<year>2002</year>). <article-title>Multiple resources and performance prediction.</article-title> <source><italic>Theor. Issues Ergon. Sci.</italic></source> <volume>3</volume> <fpage>159</fpage>&#x2013;<lpage>177</lpage>. <pub-id pub-id-type="doi">10.1080/14639220210123806</pub-id></citation></ref>
<ref id="B70"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wickens</surname> <given-names>C. D.</given-names></name></person-group> (<year>2008</year>). <article-title>Multiple resources and mental workload.</article-title> <source><italic>Hum. Factors</italic></source> <volume>50</volume> <fpage>449</fpage>&#x2013;<lpage>455</lpage>. <pub-id pub-id-type="doi">10.1518/001872008x288394</pub-id> <pub-id pub-id-type="pmid">18689052</pub-id></citation></ref>
<ref id="B71"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wolters</surname> <given-names>M. K.</given-names></name> <name><surname>Isaac</surname> <given-names>K. B.</given-names></name> <name><surname>Doherty</surname> <given-names>J. M.</given-names></name></person-group> (<year>2012</year>). &#x201C;<article-title>Hold that thought: are spearcons less disruptive than spoken reminders?</article-title>,&#x201D; in <source><italic>Proceedings of the CHI&#x2019;12 Extended Abstracts on Human Factors in Computing Systems</italic></source>, <publisher-loc>Austin, TX</publisher-loc>, <fpage>1745</fpage>&#x2013;<lpage>1750</lpage>. <pub-id pub-id-type="doi">10.1145/2212776.2223703</pub-id></citation></ref>
</ref-list>
</back>
</article>