<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. ICT</journal-id>
<journal-title>Frontiers in ICT</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. ICT</abbrev-journal-title>
<issn pub-type="epub">2297-198X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fict.2017.00013</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>ICT</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Virtual Reality Training for Public Speaking&#x02014;A QUEST-VR Framework Validation</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Poeschl</surname> <given-names>Sandra</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="cor1">&#x0002A;</xref>
<uri xlink:href="http://frontiersin.org/people/u/357379"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Media Psychology and Media Design Group, Department of Economic Sciences and Media, Institute for Media and Communication Science, TU Ilmenau</institution>, <addr-line>Ilmenau</addr-line>, <country>Germany</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: John Quarles, University of Texas at San Antonio, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Joseph Gabbard, Virginia Tech, United States; Mar Gonzalez-Franco, Microsoft Research, United States; Bruno Herbelin, &#x000C9;cole Polytechnique F&#x000E9;d&#x000E9;rale de Lausanne, Switzerland</p></fn>
<corresp content-type="corresp" id="cor1">&#x0002A;Correspondence: Sandra Poeschl, <email>sandra.poeschl&#x00040;tu-ilmenau.de</email></corresp>
<fn fn-type="other" id="fn001"><p>Specialty section: This article was submitted to Virtual Environments, a section of the journal Frontiers in ICT</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>19</day>
<month>06</month>
<year>2017</year>
</pub-date>
<pub-date pub-type="collection">
<year>2017</year>
</pub-date>
<volume>4</volume>
<elocation-id>13</elocation-id>
<history>
<date date-type="received">
<day>23</day>
<month>06</month>
<year>2016</year>
</date>
<date date-type="accepted">
<day>03</day>
<month>05</month>
<year>2017</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2017 Poeschl.</copyright-statement>
<copyright-year>2017</copyright-year>
<copyright-holder>Poeschl</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Good public speaking skills are essential in many professions as well as everyday life, but speech anxiety is a common problem. While it is established that public speaking training in virtual reality (VR) is effective, comprehensive studies on the underlying factors that contribute to this success are rare. This article presents the &#x0201C;quality evaluation of user-system interaction in virtual reality&#x0201D; framework for the evaluation of VR applications, which includes system features, user factors, and moderating variables. Based on this framework, variables that are postulated to influence the quality of a public speaking training application were selected for a first validation study. In a cross-sectional, repeated measures laboratory study [<italic>N</italic>&#x02009;&#x0003D;&#x02009;36 undergraduate students; 36% men, 64% women, mean age&#x02009;&#x0003D;&#x02009;26.42&#x02009;years (SD&#x02009;&#x0003D;&#x02009;3.42)], the effects of task difficulty (independent variable), ability to concentrate, fear of public speaking, and social presence (covariates) on public speaking performance (dependent variable) in a virtual training scenario were analyzed, using stereoscopic visualization on a screen. The results indicate that the covariates moderate the effect of task difficulty on speech performance, rendering it non-significant. Further interrelations are explored. The presenter&#x02019;s reaction to the virtual agents in the audience tends to share explained variance with task difficulty. This underlines the need for more studies dedicated to the interaction of contributing factors in determining the quality of VR public speaking applications.</p>
</abstract>
<kwd-group>
<kwd>virtual reality</kwd>
<kwd>training</kwd>
<kwd>task difficulty</kwd>
<kwd>social presence</kwd>
<kwd>ability to concentrate</kwd>
<kwd>fear of public speaking</kwd>
<kwd>speech performance</kwd>
</kwd-group>
<counts>
<fig-count count="7"/>
<table-count count="5"/>
<equation-count count="0"/>
<ref-count count="59"/>
<page-count count="13"/>
<word-count count="10154"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="introduction">
<title>Introduction</title>
<p>Virtual reality (VR) technology as a tool offers great possibilities for training and therapy purposes. It provides a new and complex human&#x02013;computer interaction paradigm (Nijholt, <xref ref-type="bibr" rid="B31">2014</xref>), since users are no longer &#x0201C;external observers of images on a computer screen but are active participants in a computer-generated three-dimensional (3D) world&#x0201D; (Bowman and Hodges, <xref ref-type="bibr" rid="B5">1999</xref>, p. 37). With VR applications, ecologically valid training and therapy scenarios can be presented that otherwise are hard to realize (for example, for training a presentation in front of a large audience or an audience with a different cultural background). Especially in comparison with traditional methods, they provide further advantages: stimulus presentation can be controlled and adapted to the clients&#x02019; progress, the scenarios are safe and minimize consequences of mistakes and are, therefore, often more acceptable, and virtual agents can be integrated into applications that aim at the training of social interactions (Wiederhold and Wiederhold, <xref ref-type="bibr" rid="B57">2005b</xref>).</p>
<p>To date, a considerable amount of research has been conducted on VR applications in a clinical context, investigating the use of virtual reality exposure therapy (VRET) for anxiety disorders. Meta-analyses show on the one hand that VRET leads to considerable reduction of negative affective symptoms for anxiety disorders and phobias like posttraumatic stress disorder (PTSD), social phobia, arachnophobia, acrophobia, panic disorder with agoraphobia, and aviophobia (Parsons and Rizzo, <xref ref-type="bibr" rid="B35">2008</xref>). On the other hand, VRET also seems to be a promising intervention compared to classical evidence-based treatments for anxiety disorders. A meta-analysis by Opris et al. (<xref ref-type="bibr" rid="B33">2012</xref>) analyzed VRET outcomes for fear of flying, panic disorder/agoraphobia, social phobia, arachnophobia, acrophobia, and PTSD. The findings show that VRET leads to better outcomes than waiting-list control. Further, VRET shows similar efficacy to classical interventions without VR exposure and comparable real-life impact with good stability over time. Similar results were obtained in another meta-analysis on VRET for specific phobias, social phobia, PTSD, and panic disorder, which even showed a small effect size in favor of VRET over <italic>in vivo</italic> exposure (Powers and Emmelkamp, <xref ref-type="bibr" rid="B39">2008</xref>). However, moderator analyses on these meta-analytic effects (e.g., the influence of presence, immersion, or demographics) are often limited due to inconsistent reporting in the literature (Parsons and Rizzo, <xref ref-type="bibr" rid="B35">2008</xref>).</p>
<p>One upcoming application context for VR social anxiety applications is <italic>public speaking therapy and training applications</italic>. Good public speaking skills are nowadays important in many professions and in everyday life. However, they require extensive training (Chollet et al., <xref ref-type="bibr" rid="B11">2015</xref>). At the same time, fear of public speaking or public speaking anxiety is one of the most common social phobias in the world (Lee et al., <xref ref-type="bibr" rid="B22">2002</xref>). It is characterized by anxiety even prior to, or at the mere thought of, having to communicate verbally with any group of people. For phobic people, it even leads to avoidance of events that focus the group&#x02019;s attention on them. Fear of public speaking can lead to physical distress and even panic (Rothwell, <xref ref-type="bibr" rid="B42">2004</xref>) as well as lower speech performance (Menzel and Carrell, <xref ref-type="bibr" rid="B27">1994</xref>). State anxiety needs to be distinguished from trait anxiety as a &#x0201C;personality trait,&#x0201D; though: state anxiety is &#x0201C;dependent upon both the person (trait anxiety) and the stressful situation&#x0201D; (Endler and Kocovski, <xref ref-type="bibr" rid="B13">2001</xref>, p. 242). This means that the anxiety or fear experienced and triggered in a specific situation like giving a public speech should be considered and assessed as state anxiety (Menzel and Carrell, <xref ref-type="bibr" rid="B27">1994</xref>). Treatment involves cognitive behavioral therapy (CBT), which includes exposure to fear-triggering stimuli (e.g., speaking in front of a group), reframing thoughts associated with the social scene, social skills training, and relaxation training (Wiederhold and Wiederhold, <xref ref-type="bibr" rid="B59">2005a</xref>). Clinical VR public speaking applications are well researched, and the findings are in line with the state of research on anxiety disorders discussed above.
Virtual audiences can induce anxiety in phobic and non-phobic people (Pertaub et al., <xref ref-type="bibr" rid="B36">2002</xref>; Slater et al., <xref ref-type="bibr" rid="B49">2006</xref>). Further, repeated exposure to a virtual audience can result in a reduction of fear of public speaking symptoms (Wallach et al., <xref ref-type="bibr" rid="B55">2009</xref>). In general, virtual fear of public speaking applications can be considered an effective supplement in CBT, especially when compared to waiting-list control conditions (Wallach et al., <xref ref-type="bibr" rid="B55">2009</xref>). Recent findings suggest that VR public speaking applications might also be a promising tool for <italic>training</italic>: public speaking training applications with high simulation fidelity that depict realistic audiences not only lead to higher presence (which is defined as the user&#x02019;s psychological response to a VR system; Slater, <xref ref-type="bibr" rid="B47">2003</xref>) and better performance but also to better transfer of the gained skills into practice (Kothgassner et al., <xref ref-type="bibr" rid="B19">2012</xref>).</p>
<p>Given the increasing distribution of VR public speaking applications, the need for systematic evaluation of their quality arises. High-quality applications fulfill their expected purpose with as few resources as possible and in a satisfying way (see Section on &#x0201C;<xref ref-type="sec" rid="S2-1-5">Quality</xref>&#x0201D;). In the case of public speaking, training in VR should increase speech performance in order to be successful and effective. Various factors determine training success; they are discussed in the following paragraphs.</p>
<p>Virtual reality training applications provide tasks to be fulfilled by the users. One of the most important task aspects is <italic>task difficulty</italic> (Sheridan, <xref ref-type="bibr" rid="B46">1992</xref>). It is widely recognized and implemented in VR applications, especially in rehabilitation/training (Sveistrup, <xref ref-type="bibr" rid="B54">2004</xref>) or assessment applications (Negu&#x00163; et al., <xref ref-type="bibr" rid="B30">2016</xref>). A recent meta-analysis compared task difficulty of VR assessment tools for cognitive performance with paper-pencil and computerized measures. The findings suggest that tasks in VR have, on the one hand, high ecological validity, as high-fidelity VR closely replicates &#x0201C;real world environments with stressors, distractors, and complex stimuli&#x0201D; (Negu&#x00163; et al., <xref ref-type="bibr" rid="B30">2016</xref>, p. 418). On the other hand, they can also have an increased level of complexity compared to tasks in more traditional cognitive performance measures for the same reasons. They demand more cognitive resources, as a larger amount of information needs to be manipulated and processed while fulfilling assessment tasks (Negu&#x00163; et al., <xref ref-type="bibr" rid="B30">2016</xref>). However, the design of the virtual environment plays a role: poor display and/or interaction fidelity can decrease task performance, whereas good design might lower task difficulty and lead to better performance (Stickel et al., <xref ref-type="bibr" rid="B52">2010</xref>; McMahan et al., <xref ref-type="bibr" rid="B26">2012</xref>). This highlights the importance of guided design (Bowman and Hodges, <xref ref-type="bibr" rid="B5">1999</xref>). Further, task difficulty is related not only to performance but also to presence in VR, because all these variables depend on the allocation of cognitive resources.
Given the limitation of human cognitive resources, those allocated to the task at hand cannot be invested, for example, in the experience of presence (Nash et al., <xref ref-type="bibr" rid="B29">2000</xref>). The state of research is mixed in this respect: simple and highly automated tasks will probably not require a high level of presence in order to show high performance. More complicated tasks show a differentiated pattern: on the one hand, they seem to have a negative effect on presence (Riley, <xref ref-type="bibr" rid="B41">2001</xref>; Slater et al., <xref ref-type="bibr" rid="B50">1998</xref>), as more cognitive resources are allocated to the task and fewer to the environment (Nash et al., <xref ref-type="bibr" rid="B29">2000</xref>). On the other hand, several findings suggest that tasks demanding many attentional resources may result in higher levels of presence in VR and perhaps even higher performance (Nash et al., <xref ref-type="bibr" rid="B29">2000</xref>). Transferred to the public speaking context, task difficulty comprises several dimensions, for example, the content of the speech (e.g., giving a talk on countries visited during a vacation vs. presenting the results of a scientific study), preparation (how much time was invested in preparing and rehearsing the talk; Menzel and Carrell, <xref ref-type="bibr" rid="B27">1994</xref>), presentation (reading from a script vs. talking freely), and audience characteristics [e.g., formal or casual audience members; see also Morreale et al. (<xref ref-type="bibr" rid="B28">2007</xref>)]. These specific task difficulty dimensions for public speaking and their role in VR applications have not been studied to date.</p>
<p>Against the background of public speaking training applications, the <italic>ability to concentrate</italic> on the task at hand in VR is relevant (Schuemie et al., <xref ref-type="bibr" rid="B44">2001</xref>; Sacau et al., <xref ref-type="bibr" rid="B43">2008</xref>). Given its relation to the allocation of cognitive resources for attention and concentration, ability to concentrate is a highly relevant user state for task performance and can influence presence (Draper et al., <xref ref-type="bibr" rid="B12">1998</xref>; Macedonio et al., <xref ref-type="bibr" rid="B25">2007</xref>). VR applications represent tools for diagnosis and therapy with high ecological validity and effectiveness for disorders related to attention and concentration, like attention deficit disorder (Cho et al., <xref ref-type="bibr" rid="B10">2002</xref>; Anton et al., <xref ref-type="bibr" rid="B1">2009</xref>), as well as for memory training (Optale et al., <xref ref-type="bibr" rid="B34">2010</xref>). Studies with non-clinical samples on ability to concentrate are uncommon, though.</p>
<p>Further, <italic>presence</italic> is one of the most researched constructs in VR applications. Besides its function in training applications (Kothgassner et al., <xref ref-type="bibr" rid="B19">2012</xref>), presence is considered to play a key role in VR therapy for anxiety disorders (Wiederhold and Wiederhold, <xref ref-type="bibr" rid="B57">2005b</xref>). As briefly defined above, presence can be described as a user&#x02019;s subjective psychological response to a VR system or the sense of &#x0201C;being there&#x0201D; (Reeves, <xref ref-type="bibr" rid="B40">1991</xref>; Slater, <xref ref-type="bibr" rid="B47">2003</xref>). Researchers agree that presence should trigger the experience of fear and anxiety in virtual environments for phobia treatment and training (Ling et al., <xref ref-type="bibr" rid="B23">2014</xref>). Inducing these emotional states is crucial for clients to confront them and to train skills to overcome their fear (Wiederhold and Wiederhold, <xref ref-type="bibr" rid="B58">1998</xref>). However, recent research revealed that the correlations between presence measures and anxiety are mixed and differ between phobias (Ling et al., <xref ref-type="bibr" rid="B23">2014</xref>). A recent meta-analysis (Ling et al., <xref ref-type="bibr" rid="B23">2014</xref>) even showed a null effect for social anxiety [see also Felnhofer et al. (<xref ref-type="bibr" rid="B14">2014</xref>)]. Ling et al. (<xref ref-type="bibr" rid="B23">2014</xref>) argue that &#x0201C;one might conclude that subjective presence measures do not capture the essential sense of presence that is responsible for activating fear related to social anxiety in individuals&#x0201D; (p. 8f.), but rather virtual presence or place illusion (Slater, <xref ref-type="bibr" rid="B48">2009</xref>).
In the case of fear of public speaking as a sub-form of social anxiety, &#x0201C;a simulation should include virtual human behavior actions that can be used as indicators for positive or negative human evaluation&#x0201D; (Poeschl and Doering, <xref ref-type="bibr" rid="B38">2015</xref>, p. 59). These aspects then have to be acknowledged in presence measures as well. The concept of social presence (SP) (Nowak and Biocca, <xref ref-type="bibr" rid="B32">2003</xref>) meets this requirement. Youngblut (<xref ref-type="bibr" rid="B60">2003</xref>) defines SP as follows: &#x0201C;Social presence occurs when users feel that a form, behavior, or sensory experience indicates the presence of another individual. The amount of social presence is the degree to which a user feels access to the intelligence, intentions, and sensory impressions of another&#x0201D; (p. 4). The &#x0201C;other&#x0201D; named in the definition not only addresses other human beings, but also computer-generated agents (Youngblut, <xref ref-type="bibr" rid="B60">2003</xref>). SP acknowledges personal interaction, including the sub-dimensions of co-presence (as a prerequisite), psychological involvement, and behavioral engagement (Biocca et al., <xref ref-type="bibr" rid="B3">2001</xref>).</p>
<p>As can be seen from the state of research, the quality of a VR public speaking application is a function of various factors. Given the well-established research and development as well as the increasing implementation of such public speaking applications, there is a need for integrative approaches to evaluating VR social anxiety treatments or trainings. This paper takes the factors discussed above into account. In order to integrate determinants that influence the quality and thereby the success of VR training applications, the &#x0201C;quality evaluation of user-system interaction in virtual reality&#x0201D; (QUEST-VR; see Section &#x0201C;<xref ref-type="sec" rid="S2-1">QUEST-VR Framework</xref>&#x0201D;) framework was developed and partially validated by the presented study. For reasons of research economy, some components, such as fidelity aspects and user traits, were not considered further. Based on the framework, four factors discussed above that contribute to public speaking performance (outcome) were selected for the evaluation of a VR public speaking training environment. Task difficulty (system factor) and ability to concentrate (user state) were selected because the state of research shows that these aspects affect performance in real life, and (social) presence (moderating factor) is claimed to be a key factor in VR scenarios. State fear of public speaking (moderating factor) is a further control variable, as such an environment is prone to induce this emotional state during the interaction, which lowers speech performance. Speech performance is considered the outcome variable and used as an indicator of training effectiveness and thereby of the application&#x02019;s quality.</p>
<p>The following hypotheses were derived based on the current state of research and the QUEST-VR framework:
<list list-type="simple">
<list-item><p>H1: SP, fear of public speaking, and ability to concentrate correlate with speech-giving performance in VR.</p></list-item>
<list-item><p>H2: High task difficulty (speech-giving without preparation) leads to lower public speaking performance in VR than low task difficulty (speech-giving with preparation).</p></list-item>
<list-item><p>H3: SP, fear of public speaking, and ability to concentrate influence the relation between task difficulty and speech-giving performance in VR.</p></list-item>
</list></p>
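<p>As an illustration of how H2 translates into a statistical test, the within-subject contrast between the two task difficulty conditions can be expressed as a paired <italic>t</italic> statistic on the per-participant difference scores. The following sketch uses hypothetical performance scores, not the study&#x02019;s data:</p>

```python
from math import sqrt
from statistics import fmean, stdev

# Hypothetical per-participant performance scores (1-3 rating scale)
# under the two task-difficulty conditions; the actual study used N = 36.
low_difficulty = [2.4, 2.1, 2.7, 1.9, 2.5, 2.2]   # with preparation
high_difficulty = [2.1, 2.0, 2.3, 1.8, 2.4, 1.9]  # without preparation

# H2 predicts lower performance without preparation; the paired t
# statistic tests the mean of the within-subject differences against 0.
diffs = [lo - hi for lo, hi in zip(low_difficulty, high_difficulty)]
t_stat = fmean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
# Compare t_stat against a t distribution with n - 1 degrees of freedom
# (e.g., via scipy.stats.t.sf) to obtain a p value.
```

<p>Testing H3 would additionally require entering SP, state fear of public speaking, and ability to concentrate as covariates, for example, in a repeated measures analysis of covariance, as reported in the Results.</p>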
</sec>
<sec id="S2" sec-type="materials|methods">
<title>Materials and Methods</title>
<sec id="S2-1">
<title>QUEST-VR Framework</title>
<p>Given the increasing application of clinical and non-clinical VR social anxiety training applications, there is a need for comprehensive evaluation approaches. The QUEST-VR<xref ref-type="fn" rid="fn1"><sup>1</sup></xref> framework was developed in order to systematically include various determinants that influence the quality and thereby the success of VR training applications.</p>
<p>The framework includes <italic>system</italic> and <italic>user characteristics</italic> as well as the <italic>system-user interaction</italic> and <italic>moderating factors</italic> (factors that result from the actual use situation) as determinants of a VR application&#x02019;s <italic>quality</italic> (see Figure <xref ref-type="fig" rid="F1">1</xref>). The factors selected for the empirical study that validated the framework are also provided in Figure <xref ref-type="fig" rid="F1">1</xref>.</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p><bold>The QUEST-VR (quality evaluation of user-system interaction in virtual reality) framework (see text footnote 1), including variables selected for the validation study</bold>. Main concepts of the framework are highlighted in grey.</p></caption>
<graphic xlink:href="fict-04-00013-g001.tif"/>
</fig>
<sec id="S2-1-1">
<title>System</title>
<p><italic>System</italic> features are factors that can be directly designed and manipulated. They comprise a system&#x02019;s application context (Bowman et al., <xref ref-type="bibr" rid="B6">2005</xref>), task characteristics (Nash et al., <xref ref-type="bibr" rid="B29">2000</xref>), and the system&#x02019;s fidelity (Bowman and McMahan, <xref ref-type="bibr" rid="B7">2007</xref>, see Figure <xref ref-type="fig" rid="F1">1</xref>). A suitable design is a necessary requirement for satisfactory use and outcomes of a specific system, as for example, poor user interaction in VR can decrease performance (Stickel et al., <xref ref-type="bibr" rid="B52">2010</xref>; McMahan et al., <xref ref-type="bibr" rid="B26">2012</xref>). Therefore, these factors are usually already considered in the design process (Bowman and Hodges, <xref ref-type="bibr" rid="B5">1999</xref>).</p>
<p>The <italic>application context</italic> determines what the specific application serves for. The specific tasks that have to be fulfilled by the users (Bowman et al., <xref ref-type="bibr" rid="B4">2008</xref>) as well as the <italic>task characteristics</italic> (e.g., several levels of difficulty) are derived from the specific context.</p>
<p><italic>Fidelity</italic> or immersion is defined as &#x0201C;the objective level of sensory fidelity a VR system provides&#x0201D; (Slater, <xref ref-type="bibr" rid="B47">2003</xref>). Fidelity can be further divided into display fidelity (Bowman and McMahan, <xref ref-type="bibr" rid="B7">2007</xref>), interaction fidelity (McMahan et al., <xref ref-type="bibr" rid="B26">2012</xref>), and simulation fidelity (Lee et al., <xref ref-type="bibr" rid="B21">2013</xref>). Fidelity aspects can affect user experience (e.g., presence) as well as user performance (Nash et al., <xref ref-type="bibr" rid="B29">2000</xref>).</p>
<p>For the validation study, public speaking training served as the application context and preparation as an aspect of task difficulty (see Figure <xref ref-type="fig" rid="F1">1</xref>). All participants trained in the same virtual environment that consisted of a virtual audience only (without prompting for example), in order to investigate this aspect of task difficulty without further influences.</p>
</sec>
<sec id="S2-1-2">
<title>User</title>
<p>The <italic>user component</italic> (see Figure <xref ref-type="fig" rid="F1">1</xref>) covers biological, physical, psychological, and social characteristics of users and is based on the human factors definitions by Chapanis (<xref ref-type="bibr" rid="B8">1991</xref>) and Stramler (<xref ref-type="bibr" rid="B53">1993</xref>). It includes human capabilities as well as human limitations that are relevant for safe, comfortable, and effective design, operation, or use of products or systems. These variables can be further categorized as traits (enduring personal qualities or attributes that influence behavior across situations) and states (temporary internal characteristics; Chaplin et al., <xref ref-type="bibr" rid="B9">1988</xref>).</p>
<p>Relevant user traits are adaptability, prior experience with VR, susceptibility to immersion, and socio-demographic variables like gender or age (Nash et al., <xref ref-type="bibr" rid="B29">2000</xref>; Youngblut, <xref ref-type="bibr" rid="B60">2003</xref>). Several <italic>states</italic> have been researched in relation to VR applications: relevant are, for example, motivation to interact with VR, attention resources, and identification with an avatar [for an overview, see Nash et al. (<xref ref-type="bibr" rid="B29">2000</xref>) and Youngblut (<xref ref-type="bibr" rid="B60">2003</xref>)].</p>
<p>For the validation study, ability to concentrate (see Figure <xref ref-type="fig" rid="F1">1</xref>) on the task at hand in VR was chosen as a user state.</p>
</sec>
<sec id="S2-1-3">
<title>User-System Interaction</title>
<p>The third component of the QUEST-VR framework is the <italic>user-system interaction</italic> (see Figure <xref ref-type="fig" rid="F1">1</xref>), representing the actual use of the system by a user. Within the use situation, users experience the system and, as a result, display certain behavioral actions that are related to performance. The displayed behavior is the result of the interaction between dispositional and situational variables in the specific use situation (Larsen and Buss, <xref ref-type="bibr" rid="B20">2013</xref>).</p>
</sec>
<sec id="S2-1-4">
<title>Moderating Factors</title>
<p>The effect that the user-system interaction has on quality measures of a VR application can be influenced by <italic>moderating</italic> or <italic>mediating factors</italic>, which result directly from the interaction. User-system interaction can lead to &#x0201C;side effects,&#x0201D; which can be intended (e.g., presence) or not (e.g., cyber-sickness). These effects play either <italic>a moderating or a mediating</italic> role and influence the effects of user-system interaction on the quality measures (see Figure <xref ref-type="fig" rid="F1">1</xref>).</p>
<p>In this study, state fear of public speaking and SP were analyzed as moderating factors that resulted directly from the use situation.</p>
</sec>
<sec id="S2-1-5">
<title>Quality</title>
<p>The <italic>quality</italic> of a VR application represents the outcome in the QUEST-VR framework (see Figure <xref ref-type="fig" rid="F1">1</xref>). Quality is defined by the International Organization for Standardization (ISO) as the &#x0201C;degree to which a set of inherent characteristics fulfills requirements&#x0201D; (International Organization for Standardization, <xref ref-type="bibr" rid="B18">2015</xref>). The degree represents the level to which a product or service satisfies requirements, which can be judged, for example, as good or poor product quality. For VR applications, this can be broken down into further aspects of quality that are also known from a usability context (effectiveness, efficiency, and satisfaction; ISO 9241-11, Part 11; International Organization for Standardization, <xref ref-type="bibr" rid="B17">1998</xref>). A VR training system shows high quality when the expected purpose of the application is fulfilled (an increase in performance, see Figure <xref ref-type="fig" rid="F1">1</xref>) with as few resources as possible, and when system usage is satisfying for the users. In the validation study, public speaking performance as a measure of training effectiveness was selected as the outcome variable.</p>
</sec>
</sec>
<sec id="S2-2">
<title>Research Design</title>
<p>A cross-sectional repeated measures laboratory study was conducted. Task difficulty (low vs. high, within-subject factor) constituted the independent variable. A within-subject design was chosen in order to reduce participant-based error variance and, therefore, to increase test power. Observed speech performance served as the dependent variable; SP, state fear of public speaking, and ability to concentrate were included as control variables. The study was designed, implemented, and conducted according to the guidelines of the APA research ethics committee.</p>
<p>Low task difficulty (first exposure) was implemented as speech-giving with preparation: participants received an article about the town where the study took place and the participants lived. The article was based on the respective Wikipedia article. The material handed out to participants is provided as Supplementary Material. They were given 10&#x02009;min to prepare a short speech about the town based on this article and were allowed to take notes that they could also use during the speech. They then delivered a speech of a maximum of 5&#x02009;min.</p>
<p>High task difficulty (second exposure) was implemented as speech-giving without preparation: directly after the first task, participants were asked to deliver another speech of a maximum of 5&#x02009;min, this time about their hometown. Subjects received a guideline consisting of bullet points comparable in content to the article for the first task, which is also provided as Supplementary Material. The task had to be fulfilled immediately, without further preparation or notes.</p>
<p>In order to control as well as possible for variability in prior knowledge about the residential town (low difficulty) and the hometown (high difficulty), the article as well as the bullet points covered a wide range of information (geography and demographics, schools and institutions, history, tourism and sights, and museums).</p>
<p>The sequence of tasks was not counterbalanced, because fulfilling the hard task before the easy task would have made the low task difficulty condition even easier due to practice effects. However, this means that learning effects from the easy task condition to the hard task condition are possible. Therefore, these effects need to be statistically differentiated from task difficulty effects (see Section &#x0201C;<xref ref-type="sec" rid="S3">Results</xref>&#x0201D;).</p>
</sec>
<sec id="S2-3">
<title>Participants</title>
<p>An <italic>ad hoc</italic> sample of <italic>N</italic>&#x02009;&#x0003D;&#x02009;37 undergraduate students at a mid-sized university in Germany was recruited via personal invitations, email, and a Facebook fan page. One participant was excluded due to a damaged video recording. The final sample consisted of <italic>N</italic>&#x02009;&#x0003D;&#x02009;36 participants [36% men, 64% women; mean age&#x02009;&#x0003D;&#x02009;26.42&#x02009;years (<italic>SD</italic>&#x02009;&#x0003D;&#x02009;3.42)].</p>
</sec>
<sec id="S2-4">
<title>Measures</title>
<p>The participants&#x02019; speech-giving performance was video-recorded and rated by four independent coders. Speech behavior was rated using the speech evaluation form (Lucas, <xref ref-type="bibr" rid="B24">2016</xref>), a standardized behavioral observation system. The coders were trained in using the system to ensure sufficient reliability. Due to the study design, only those categories applicable to speech-giving with a predefined topic and the use of prepared materials were used (three-point observation rating scale from 1&#x02009;&#x0003D;&#x02009;poor to 3&#x02009;&#x0003D;&#x02009;excellent performance, mean Spearman&#x02019;s &#x003C1;&#x02009;&#x0003D;&#x02009;0.76). The <italic>introduction of the speech</italic> was rated on whether the topic was introduced clearly, credibility was established, and the body of the speech was previewed. For the <italic>body of the speech</italic>, making points clearly and accurately and using clear language were evaluated. For the <italic>delivery</italic>, maintaining eye contact and communicating enthusiasm for the topic were rated. Rating of the <italic>conclusion</italic> comprised preparing the audience for the ending and reinforcing the central idea. For the <italic>overall evaluation</italic>, the coders rated whether the topic was challenging and narrowed and whether the speech met the assignment. The coders rated each of the 13 categories per exposure, using a single close-up video of the participant delivering the speech in the respective condition. The videos were distributed among the coders, so each video was rated by a single coder. The general speech performance score was calculated as the mean of the 13 ratings.</p>
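As an illustration of this scoring scheme (not the study's actual analysis code, and with invented ratings), the following sketch computes the general performance score as the mean of 13 category ratings and checks rank agreement between two hypothetical coders via Spearman's &#x003C1;, here obtained as the Pearson correlation of average ranks.

```python
# Hypothetical sketch of the scoring scheme: 13 categories rated from
# 1 (poor) to 3 (excellent); the general speech performance score is the
# mean of the 13 ratings. Inter-rater agreement during coder training can
# be checked with Spearman's rho (Pearson correlation of average ranks).
import numpy as np

def avg_ranks(x):
    """Average ranks (1-based), with ties receiving the mean rank."""
    order = np.argsort(x, kind="mergesort")
    sx = x[order]
    ranks = np.empty(len(x))
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # mean of tied positions
        i = j + 1
    return ranks

coder_a = np.array([3, 2, 2, 3, 1, 2, 3, 2, 2, 3, 2, 1, 3])  # invented ratings
coder_b = np.array([3, 2, 1, 3, 2, 2, 3, 2, 2, 3, 1, 1, 3])

performance_score = coder_a.mean()  # general speech performance score
rho = np.corrcoef(avg_ranks(coder_a), avg_ranks(coder_b))[0, 1]  # Spearman's rho
print(round(performance_score, 2), round(rho, 2))
```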
<p>Social presence was measured by the questionnaire developed by Poeschl and Doering (<xref ref-type="bibr" rid="B38">2015</xref>), because it specifically covers presence aspects in virtual public speaking environments. The questionnaire consists of four five-point Likert scales (from 1&#x02009;&#x0003D;&#x02009;strongly disagree to 5&#x02009;&#x0003D;&#x02009;strongly agree) on sub-dimensions of SP (see Table <xref ref-type="table" rid="T1">1</xref>).</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p><bold>Reliability statistics for social presence (SP) subscales, fear of public speaking, and ability to concentrate questionnaires</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left">Reliability statistics</th>
<th valign="top" align="center">Cronbach&#x02019;s &#x003B1;</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">SP (Poeschl and Doering, <xref ref-type="bibr" rid="B38">2015</xref>)</td>
<td align="center" valign="top"/>
</tr>
<tr>
<td align="left" valign="top">&#x02003;Presenter&#x02019;s reaction to virtual agents</td>
<td align="center" valign="top">0.92</td>
</tr>
<tr>
<td align="left" valign="top">&#x02003;Perceived virtual agents&#x02019; reaction</td>
<td align="center" valign="top">0.85</td>
</tr>
<tr>
<td align="left" valign="top">&#x02003;Impression of interaction possibilities</td>
<td align="center" valign="top">0.90</td>
</tr>
<tr>
<td align="left" valign="top">&#x02003;(Co-)presence of other people</td>
<td align="center" valign="top">0.76</td>
</tr>
<tr>
<td align="left" valign="top" colspan="2"><hr/></td>
</tr>
<tr>
<td align="left" valign="top">Personal report of confidence as a speaker&#x02014;short form (Hook et al., <xref ref-type="bibr" rid="B16">2008</xref>)</td>
<td align="center" valign="top">0.82</td>
</tr>
<tr>
<td align="left" valign="top" colspan="2"><hr/></td>
</tr>
<tr>
<td align="left" valign="top">Ability to concentrate</td>
<td align="center" valign="top">0.87</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>State fear of public speaking was measured by an adapted short form of the Personal Report of Confidence as a Speaker (Hook et al., <xref ref-type="bibr" rid="B16">2008</xref>). Questions are answered in a true&#x02013;false format, with scores ranging from 0&#x02009;&#x0003D;&#x02009;no fear of public speaking to 12&#x02009;&#x0003D;&#x02009;highest level of fear of public speaking. Items were adapted to speech-giving in a VR environment.</p>
<p>For measuring ability to concentrate in VR, a 10-item questionnaire (six-point Likert scale from 1&#x02009;&#x0003D;&#x02009;not at all to 6&#x02009;&#x0003D;&#x02009;very much) was developed, based on the Wender Utah Rating Scale (Ward et al., <xref ref-type="bibr" rid="B56">1993</xref>), and adapted to a virtual public speaking scenario.</p>
<p>Reliability statistics of the measures are provided in Table <xref ref-type="table" rid="T1">1</xref>.</p>
</sec>
<sec id="S2-5">
<title>Study Environment</title>
<p>The hardware setup for the study consisted of a workstation that provided the virtual environment (VE). The VE was created on a DELL workstation with an Intel(R) Xeon(R) CPU X5650 &#x00040; 2.67&#x02009;GHz, 12&#x02009;GB of RAM, and an NVIDIA GeForce GTX 560 graphics card with 2&#x02009;GB of RAM. The stereoscopic visualization was displayed by rear projection on a screen (2,800&#x02009;mm&#x02009;&#x000D7;&#x02009;2,100&#x02009;mm) using two DLP projectors with a native SXGA&#x02009;&#x0002B;&#x02009;(1,400&#x02009;&#x000D7;&#x02009;1,050) resolution. The software setup was based on the CryEngine3 (Version PC v3.4.0 3696 freeSDK) as a 3D engine for real-time rendering. The screen setup is presented in Figure <xref ref-type="fig" rid="F2">2</xref>. The visualization was projected on the middle screen.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p><bold>Screen setup for the study</bold>.</p></caption>
<graphic xlink:href="fict-04-00013-g002.tif"/>
</fig>
<p>At the time of the study, the application was a prototype with a basic visualization of audience behavior. The virtual scene (5&#x02009;min in length) was seen from a first-person perspective and consisted of a male audience of eight members sitting in a lecture room (see Figure <xref ref-type="fig" rid="F3">3</xref>). The agents showed random behaviors such as leaning forward or talking to each other. For reasons of research economy, no real interaction with the presenter was implemented (e.g., audience reactions to a boring style of presentation). A video of the visualization is provided as Supplementary Material. Further, audience behavior was specifically designed as neutral (neither explicitly positive nor negative; Slater et al., <xref ref-type="bibr" rid="B49">2006</xref>) and treated as a constant. For the same reason, no head tracking was implemented (rendering was not updated to the participants&#x02019; eye point); instead, participants were asked to stand in a sweet spot for the stereoscopic visualization.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p><bold>Screenshot of the virtual audience</bold>.</p></caption>
<graphic xlink:href="fict-04-00013-g003.tif"/>
</fig>
</sec>
<sec id="S2-6">
<title>Procedure</title>
<p>The experiment took place in June 2015. After an oral briefing, subjects completed a questionnaire on socio-demographic data. Afterward, they completed the two tasks. Subsequently, participants filled out questionnaires on SP, fear of public speaking, and ability to concentrate, before being debriefed. As the tasks were fulfilled in direct succession, the covariates were measured with regard to the whole experience in VR. This also kept the strain on participants to a minimum, because it reduced the duration of their participation in the study. In accordance with prior studies (see &#x0201C;<xref ref-type="sec" rid="S1">Introduction</xref>&#x0201D;), no training session in front of the virtual audience was conducted, and participants had no prior experience with the system. This prevented unplanned habituation effects that would have influenced task difficulty.</p>
</sec>
</sec>
<sec id="S3">
<title>Results</title>
<p>In general, participants performed rather well in both the low (first exposure) and the high task difficulty condition (second exposure). However, mean performance was descriptively higher in the low difficulty condition (see Figure <xref ref-type="fig" rid="F4">4</xref>). Fear of public speaking was also low to medium; however, it showed a rather wide range (see Figure <xref ref-type="fig" rid="F5">5</xref>).</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p><bold>Boxplot of speech performance scores in the low and the high task difficulty condition</bold>.</p></caption>
<graphic xlink:href="fict-04-00013-g004.tif"/>
</fig>
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p><bold>Boxplot of fear of public speaking scores</bold>.</p></caption>
<graphic xlink:href="fict-04-00013-g005.tif"/>
</fig>
<p>Social presence was medium for all sub-constructs, although co-presence of other people was somewhat higher than the other dimensions (see Figure <xref ref-type="fig" rid="F6">6</xref>).</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p><bold>Boxplot of social presence scores</bold>.</p></caption>
<graphic xlink:href="fict-04-00013-g006.tif"/>
</fig>
<p>Finally, ability to concentrate in VR was rather low (see Figure <xref ref-type="fig" rid="F7">7</xref>), possibly because using a VR public speaking system was a novel experience and participants were rather excited about the study.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p><bold>Boxplot of ability to concentrate scores</bold>.</p></caption>
<graphic xlink:href="fict-04-00013-g007.tif"/>
</fig>
<p>As a first step of hypothesis testing, it was examined whether the covariates (SP, fear of public speaking, and ability to concentrate) correlate with speech-giving performance in VR (Hypothesis 1). Bivariate Pearson correlations were computed (see Table <xref ref-type="table" rid="T2">2</xref>); all requirements for the data analysis were met.</p>
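The correlation step can be sketched as follows; this is a minimal illustration with invented data, not the study's data, showing the bivariate Pearson correlation between a performance measure and a covariate together with the usual <italic>t</italic> statistic for testing <italic>r</italic> against zero.

```python
# Minimal sketch (invented data) of the Hypothesis 1 correlation step:
# bivariate Pearson r between speech performance and a covariate, plus the
# t statistic for testing r against zero with df = n - 2.
import numpy as np

rng = np.random.default_rng(0)
n = 36  # sample size as in the study
performance = rng.normal(2.3, 0.35, n)                   # invented scores
fear = 5.0 - 2.0 * performance + rng.normal(0, 2.5, n)   # invented PRCS scores

r = np.corrcoef(performance, fear)[0, 1]   # bivariate Pearson r
t = r * np.sqrt((n - 2) / (1 - r ** 2))    # test statistic, df = n - 2
print(round(r, 3), round(t, 2))
```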
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p><bold>Intercorrelations between speech-giving performance for low and high task difficulty (experimental condition), SP dimensions, fear of public speaking, and ability to concentrate</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left">Measure</th>
<th valign="top" align="center">1</th>
<th valign="top" align="center">2</th>
<th valign="top" align="center">3</th>
<th valign="top" align="center">4</th>
<th valign="top" align="center">5</th>
<th valign="top" align="center">6</th>
<th valign="top" align="center">7</th>
<th valign="top" align="center">8</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">1. Speech-giving performance/low task difficulty</td>
<td align="center" valign="top">&#x02013;</td>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
</tr>
<tr>
<td align="left" valign="top">2. Speech-giving performance/high task difficulty</td>
<td align="center" valign="top">0.598<xref ref-type="table-fn" rid="tfn2">&#x0002A;&#x0002A;</xref></td>
<td align="center" valign="top">&#x02013;</td>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
</tr>
<tr>
<td align="left" valign="top">3. SP: presenter&#x02019;s reaction to virtual agents</td>
<td align="center" valign="top">&#x02212;0.407<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">&#x02212;0.094</td>
<td align="center" valign="top">&#x02013;</td>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
</tr>
<tr>
<td align="left" valign="top">4. SP: perceived virtual agents&#x02019; reaction</td>
<td align="center" valign="top">&#x02212;0.388<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">&#x02212;0.366<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">0.568<xref ref-type="table-fn" rid="tfn2">&#x0002A;&#x0002A;</xref></td>
<td align="center" valign="top">&#x02013;</td>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
</tr>
<tr>
<td align="left" valign="top">5. SP: impression of interaction possibilities</td>
<td align="center" valign="top">&#x02212;0.111</td>
<td align="center" valign="top">&#x02212;0.105</td>
<td align="center" valign="top">0.208</td>
<td align="center" valign="top">0.503<xref ref-type="table-fn" rid="tfn2">&#x0002A;&#x0002A;</xref></td>
<td align="center" valign="top">&#x02013;</td>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
</tr>
<tr>
<td align="left" valign="top">6. SP: (co-)presence of other people</td>
<td align="center" valign="top">&#x02212;0.054</td>
<td align="center" valign="top">&#x02212;0.174</td>
<td align="center" valign="top">0.070</td>
<td align="center" valign="top">0.233</td>
<td align="center" valign="top">0.421<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">&#x02013;</td>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
</tr>
<tr>
<td align="left" valign="top">7. Fear of public speaking (PRCS scale)</td>
<td align="center" valign="top">&#x02212;0.384<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">&#x02212;0.231</td>
<td align="center" valign="top">0.306</td>
<td align="center" valign="top">0.329<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">0.183</td>
<td align="center" valign="top">0.371<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">&#x02013;</td>
<td align="center" valign="top"/>
</tr>
<tr>
<td align="left" valign="top">8. Ability to concentrate</td>
<td align="center" valign="top">0.424<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">0.335<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">&#x02212;0.302</td>
<td align="center" valign="top">&#x02212;0.388<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">&#x02212;0.212</td>
<td align="center" valign="top">&#x02212;0.350<xref ref-type="table-fn" rid="tfn1">&#x0002A;</xref></td>
<td align="center" valign="top">&#x02212;0.795<xref ref-type="table-fn" rid="tfn2">&#x0002A;&#x0002A;</xref></td>
<td align="center" valign="top">&#x02013;</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>N&#x02009;&#x0003D;&#x02009;36</italic>.</p>
<p><italic>SP, social presence</italic>.</p>
<fn id="tfn1"><p><italic>&#x0002A;p&#x02009;&#x0003C;&#x02009;0.05</italic>.</p></fn>
<fn id="tfn2"><p><italic>&#x0002A;&#x0002A;p&#x02009;&#x0003C;&#x02009;0.01</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>As Table <xref ref-type="table" rid="T2">2</xref> shows, the SP dimensions did not generally correlate with speech-giving performance; where associations appeared, they tended to be negative. This is in line with prior research showing that, especially for unfamiliar tasks, presence can have a negative effect on performance (Nash et al., <xref ref-type="bibr" rid="B29">2000</xref>). Speaking in front of a virtual audience may have been a novelty for participants.</p>
<p>Further, the low and high task difficulty conditions (first vs. second exposure) showed different patterns: in the low difficulty condition, only the presenter&#x02019;s reaction to the virtual agents and the agents&#x02019; reactions as perceived by the presenter revealed significant correlations, with medium effect sizes. In the high difficulty condition, only the perceived virtual agents&#x02019; reaction showed a medium effect (see Table <xref ref-type="table" rid="T2">2</xref>). It seems that in the easy condition, where participants could prepare the speech, they had enough cognitive resources left to acknowledge their own reactions toward the audience. In both conditions, subjects seemed to attend to whether the audience reacted toward them. This is in line with previous research, because part of a public speaking scenario is anticipated human evaluation by the audience (Ling et al., <xref ref-type="bibr" rid="B23">2014</xref>). The impression of interaction possibilities and co-presence showed only small, non-significant effects. Giving a frontal speech is not a very interactive task (a discussion, for example, was not simulated); therefore, the respective presence dimension might not have played an important role in the study. Concerning co-presence, participants may have been aware that the virtual agents were not other human beings, as the agents in the visualization were clearly models. Further, participants may have related co-presence to the experimenters, who retreated during the speeches but did not leave the room due to the quick sequence of tasks administered to the subjects.</p>
<p>In line with the theoretical background, fear of public speaking showed a significant medium negative correlation with performance for the easy task and a negative tendency for the hard task (see Table <xref ref-type="table" rid="T2">2</xref>). The smaller effect for the hard task could be explained by the procedural sequence of the experiment, as the hard task was implemented in the second exposure: participants may have become accustomed to the environment and the task. Research shows that within CBT, fear decreases over the course of an exposure (Wiederhold and Wiederhold, <xref ref-type="bibr" rid="B59">2005a</xref>).</p>
<p>Finally, and unsurprisingly, higher ability to concentrate in VR showed positive and medium correlations with performance for both task conditions (see Table <xref ref-type="table" rid="T2">2</xref>). In light of the complex intercorrelation patterns, Hypothesis 1 could only be partially confirmed, i.e., for the dimensions showing significant effects.</p>
<p>For testing Hypothesis 2, a repeated measures ANOVA was conducted (requirements for data analysis were fulfilled). Task difficulty revealed a large effect on speech-giving performance [<italic>F</italic>(1, 35)&#x02009;&#x0003D;&#x02009;8.55; <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.006; <inline-formula><mml:math id="M1"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mtext>2</mml:mtext></mml:msubsup><mml:mo>=</mml:mo><mml:mn>0.20</mml:mn></mml:mrow></mml:math></inline-formula>]; high difficulty (M&#x02009;&#x0003D;&#x02009;2.13, SD&#x02009;&#x0003D;&#x02009;0.41) resulted in lower speech performance scores than low difficulty (M&#x02009;&#x0003D;&#x02009;2.30, SD&#x02009;&#x0003D;&#x02009;0.35). However, learning effects from the low difficulty condition (first exposure) are possible. Still, a learning effect would likely have led to better speech performance in the second (high difficulty) condition; therefore, the true effect of task difficulty could be even larger. Although this seems to support Hypothesis 2, a final statement concerning task difficulty effects cannot be derived due to the chosen study design.</p>
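For a within-subject factor with only two levels, the repeated measures ANOVA reduces to a paired <italic>t</italic> test, with <italic>F</italic>(1, <italic>n</italic>&#x02009;&#x02212;&#x02009;1)&#x02009;&#x0003D;&#x02009;<italic>t</italic><sup>2</sup>, which makes the reported effect size easy to reproduce. A sketch with invented scores (not the study's data):

```python
# Sketch (invented data): two-level within-subject ANOVA as a paired t test,
# with partial eta squared = F / (F + df_error) for this one-factor design.
import numpy as np

rng = np.random.default_rng(1)
n = 36
easy = rng.normal(2.30, 0.35, n)               # low difficulty scores
hard = easy - 0.17 + rng.normal(0, 0.30, n)    # high difficulty, a bit lower

d = easy - hard
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))    # paired t statistic
F = t ** 2                                     # F(1, n - 1)
eta_p2 = F / (F + (n - 1))                     # partial eta squared
print(round(F, 2), round(eta_p2, 2))
```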
<p>In order to test Hypothesis 3, an ANCOVA with the SP dimensions that showed significant correlations with speech performance, fear of public speaking, and ability to concentrate, was conducted (Table <xref ref-type="table" rid="T3">3</xref>). Requirements for data analysis were fulfilled.</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p><bold>Analysis of covariance of public speaking performance as a function of task difficulty with SP dimensions, fear of public speaking, and ability to concentrate as covariates</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left">Source</th>
<th valign="top" align="center"><italic>Df</italic></th>
<th valign="top" align="center"><italic>SS</italic></th>
<th valign="top" align="center"><italic>MS</italic></th>
<th valign="top" align="center"><italic>F</italic></th>
<th valign="top" align="center"><italic>p</italic></th>
<th valign="top" align="center"><inline-formula><mml:math id="M2"><mml:mrow><mml:msubsup><mml:mi>&#x003B7;</mml:mi><mml:mi>p</mml:mi><mml:mn>2</mml:mn></mml:msubsup></mml:mrow></mml:math></inline-formula></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">SP: presenter&#x02019;s reaction to virtual agents (C)</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">0.30</td>
<td align="center" valign="top">0.30</td>
<td align="center" valign="top">5.49</td>
<td align="center" valign="top">0.026</td>
<td align="center" valign="top">0.16</td>
</tr>
<tr>
<td align="left" valign="top">SP: perceived virtual agents&#x02019; reaction (C)</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">0.12</td>
<td align="center" valign="top">0.12</td>
<td align="center" valign="top">2.26</td>
<td align="center" valign="top">0.143</td>
<td align="center" valign="top">0.07</td>
</tr>
<tr>
<td align="left" valign="top">Fear of public speaking (PRCS scale) (C)</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">0.02</td>
<td align="center" valign="top">0.02</td>
<td align="center" valign="top">0.30</td>
<td align="center" valign="top">0.590</td>
<td align="center" valign="top">0.01</td>
</tr>
<tr>
<td align="left" valign="top">Ability to concentrate (C)</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">0.01</td>
<td align="center" valign="top">0.01</td>
<td align="center" valign="top">0.19</td>
<td align="center" valign="top">0.668</td>
<td align="center" valign="top">0.01</td>
</tr>
<tr>
<td align="left" valign="top">Task difficulty</td>
<td align="center" valign="top">1</td>
<td align="center" valign="top">0.07</td>
<td align="center" valign="top">0.07</td>
<td align="center" valign="top">1.33</td>
<td align="center" valign="top">0.258</td>
<td align="center" valign="top">0.04</td>
</tr>
<tr>
<td align="left" valign="top">Error</td>
<td align="center" valign="top">30</td>
<td align="center" valign="top">1.63</td>
<td align="center" valign="top">0.06</td>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>SP, social presence; C, covariate</italic>.</p>
</table-wrap-foot>
</table-wrap>
<p>Introducing the covariates decreased the effect of task difficulty from large to small, and it became non-significant. There seems to be an overlap between task difficulty and SP in the variance explained in performance (see Table <xref ref-type="table" rid="T3">3</xref>). The presenter&#x02019;s reaction to the virtual agents in the application seems especially relevant (it shows a medium effect size). This SP factor is constituted of items stating that the audience behavior influenced the presenters&#x02019; style of presentation and mood, that the presenters reacted to the people in the audience, and that they were distracted by them (Poeschl and Doering, <xref ref-type="bibr" rid="B38">2015</xref>). Reacting to the virtual audience could have required cognitive resources that could not simultaneously be allocated to the task at hand, which could have decreased performance.</p>
<p>However, due to the non-randomized presentation of the tasks (the low difficulty task in the first exposure, the high difficulty task in the second), learning effects cannot be ruled out. Further, the covariates showed complex correlation patterns with speech performance (see Table <xref ref-type="table" rid="T2">2</xref>). Therefore, conclusions on Hypothesis 3 cannot be drawn on the basis of this study. Although the ANCOVA revealed an overlap between the variance explained by the covariates and by task difficulty, there is no indication whether this overlap was partialed out of the variance explained in the low or the high task difficulty condition.</p>
<p>In order to gain more insight into possible influences of SP, fear of public speaking, and ability to concentrate on the difference in speech performance between the task difficulty conditions, the data were further analyzed in an explorative way.</p>
<p>Given the possibility of practice effects in this design, the quantity of interest is the difference between the two conditions after the effect of the low task difficulty condition has been partialed out of the high task difficulty condition. A new dependent variable was calculated from the standardized residuals of a linear regression of speech performance in the high difficulty condition (criterion) on speech performance in the low difficulty condition (predictor). This ensured that the new dependent variable represented the difference in speech performance between the task conditions that is independent of performance in the easy task [for this procedure, see also Schumann and Schultheiss (<xref ref-type="bibr" rid="B45">2009</xref>)].</p>
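The residualization step can be sketched as follows (invented data; a simple least-squares fit stands in for the study's regression analysis). By construction, the standardized residuals carry no linear information about easy-task performance.

```python
# Sketch of the residualization procedure: regress hard-task performance on
# easy-task performance and keep the standardized residuals as the new
# dependent variable. Data are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 36
easy = rng.normal(2.30, 0.35, n)
hard = 0.6 * easy + rng.normal(0, 0.30, n)

slope, intercept = np.polyfit(easy, hard, 1)   # least-squares fit hard ~ easy
residuals = hard - (intercept + slope * easy)
z_resid = (residuals - residuals.mean()) / residuals.std(ddof=1)

# The residualized scores are uncorrelated with the easy-task scores:
print(round(np.corrcoef(z_resid, easy)[0, 1], 10))
```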
<p>A linear multiple regression analysis was conducted to predict the residual performance difference (criterion) from the presenter&#x02019;s reaction to virtual agents, the perceived virtual agents&#x02019; reaction (both SP dimensions), fear of public speaking, and ability to concentrate (predictors). The requirements for this analysis were met. The descriptive statistics for the regression analysis are presented in Table <xref ref-type="table" rid="T4">4</xref>, and the summary of the regression analysis in Table <xref ref-type="table" rid="T5">5</xref>.</p>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p><bold>Means, SDs, and intercorrelations for residual difference of speech performance between task difficulty conditions and SP, fear of public speaking, and ability to concentrate predictor variables</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left">Variable</th>
<th valign="top" align="center"><italic>M</italic></th>
<th valign="top" align="center"><italic>SD</italic></th>
<th valign="top" align="center">1</th>
<th valign="top" align="center">2</th>
<th valign="top" align="center">3</th>
<th valign="top" align="center">4</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Residual difference of speech performance</td>
<td align="center" valign="top">0.04</td>
<td align="center" valign="top">0.97</td>
<td align="center" valign="top">0.192</td>
<td align="center" valign="top">&#x02212;0.192</td>
<td align="center" valign="top">&#x02212;0.039</td>
<td align="center" valign="top">&#x02212;0.124</td>
</tr>
<tr>
<td align="left" valign="top" colspan="7"><bold>Predictor variable</bold></td>
</tr>
<tr>
<td align="left" valign="top">1. SP: presenter&#x02019;s reaction to virtual agents</td>
<td align="center" valign="top">2.32</td>
<td align="center" valign="top">1.17</td>
<td align="center" valign="top">&#x02013;</td>
<td align="center" valign="top">0.568<xref ref-type="table-fn" rid="tfn5">&#x0002A;&#x0002A;&#x0002A;</xref></td>
<td align="center" valign="top">0.306<xref ref-type="table-fn" rid="tfn3">&#x0002A;</xref></td>
<td align="center" valign="top">0.307<xref ref-type="table-fn" rid="tfn3">&#x0002A;</xref></td>
</tr>
<tr>
<td align="left" valign="top">2. SP: perceived virtual agents&#x02019; reaction</td>
<td align="center" valign="top">2.46</td>
<td align="center" valign="top">1.01</td>
<td align="center" valign="top"/>
<td align="center" valign="top">&#x02013;</td>
<td align="center" valign="top">0.322<xref ref-type="table-fn" rid="tfn3">&#x0002A;</xref></td>
<td align="center" valign="top">0.416<xref ref-type="table-fn" rid="tfn4">&#x0002A;&#x0002A;</xref></td>
</tr>
<tr>
<td align="left" valign="top">3. Fear of public speaking (PRCS scale)</td>
<td align="center" valign="top">4.89</td>
<td align="center" valign="top">3.24</td>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top">&#x02013;</td>
<td align="center" valign="top">0.829<xref ref-type="table-fn" rid="tfn5">&#x0002A;&#x0002A;&#x0002A;</xref></td>
</tr>
<tr>
<td align="left" valign="top">4. Ability to concentrate</td>
<td align="center" valign="top">2.32</td>
<td align="center" valign="top">1.05</td>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top"/>
<td align="center" valign="top">&#x02013;</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>N&#x02009;&#x0003D;&#x02009;35</italic>.</p>
<p><italic>SP, social presence</italic>.</p>
<fn id="tfn3"><p><italic>&#x0002A;p&#x02009;&#x0003C;&#x02009;0.05</italic>.</p></fn>
<fn id="tfn4"><p><italic>&#x0002A;&#x0002A;p&#x02009;&#x0003C;&#x02009;0.01</italic>.</p></fn>
<fn id="tfn5"><p><italic>&#x0002A;&#x0002A;&#x0002A;p&#x02009;&#x0003C;&#x02009;0.001</italic>.</p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p><bold>Regression analysis summary for social presence (SP), fear of public speaking, and ability to concentrate predicting residual difference of speech performance between task difficulty conditions</bold>.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left">Variable</th>
<th valign="top" align="center"><italic>B</italic></th>
<th valign="top" align="center"><italic>SE B</italic></th>
<th valign="top" align="center">&#x003B2;</th>
<th valign="top" align="center"><italic>t</italic></th>
<th valign="top" align="center"><italic>p</italic></th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="top">Constant</td>
<td align="center" valign="top">0.36</td>
<td align="center" valign="top">0.49</td>
<td align="center" valign="top"/>
<td align="center" valign="top">0.74</td>
<td align="center" valign="top">0.463</td>
</tr>
<tr>
<td align="left" valign="top">1. SP: presenter&#x02019;s reaction to virtual agents</td>
<td align="center" valign="top">0.37</td>
<td align="center" valign="top">0.17</td>
<td align="center" valign="top">0.45</td>
<td align="center" valign="top">2.18</td>
<td align="center" valign="top">0.037</td>
</tr>
<tr>
<td align="left" valign="top">2. SP: perceived virtual agents&#x02019; reaction</td>
<td align="center" valign="top">&#x02212;0.39</td>
<td align="center" valign="top">0.20</td>
<td align="center" valign="top">&#x02212;0.40</td>
<td align="center" valign="top">&#x02212;1.90</td>
<td align="center" valign="top">0.067</td>
</tr>
<tr>
<td align="left" valign="top">3. Fear of public speaking (PRCS scale)</td>
<td align="center" valign="top">0.03</td>
<td align="center" valign="top">0.09</td>
<td align="center" valign="top">0.10</td>
<td align="center" valign="top">0.33</td>
<td align="center" valign="top">0.741</td>
</tr>
<tr>
<td align="left" valign="top">4. Ability to concentrate</td>
<td align="center" valign="top">&#x02212;0.16</td>
<td align="center" valign="top">0.28</td>
<td align="center" valign="top">0.18</td>
<td align="center" valign="top">&#x02212;0.57</td>
<td align="center" valign="top">0.576</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>A non-significant regression equation was found [<italic>F</italic>(4, 30)&#x02009;&#x0003D;&#x02009;1.65, <italic>p</italic>&#x02009;&#x0003D;&#x02009;0.187], with an <italic>R<sup>2</sup></italic> of 0.181, which represents a medium effect. However, test power was too low (1&#x02009;&#x02212;&#x02009;&#x003B2;&#x02009;&#x0003D;&#x02009;0.51) due to the small sample size. Still, the presenter&#x02019;s reaction to virtual agents was a significant predictor of the residual difference in speech performance between task difficulty conditions. This analysis shows a similar pattern to the ANCOVA results: only the presenter&#x02019;s reaction to virtual agents, as an SP dimension, appears to explain variance in speech performance after controlling for learning effects due to prior performance in the easy task. The other covariates did not contribute significantly to the regression equation. However, this result can only be interpreted as a tendency because the overall regression model was non-significant.</p>
</sec>
<sec id="S4" sec-type="discussion">
<title>Discussion</title>
<p>Although VR training applications in general, and VR public speaking applications in particular, have proven successful, comprehensive studies analyzing which determinants contribute to this success are still scarce. This study introduces the QUEST-VR framework, which can be used as a heuristic tool to evaluate interactive VR setups. The framework includes <italic>system</italic> and <italic>user characteristics</italic> as well as the <italic>system-user interaction</italic> and <italic>moderating factors</italic> as determinants of a VR application&#x02019;s <italic>quality</italic>. A first partial validation of the framework was implemented by evaluating the quality of a VR public speaking training application in a within-subject laboratory study. The influence of task difficulty (system factor), ability to concentrate (user state), state fear of public speaking, and SP (moderating factors) on public speaking performance in VR was analyzed.</p>
<p>Concerning Hypothesis 1, intercorrelations of ability to concentrate, fear of public speaking, and SP with speech-giving performance in VR were examined. They revealed complex patterns: in line with the current state of research, ability to concentrate showed positive, medium-sized correlations with speech-giving performance in both task difficulty conditions.</p>
<p>However, task difficulty showed different patterns for the other covariates. Fear of public speaking correlated negatively with performance, with a medium effect, for the easy task, but showed only a negative tendency for the hard task. This can be explained by a sequence effect, as the hard task followed the easy task and fear decreases over time (Wiederhold and Wiederhold, <xref ref-type="bibr" rid="B59">2005a</xref>). Controlling for a sequence effect was not feasible, as practice effects would have made the easy task even easier had it followed the hard task.</p>
<p>Social presence dimensions also revealed interesting patterns. Impression of interaction possibilities and co-presence as SP dimensions showed very small, non-significant effects. Participants were probably fully aware that the audience did not consist of real people and that the VR application was not interactive (for example, no questions and answers were implemented). Still, SP dimensions reflecting the reactions of the presenters toward the audience and <italic>vice versa</italic> appear to be relevant: for both the easy and the hard task, subjects seemed to attend to how the audience reacted toward them, with medium effect sizes. This can be explained by the fact that anticipated human evaluation is an integral part of public speaking tasks (Ling et al., <xref ref-type="bibr" rid="B23">2014</xref>). For the low difficulty condition, the presenters&#x02019; reaction to the audience correlated with speech performance, again with a medium effect. Participants had the opportunity to prepare the speech and might, therefore, have had enough cognitive resources left to notice their own reactions. In general, SP dimensions showed negative correlations with speech performance, which is common with unfamiliar tasks (Nash et al., <xref ref-type="bibr" rid="B29">2000</xref>). Using a new VR training environment probably presented a novelty for the participants.</p>
<p>Therefore, the findings supported Hypothesis 1 only for ability to concentrate in VR and the SP dimension that covers how presenters experience the audience&#x02019;s reaction toward them. The complex pattern of correlations hints at underlying interrelations that are just as complex and highlights the need for comprehensive approaches when evaluating VR applications. However, as the tasks were administered directly after one another without a break, ability to concentrate, fear of public speaking, and SP were measured with regard to the whole experience. Measuring the covariates after each task (directly related to the task difficulty condition) could show different effects. The generalizability of the effects is therefore clearly limited, and the findings should be replicated with a different research design.</p>
<p>In the next step, the effect of task difficulty on speech-giving performance in VR was examined. Unsurprisingly, high task difficulty led to lower performance than low task difficulty, with a large effect. However, as the sequence of tasks was not randomized, a learning effect between the tasks cannot be ruled out. Such an effect would lead to better speech performance in the second (high difficulty) condition; the true effect of task difficulty could therefore be even larger. Due to the study design, a final statement concerning Hypothesis 2 cannot be given. However, including ability to concentrate, fear of public speaking, and the SP dimensions that correlated with performance as covariates in the analysis considerably reduced the effect of task difficulty, to a small to medium effect size. The presenter&#x02019;s reaction to the virtual agents in the application (as an SP dimension) seems to have made a significant contribution, showing a medium effect size; its explained variance overlaps with that of task difficulty on performance. The SP factor in question covers speakers&#x02019; feeling that the audience influences their mood and style of presentation and even distracts them (Poeschl and Doering, <xref ref-type="bibr" rid="B38">2015</xref>). These reactions could have blocked cognitive resources that would otherwise have been allocated to the task and led to higher performance.</p>
<p>Still, due to possible sequence effects and the complex intercorrelations of covariates with speech performance, Hypothesis 3 cannot be confirmed on the basis of this study. In order to gain more insight into the interrelations, further exploratory analyses were conducted that took possible learning effects between the task difficulty conditions into account. A new dependent variable was calculated from the standardized residuals obtained by linearly regressing speech performance in the high difficulty condition on speech performance in the low difficulty condition. This variable represented the difference in speech performance between the task conditions that is independent of performance in the easy task. A linear multiple regression analysis was then conducted to predict the residual performance difference from the covariates included in the ANCOVA. Although the presenter&#x02019;s reaction to virtual agents (SP dimension) was a significant predictor (showing a similar effect as obtained in the ANCOVA), the regression model as a whole was non-significant. However, the variance explained by the regression model represented a medium effect. Test power was too low, presumably because of the small sample size. Therefore, the impact of this specific SP dimension on speech performance can only be interpreted as a tendency.</p>
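The residualization step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code; the variable names and the performance scores are hypothetical, and only the general technique (OLS residuals of hard-task performance regressed on easy-task performance, then z-standardized) follows the text.

```python
# Sketch of the residual-difference analysis described in the text.
# All data and names here are hypothetical, for illustration only.
import numpy as np

def standardized_residuals(x, y):
    """Regress y on x (simple OLS) and return z-standardized residuals."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)   # fit y = slope * x + intercept
    resid = y - (slope * x + intercept)      # part of y not explained by x
    return (resid - resid.mean()) / resid.std(ddof=1)

# Hypothetical speech performance scores per participant
easy = np.array([3.1, 4.2, 2.8, 3.9, 4.5, 3.3])  # low difficulty condition
hard = np.array([2.5, 3.8, 2.2, 3.1, 4.0, 2.6])  # high difficulty condition

# Residuals capture the performance difference between conditions that is
# independent of easy-task performance; they would then serve as the
# criterion in a multiple regression on the covariates (SP, fear, etc.).
res = standardized_residuals(easy, hard)
```

By construction the residuals of an OLS fit with intercept have mean zero, so the standardized scores are centered at 0 with sample standard deviation 1.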
<p>The hypotheses in this study were only partly supported. The interplay of determinants shown in the results should be explored more thoroughly. In particular, the effects of the covariates on performance and their respective interrelations should be analyzed in future studies with more rigorous designs. This will help to build a better understanding of the causal relationships determining an application&#x02019;s quality and thereby its success.</p>
<p>This study has several limitations. First, the sample consisted of undergraduate students with a majority of women. Although the VR training application targets students, other target groups (e.g., lecturers, politicians, and business people) would also be possible. However, the findings cannot be generalized toward these groups or even other use cases like job interviews.</p>
<p>Second, several design aspects should be improved in future studies. The covariates were measured with regard to the whole experience of delivering speeches in VR rather than separately for each task condition, as the tasks were completed in direct succession. This had the benefit of reducing strain on participants by keeping the duration of the study short. However, measuring the covariates explicitly for each condition could lead to different and more reliable results. Also, sequence effects of the task conditions could not be controlled: completing the hard task before the easy task could have induced practice effects and thereby further lowered the difficulty of the easy task. Future studies should therefore use designs that disentangle task difficulty and sequence. On the positive side, the dependent variable speech performance was measured by means of a behavior observation system, so the study combined objective and subjective measures.</p>
<p>Third, the application is still a prototype. It included only male virtual agents and a very limited repertoire of displayed non-verbal behaviors. Further, no real interaction between presenters and the audience (such as questions and answers) and no head tracking were implemented. The realism of the scenario was therefore clearly limited. This could lead to lower experienced presence and higher performance scores if participants did not take the audience seriously. A more realistic visualization (e.g., based on video data of audience members) might reveal different and perhaps larger effects. Additionally, the prototype did not include a self-avatar. Recent research shows that including self-avatars in VR could be very beneficial for public speaking training systems. First, gesturing seems to lighten cognitive load in explanation tasks (Goldin-Meadow et al., <xref ref-type="bibr" rid="B15">2001</xref>), which are very similar to public speaking. Offloading mental effort through gestures during speaking tasks therefore has an impact on task difficulty. A recent study showed that implementing an active self-avatar and allowing gestures in VR during a recall task significantly increased performance compared to no self-avatar and no gestures (Steed et al., <xref ref-type="bibr" rid="B51">2016</xref>). Second, people with communication anxiety might prefer to &#x0201C;become someone else&#x0201D; in a public speaking situation. Aymerich-Franch et al. (<xref ref-type="bibr" rid="B2">2014</xref>) showed, on the one hand, that social anxiety correlated significantly with a preference for embodying a dissimilar avatar in VR; on the other hand, participants with an assigned self-avatar experienced more self-presence and higher levels of anxiety. In order to reduce anxiety, which can represent an important inhibition threshold for training sessions in VR, clients could be offered a choice of avatars for initial training, including dissimilar avatars. In further sessions, the avatar could be gradually adapted toward a realistic self-avatar with models based on photographs of the participants&#x02019; faces (Aymerich-Franch et al., <xref ref-type="bibr" rid="B2">2014</xref>). In this way, self-presence could be gradually increased, matched to the clients&#x02019; training progress and their decrease in anxiety, until conditions similar to real public speaking scenarios are reached. This procedure would be comparable to the confrontation with increasingly frightening stimuli in conventional CBT.</p>
<p>Lastly, only a small selection of determinants could be considered in the present study for reasons of research economy. For example, different technological setups (head-mounted display, desktop, and projection screens) were not compared. It would be interesting to learn whether they would yield the same effects and whether a specific setup would prove most efficient. Further, other user factors, such as prior experience with VR or motivation to interact with VR, could have an impact on performance but could not be considered in this study. Last but not least, only a single training session (containing two tasks) was conducted. Training programs usually consist of several sessions, and task difficulty can be adapted to the trainee&#x02019;s progress. Evaluating the application&#x02019;s quality across a whole training program still needs to be done.</p>
<p>However, using the QUEST-VR framework as a tool to derive variables for the evaluation of a VR public speaking training application proved feasible and fruitful. The comprehensive analysis of system features, user factors, and moderating variables on speech performance revealed interesting and complex patterns of findings that can serve as a basis for future studies. Still, the feasibility of the framework as a heuristic tool for evaluation should also be tested for VR applications in different contexts, for example, other phobias. Comprehensive studies will not only increase the understanding of human-computer interaction in VR but can also help to improve an application&#x02019;s quality and successful implementation.</p>
</sec>
<sec id="S5">
<title>Ethics Statement</title>
<p>As the university at which the study was conducted does not have an ethics board, the study was designed, implemented, and conducted according to the guidelines of the APA research ethics committee. Upon arriving at the lab, participants were briefed orally and signed a consent form that included all of the following points in writing: participants were informed that participation was voluntary and that all data were collected confidentially and processed anonymously. They were informed that they could withdraw their consent at any point in time without negative consequences. They were further informed that they would be video-recorded during the experiment, but that all data would be erased at their request. No vulnerable populations were involved.</p>
</sec>
<sec id="S6" sec-type="author-contributor">
<title>Author Contributions</title>
<p>SP: the author&#x02019;s contributions to the paper are developing the theoretical background and state of research for the study and the paper, study design, data analysis, and interpretation for the study as well as writing and revising the manuscript. The QUEST-VR framework was developed in collaboration with Doug A. Bowman, Virginia Polytechnic Institute and State University. A publication of the framework in itself is in preparation (see text footnote 1). Doug A. Bowman has agreed that the author uses and presents the framework in this paper.</p>
</sec>
<sec id="S7">
<title>Conflict of Interest Statement</title>
<p>The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>The author would like to thank Annika Mann and Darja Schuetz for their help with the preparation of the experiment and the data collection. I acknowledge support for the Article Processing Charge by the German Research Foundation and the Open Access Publication Fund of the Technische Universit&#x000E4;t Ilmenau.</p>
</ack>
<fn-group>
<fn fn-type="financial-disclosure">
<p><bold>Funding.</bold> The research described in the paper was not funded.</p></fn>
</fn-group>
<sec id="S9" sec-type="supplementary-material">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at <uri xlink:href="http://journal.frontiersin.org/article/10.3389/fict.2017.00013/full&#x00023;supplementary-material">http://journal.frontiersin.org/article/10.3389/fict.2017.00013/full&#x00023;supplementary-material</uri>.</p>
<supplementary-material xlink:href="video_1.mp4" id="SM1" mimetype="application/mp4" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Video</label>
<caption><p><bold>Visualization of the training application</bold>.</p></caption>
</supplementary-material>
<supplementary-material xlink:href="data_sheet_1.docx" id="SM2" mimetype="application/docx" xmlns:xlink="http://www.w3.org/1999/xlink">
<label>Data Sheet</label>
<caption><p><bold>Material handed out to participants</bold>.</p></caption>
</supplementary-material>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Anton</surname> <given-names>R.</given-names></name> <name><surname>Opris</surname> <given-names>D.</given-names></name> <name><surname>Dobrean</surname> <given-names>A.</given-names></name> <name><surname>David</surname> <given-names>D.</given-names></name> <name><surname>Rizzo</surname> <given-names>A.</given-names></name></person-group> (<year>2009</year>). <article-title>&#x0201C;Virtual reality in rehabilitation of attention deficit/hyperactivity disorder. The instrument construction principles,&#x0201D;</article-title> in <conf-name>Proceedings of the 2009 Virtual Rehabilitation International Conference</conf-name>, <conf-loc>Haifa</conf-loc>.</citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aymerich-Franch</surname> <given-names>L.</given-names></name> <name><surname>Kizilcec</surname> <given-names>R. F.</given-names></name> <name><surname>Bailenson</surname> <given-names>J. N.</given-names></name></person-group> (<year>2014</year>). <article-title>The relationship between virtual self similarity and social anxiety</article-title>. <source>Front. Hum. Neurosci.</source> <volume>8</volume>:<fpage>944</fpage>.<pub-id pub-id-type="doi">10.3389/fnhum.2014.00944</pub-id><pub-id pub-id-type="pmid">25477810</pub-id></citation></ref>
<ref id="B3"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Biocca</surname> <given-names>F. A.</given-names></name> <name><surname>Harms</surname> <given-names>C.</given-names></name> <name><surname>Gregg</surname> <given-names>J.</given-names></name></person-group> (<year>2001</year>). <article-title>&#x0201C;The networked minds measure of social presence: pilot test of the factor structure and concurrent validity,&#x0201D;</article-title> in <conf-name>Proceedings of the 4th Annual International Workshop on Presence</conf-name>, (<conf-loc>Valencia</conf-loc>: <conf-sponsor>International Society for Presence Research</conf-sponsor>).</citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bowman</surname> <given-names>D. A.</given-names></name> <name><surname>Coquillart</surname> <given-names>S.</given-names></name> <name><surname>Fr&#x000F6;hlich</surname> <given-names>B.</given-names></name> <name><surname>Hirose</surname> <given-names>M.</given-names></name> <name><surname>Kitamura</surname> <given-names>Y.</given-names></name> <name><surname>Kiyokawa</surname> <given-names>K.</given-names></name> <etal/></person-group> (<year>2008</year>). <article-title>3D user interfaces: new directions and perspectives</article-title>. <source>IEEE Comput. Graph. Appl.</source> <volume>28</volume>, <fpage>20</fpage>&#x02013;<lpage>36</lpage>.<pub-id pub-id-type="doi">10.1109/MCG.2008.109</pub-id><pub-id pub-id-type="pmid">19004682</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bowman</surname> <given-names>D. A.</given-names></name> <name><surname>Hodges</surname> <given-names>L. F.</given-names></name></person-group> (<year>1999</year>). <article-title>Formalizing the design, evaluation, and application of interaction techniques for immersive virtual environments</article-title>. <source>J. Vis. Lang. Comput.</source> <volume>10</volume>, <fpage>37</fpage>&#x02013;<lpage>53</lpage>.<pub-id pub-id-type="doi">10.1006/jvlc.1998.0111</pub-id></citation></ref>
<ref id="B6"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Bowman</surname> <given-names>D. A.</given-names></name> <name><surname>Kruijff</surname> <given-names>E.</given-names></name> <name><surname>LaViola</surname> <given-names>J. J.</given-names></name> <name><surname>Poupyrev</surname> <given-names>I.</given-names></name></person-group> (<year>2005</year>). <source>3D User Interfaces: Theory and Practice</source>. <publisher-loc>Boston</publisher-loc>: <publisher-name>Addison-Wesley Professional</publisher-name>.</citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bowman</surname> <given-names>D. A.</given-names></name> <name><surname>McMahan</surname> <given-names>R. P.</given-names></name></person-group> (<year>2007</year>). <article-title>Virtual reality: how much immersion is enough?</article-title> <source>Computer</source> <volume>40</volume>, <fpage>36</fpage>&#x02013;<lpage>43</lpage>.<pub-id pub-id-type="doi">10.1109/MC.2007.257</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chapanis</surname> <given-names>A.</given-names></name></person-group> (<year>1991</year>). <article-title>To communicate the human factors message, you have to know what the message is and how to communicate it</article-title>. <source>Hum. Factors Soc. Bull.</source> <volume>34</volume>, <fpage>1</fpage>&#x02013;<lpage>4</lpage>.</citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chaplin</surname> <given-names>W. F.</given-names></name> <name><surname>John</surname> <given-names>O. P.</given-names></name> <name><surname>Goldberg</surname> <given-names>L. R.</given-names></name></person-group> (<year>1988</year>). <article-title>Conceptions of states and traits: dimensional attributes with ideals as prototypes</article-title>. <source>J. Pers. Soc. Psychol.</source> <volume>54</volume>, <fpage>541</fpage>&#x02013;<lpage>557</lpage>.<pub-id pub-id-type="doi">10.1037/0022-3514.54.4.541</pub-id><pub-id pub-id-type="pmid">3367279</pub-id></citation></ref>
<ref id="B10"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Cho</surname> <given-names>B. H.</given-names></name> <name><surname>Lee</surname> <given-names>J. M.</given-names></name> <name><surname>Ku</surname> <given-names>J. H.</given-names></name> <name><surname>Jang</surname> <given-names>D. P.</given-names></name> <name><surname>Kim</surname> <given-names>J. S.</given-names></name> <name><surname>Kim</surname> <given-names>I. Y.</given-names></name> <etal/></person-group> (<year>2002</year>). <article-title>&#x0201C;Attention enhancement system using virtual reality and EEG biofeedback,&#x0201D;</article-title> in <conf-name>Proceedings of the 2002 Virtual Reality Conference</conf-name>, (<conf-loc>Orlando, FL</conf-loc>).</citation></ref>
<ref id="B11"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Chollet</surname> <given-names>M.</given-names></name> <name><surname>W&#x000F6;rtwein</surname> <given-names>T.</given-names></name> <name><surname>Morency</surname> <given-names>L.-P.</given-names></name> <name><surname>Shapiro</surname> <given-names>A.</given-names></name> <name><surname>Scherer</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Exploring feedback strategies to improve public speaking,&#x0201D;</article-title> in <conf-name>The 2015 ACM International Joint Conference</conf-name>, eds <person-group person-group-type="editor"><name><surname>Mase</surname> <given-names>K.</given-names></name> <name><surname>Langheinrich</surname> <given-names>M.</given-names></name> <name><surname>Gatica-Perez</surname> <given-names>D.</given-names></name> <name><surname>Gellersen</surname> <given-names>H.</given-names></name> <name><surname>Choudhury</surname> <given-names>T.</given-names></name> <name><surname>Yatani</surname> <given-names>K.</given-names></name></person-group>, (<conf-loc>Osaka</conf-loc>), <fpage>1143</fpage>&#x02013;<lpage>1154</lpage>.</citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Draper</surname> <given-names>V. D.</given-names></name> <name><surname>Kaber</surname> <given-names>D. B.</given-names></name> <name><surname>Usher</surname> <given-names>J. M.</given-names></name></person-group> (<year>1998</year>). <article-title>Telepresence</article-title>. <source>Hum. Factors</source> <volume>40</volume>, <fpage>354</fpage>&#x02013;<lpage>375</lpage>.<pub-id pub-id-type="doi">10.1518/001872098779591386</pub-id><pub-id pub-id-type="pmid">9849099</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Endler</surname> <given-names>N. S.</given-names></name> <name><surname>Kocovski</surname> <given-names>N. L.</given-names></name></person-group> (<year>2001</year>). <article-title>State and trait anxiety revisited</article-title>. <source>J. Anxiety Disord.</source> <volume>15</volume>, <fpage>231</fpage>&#x02013;<lpage>245</lpage>.<pub-id pub-id-type="doi">10.1016/S0887-6185(01)00060-3</pub-id><pub-id pub-id-type="pmid">11442141</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Felnhofer</surname> <given-names>A.</given-names></name> <name><surname>Kothgassner</surname> <given-names>O. D.</given-names></name> <name><surname>Hetterle</surname> <given-names>T.</given-names></name> <name><surname>Beutl</surname> <given-names>L.</given-names></name> <name><surname>Hlavacs</surname> <given-names>H.</given-names></name> <name><surname>Kryspin-Exner</surname> <given-names>I.</given-names></name></person-group> (<year>2014</year>). <article-title>Afraid to be there? Evaluating the relation between presence, self-reported anxiety, and heart rate in a virtual public speaking task: cyberpsychology, behavior, and social networking</article-title>. <source>Cyberpsychol. Behav. Soc. Netw.</source> <volume>17</volume>, <fpage>310</fpage>&#x02013;<lpage>316</lpage>.<pub-id pub-id-type="doi">10.1089/cyber.2013.0472</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goldin-Meadow</surname> <given-names>S.</given-names></name> <name><surname>Nusbaum</surname> <given-names>H.</given-names></name> <name><surname>Kelly</surname> <given-names>S. D.</given-names></name> <name><surname>Wagner</surname> <given-names>S.</given-names></name></person-group> (<year>2001</year>). <article-title>Explaining math: gesturing lightens the load</article-title>. <source>Psychol. Sci.</source> <volume>12</volume>, <fpage>516</fpage>&#x02013;<lpage>522</lpage>.<pub-id pub-id-type="doi">10.1111/1467-9280.00395</pub-id><pub-id pub-id-type="pmid">11760141</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hook</surname> <given-names>J. N.</given-names></name> <name><surname>Smith</surname> <given-names>C. A.</given-names></name> <name><surname>Valentiner</surname> <given-names>D. P.</given-names></name></person-group> (<year>2008</year>). <article-title>A short-form of the personal report of confidence as a speaker</article-title>. <source>Pers. Individ. Dif.</source> <volume>44</volume>, <fpage>1306</fpage>&#x02013;<lpage>1313</lpage>.<pub-id pub-id-type="doi">10.1016/j.paid.2007.11.021</pub-id></citation></ref>
<ref id="B17"><citation citation-type="book"><collab>International Organization for Standardization</collab>. (<year>1998</year>). <source>Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs) &#x02013; Part 11: Guidance on Usability 13.180; 35.180. ISO 9241-11:1998</source>. <publisher-name>International Organization for Standardization</publisher-name>. Available at: <uri xlink:href="http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber&#x0003D;16883">http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber&#x0003D;16883</uri></citation></ref>
<ref id="B18"><citation citation-type="book"><collab>International Organization for Standardization</collab>. (<year>2015</year>). <source>Quality Management Systems &#x02013; Fundamentals and Vocabulary 9000:2015. EN ISO 9000:2015-11</source>. <publisher-loc>Berlin</publisher-loc>: <publisher-name>Beuth Verlag</publisher-name>.</citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kothgassner</surname> <given-names>O. D.</given-names></name> <name><surname>Felnhofer</surname> <given-names>A.</given-names></name> <name><surname>Beutl</surname> <given-names>L.</given-names></name> <name><surname>Hlavacs</surname> <given-names>H.</given-names></name> <name><surname>Lehenbauer</surname> <given-names>M.</given-names></name> <name><surname>Stetina</surname> <given-names>B.</given-names></name></person-group> (<year>2012</year>). <article-title>A virtual training tool for giving talks</article-title>. <source>Lect. Notes Comput. Sci.</source> <volume>7522</volume>, <fpage>53</fpage>&#x02013;<lpage>66</lpage>.<pub-id pub-id-type="doi">10.1007/978-3-642-33542-6_5</pub-id></citation></ref>
<ref id="B20"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Larsen</surname> <given-names>R. J.</given-names></name> <name><surname>Buss</surname> <given-names>D. M.</given-names></name></person-group> (<year>2013</year>). <source>Personality Psychology: Domains of Knowledge about Human Nature</source>. <publisher-loc>Maidenhead</publisher-loc>: <publisher-name>McGraw Hill</publisher-name>.</citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>C.</given-names></name> <name><surname>Rincon</surname> <given-names>G. A.</given-names></name> <name><surname>Meyer</surname> <given-names>G.</given-names></name> <name><surname>Hoellerer</surname> <given-names>T.</given-names></name> <name><surname>Bowman</surname> <given-names>D. A.</given-names></name></person-group> (<year>2013</year>). <article-title>The effects of visual realism on search tasks in mixed reality simulation</article-title>. <source>IEEE Trans. Vis. Comput. Graph</source> <volume>19</volume>, <fpage>547</fpage>&#x02013;<lpage>556</lpage>.<pub-id pub-id-type="doi">10.1109/tvcg.2013.41</pub-id><pub-id pub-id-type="pmid">23428438</pub-id></citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>J. M.</given-names></name> <name><surname>Ku</surname> <given-names>J.</given-names></name> <name><surname>Jang</surname> <given-names>D. P.</given-names></name> <name><surname>Kim</surname> <given-names>D. H.</given-names></name> <name><surname>Choi</surname> <given-names>Y. H.</given-names></name> <name><surname>Kim</surname> <given-names>I. Y.</given-names></name> <etal/></person-group> (<year>2002</year>). <article-title>Virtual reality system for treatment of the fear of public speaking using image-based rendering and moving pictures</article-title>. <source>Cyberpsychol. Behav.</source> <volume>5</volume>, <fpage>191</fpage>&#x02013;<lpage>195</lpage>.<pub-id pub-id-type="doi">10.1089/109493102760147169</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ling</surname> <given-names>Y.</given-names></name> <name><surname>Nefs</surname> <given-names>H. T.</given-names></name> <name><surname>Morina</surname> <given-names>N.</given-names></name> <name><surname>Heynderickx</surname> <given-names>I.</given-names></name> <name><surname>Brinkman</surname> <given-names>W.-P.</given-names></name></person-group> (<year>2014</year>). <article-title>A meta-analysis on the relationship between self-reported presence and anxiety in virtual reality exposure therapy for anxiety disorders</article-title>. <source>PLoS ONE</source> <volume>9</volume>:<fpage>e96144</fpage>.<pub-id pub-id-type="doi">10.1371/journal.pone.0096144</pub-id><pub-id pub-id-type="pmid">24801324</pub-id></citation></ref>
<ref id="B24"><citation citation-type="web"><person-group person-group-type="author"><name><surname>Lucas</surname> <given-names>S. E.</given-names></name></person-group> (<year>2016</year>). <source>Speech Evaluation Form</source>. Available at: <uri xlink:href="http://highered.mheducation.com/sites/007313564x/student_view0/speech_evaluation_forms.html">http://highered.mheducation.com/sites/007313564x/student_view0/speech_evaluation_forms.html</uri></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Macedonio</surname> <given-names>M. F.</given-names></name> <name><surname>Parsons</surname> <given-names>T. D.</given-names></name> <name><surname>DiGiuseppe</surname> <given-names>R. A.</given-names></name> <name><surname>Wiederhold</surname> <given-names>B. K.</given-names></name> <name><surname>Rizzo</surname> <given-names>A. A.</given-names></name></person-group> (<year>2007</year>). <article-title>Immersiveness and physiological arousal within panoramic video-based virtual reality</article-title>. <source>Cyberpsychol. Behav.</source> <volume>10</volume>, <fpage>508</fpage>&#x02013;<lpage>515</lpage>.<pub-id pub-id-type="doi">10.1089/cpb.2007.9997</pub-id><pub-id pub-id-type="pmid">17711358</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>McMahan</surname> <given-names>R. P.</given-names></name> <name><surname>Bowman</surname> <given-names>D. A.</given-names></name> <name><surname>Zielinski</surname> <given-names>D. J.</given-names></name> <name><surname>Brady</surname> <given-names>R. B.</given-names></name></person-group> (<year>2012</year>). <article-title>Evaluating display fidelity and interaction fidelity in a virtual reality game</article-title>. <source>IEEE Trans. Vis. Comput. Graph</source> <volume>18</volume>, <fpage>626</fpage>&#x02013;<lpage>633</lpage>.<pub-id pub-id-type="doi">10.1109/tvcg.2012.43</pub-id><pub-id pub-id-type="pmid">22402690</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Menzel</surname> <given-names>K. E.</given-names></name> <name><surname>Carrell</surname> <given-names>L. J.</given-names></name></person-group> (<year>1994</year>). <article-title>The relationship between preparation and performance in public speaking</article-title>. <source>Commun. Educ.</source> <volume>43</volume>, <fpage>17</fpage>&#x02013;<lpage>26</lpage>.<pub-id pub-id-type="doi">10.1080/03634529409378958</pub-id></citation></ref>
<ref id="B28"><citation citation-type="web"><person-group person-group-type="author"><name><surname>Morreale</surname> <given-names>S.</given-names></name> <name><surname>Moore</surname> <given-names>M.</given-names></name> <name><surname>Surges-Tatum</surname> <given-names>D.</given-names></name> <name><surname>Webster</surname> <given-names>L.</given-names></name></person-group> (<year>2007</year>). <source>The Competent Speaker Speech Evaluation Form</source>. Available at: <uri xlink:href="http://www.une.edu/sites/default/files/Public-Speaking2013.pdf">http://www.une.edu/sites/default/files/Public-Speaking2013.pdf</uri></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nash</surname> <given-names>E. B.</given-names></name> <name><surname>Edwards</surname> <given-names>G. W.</given-names></name> <name><surname>Thompson</surname> <given-names>J. A.</given-names></name> <name><surname>Barfield</surname> <given-names>W.</given-names></name></person-group> (<year>2000</year>). <article-title>A review of presence and performance in virtual environments</article-title>. <source>Int. J. Hum. Comput. Interact.</source> <volume>12</volume>, <fpage>1</fpage>&#x02013;<lpage>41</lpage>.<pub-id pub-id-type="doi">10.1207/S15327590IJHC1201_1</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Negu&#x00163;</surname> <given-names>A.</given-names></name> <name><surname>Matu</surname> <given-names>S.-A.</given-names></name> <name><surname>Sava</surname> <given-names>F. A.</given-names></name> <name><surname>David</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>Task difficulty of virtual reality-based assessment tools compared to classical paper-and-pencil or computerized measures: a meta-analytic approach</article-title>. <source>Comput. Human Behav.</source> <volume>54</volume>, <fpage>414</fpage>&#x02013;<lpage>424</lpage>.<pub-id pub-id-type="doi">10.1016/j.chb.2015.08.029</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nijholt</surname> <given-names>A.</given-names></name></person-group> (<year>2014</year>). <article-title>Breaking fresh ground in human-media interaction research</article-title>. <source>Front. ICT</source> <volume>1</volume>:<fpage>4</fpage>.<pub-id pub-id-type="doi">10.3389/fict.2014.00004</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nowak</surname> <given-names>K. L.</given-names></name> <name><surname>Biocca</surname> <given-names>F. A.</given-names></name></person-group> (<year>2003</year>). <article-title>The effect of the agency and anthropomorphism on users&#x02019; sense of telepresence, copresence, and social presence in virtual environments</article-title>. <source>Presence</source> <volume>12</volume>, <fpage>481</fpage>&#x02013;<lpage>494</lpage>.<pub-id pub-id-type="doi">10.1162/105474603322761289</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Opris</surname> <given-names>D.</given-names></name> <name><surname>Pintea</surname> <given-names>S.</given-names></name> <name><surname>Garcia-Palacios</surname> <given-names>A.</given-names></name> <name><surname>Botella</surname> <given-names>C.</given-names></name> <name><surname>Szamoskozi</surname> <given-names>S.</given-names></name> <name><surname>David</surname> <given-names>D.</given-names></name></person-group> (<year>2012</year>). <article-title>Virtual reality exposure therapy in anxiety disorders: a quantitative meta-analysis</article-title>. <source>Depress. Anxiety</source> <volume>29</volume>, <fpage>85</fpage>&#x02013;<lpage>93</lpage>.<pub-id pub-id-type="doi">10.1002/da.20910</pub-id><pub-id pub-id-type="pmid">22065564</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Optale</surname> <given-names>G.</given-names></name> <name><surname>Urgesi</surname> <given-names>C.</given-names></name> <name><surname>Busato</surname> <given-names>V.</given-names></name> <name><surname>Marin</surname> <given-names>S.</given-names></name> <name><surname>Piron</surname> <given-names>L.</given-names></name> <name><surname>Priftis</surname> <given-names>K.</given-names></name> <etal/></person-group> (<year>2010</year>). <article-title>Controlling memory impairment in elderly adults using virtual reality memory training: a randomized controlled pilot study</article-title>. <source>Neurorehabil. Neural Repair</source> <volume>24</volume>, <fpage>348</fpage>&#x02013;<lpage>357</lpage>.<pub-id pub-id-type="doi">10.1177/1545968309353328</pub-id><pub-id pub-id-type="pmid">19934445</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Parsons</surname> <given-names>T. D.</given-names></name> <name><surname>Rizzo</surname> <given-names>A. A.</given-names></name></person-group> (<year>2008</year>). <article-title>Affective outcomes of virtual reality exposure therapy for anxiety and specific phobias: a meta-analysis</article-title>. <source>J. Behav. Ther. Exp. Psychiatry</source> <volume>39</volume>, <fpage>250</fpage>&#x02013;<lpage>261</lpage>.<pub-id pub-id-type="doi">10.1016/j.jbtep.2007.07.007</pub-id><pub-id pub-id-type="pmid">17720136</pub-id></citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pertaub</surname> <given-names>D.-P.</given-names></name> <name><surname>Slater</surname> <given-names>M.</given-names></name> <name><surname>Barker</surname> <given-names>C.</given-names></name></person-group> (<year>2002</year>). <article-title>An experiment on public speaking anxiety in response to three different types of virtual audience</article-title>. <source>Presence</source> <volume>11</volume>, <fpage>68</fpage>&#x02013;<lpage>78</lpage>.<pub-id pub-id-type="doi">10.1162/105474602317343668</pub-id></citation></ref>
<ref id="B38"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Poeschl</surname> <given-names>S.</given-names></name> <name><surname>Doering</surname> <given-names>N.</given-names></name></person-group> (<year>2015</year>). <article-title>Measuring co-presence and social presence in virtual environments &#x02013; psychometric construction of a German scale for a fear of public speaking scenario</article-title>. <source>Ann. Rev. Cyberther. Telemed.</source> <volume>13</volume>, <fpage>58</fpage>&#x02013;<lpage>63</lpage>.<pub-id pub-id-type="doi">10.3233/978-1-61499-595-1-58</pub-id><pub-id pub-id-type="pmid">26799880</pub-id></citation></ref>
<ref id="B39"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Powers</surname> <given-names>M. B.</given-names></name> <name><surname>Emmelkamp</surname> <given-names>P. M. G.</given-names></name></person-group> (<year>2008</year>). <article-title>Virtual reality exposure therapy for anxiety disorders: a meta-analysis</article-title>. <source>J. Anxiety Disord.</source> <volume>22</volume>, <fpage>561</fpage>&#x02013;<lpage>569</lpage>.<pub-id pub-id-type="doi">10.1016/j.janxdis.2007.04.006</pub-id><pub-id pub-id-type="pmid">17544252</pub-id></citation></ref>
<ref id="B40"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Reeves</surname> <given-names>B. R.</given-names></name></person-group> (<year>1991</year>). <source>&#x0201C;Being There&#x0201D;: Television as Symbolic Versus Natural Experience</source>. <publisher-loc>Stanford, CA</publisher-loc>: <publisher-name>Stanford University Institute for Communication Research</publisher-name>.</citation></ref>
<ref id="B41"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Riley</surname> <given-names>J. M.</given-names></name></person-group> (<year>2001</year>). <source>The Utility of Measures of Attention and Spatial Awareness for Quantifying Telepresence [Ph.D. Dissertation]</source>. <publisher-loc>Mississippi State, MS</publisher-loc>: <publisher-name>Mississippi State University</publisher-name>.</citation></ref>
<ref id="B42"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Rothwell</surname> <given-names>J. D.</given-names></name></person-group> (<year>2004</year>). <source>In the Company of Others: An Introduction to Communication</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>McGraw Hill</publisher-name>.</citation></ref>
<ref id="B43"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sacau</surname> <given-names>A.</given-names></name> <name><surname>Laarni</surname> <given-names>J.</given-names></name> <name><surname>Hartmann</surname> <given-names>T.</given-names></name></person-group> (<year>2008</year>). <article-title>Influence of individual factors on presence</article-title>. <source>Comput. Human Behav.</source> <volume>24</volume>, <fpage>2255</fpage>&#x02013;<lpage>2273</lpage>.<pub-id pub-id-type="doi">10.1016/j.chb.2007.11.001</pub-id></citation></ref>
<ref id="B44"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schuemie</surname> <given-names>M. J.</given-names></name> <name><surname>van der Straaten</surname> <given-names>P.</given-names></name> <name><surname>Krijn</surname> <given-names>M.</given-names></name> <name><surname>van der Mast</surname> <given-names>C. A.</given-names></name></person-group> (<year>2001</year>). <article-title>Research on presence in virtual reality: a survey</article-title>. <source>Cyberpsychol. Behav.</source> <volume>4</volume>, <fpage>183</fpage>&#x02013;<lpage>201</lpage>.<pub-id pub-id-type="doi">10.1089/109493101300117884</pub-id><pub-id pub-id-type="pmid">11710246</pub-id></citation></ref>
<ref id="B45"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schumann</surname> <given-names>C.</given-names></name> <name><surname>Schultheiss</surname> <given-names>D.</given-names></name></person-group> (<year>2009</year>). <article-title>Power and nerves of steel or thrill of adventure and patience? An empirical study on the use of different video game genres</article-title>. <source>J. Gaming Virtual Worlds</source> <volume>1</volume>, <fpage>39</fpage>&#x02013;<lpage>56</lpage>.<pub-id pub-id-type="doi">10.1386/jgvw.1.1.39_1</pub-id></citation></ref>
<ref id="B46"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sheridan</surname> <given-names>T. B.</given-names></name></person-group> (<year>1992</year>). <article-title>Musings on telepresence and virtual presence</article-title>. <source>Presence</source> <volume>1</volume>, <fpage>120</fpage>&#x02013;<lpage>126</lpage>.<pub-id pub-id-type="doi">10.1162/pres.1992.1.1.120</pub-id></citation></ref>
<ref id="B47"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Slater</surname> <given-names>M.</given-names></name></person-group> (<year>2003</year>). <article-title>A note on presence terminology</article-title>. <source>Presence Connect</source> <volume>3</volume>, <fpage>3</fpage>.</citation></ref>
<ref id="B48"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Slater</surname> <given-names>M.</given-names></name></person-group> (<year>2009</year>). <article-title>Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments</article-title>. <source>Philos. Trans. R. Soc. Lond. B Biol. Sci.</source> <volume>364</volume>, <fpage>3549</fpage>&#x02013;<lpage>3557</lpage>.<pub-id pub-id-type="doi">10.1098/rstb.2009.0138</pub-id><pub-id pub-id-type="pmid">19884149</pub-id></citation></ref>
<ref id="B49"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Slater</surname> <given-names>M.</given-names></name> <name><surname>Pertaub</surname> <given-names>D.-P.</given-names></name> <name><surname>Barker</surname> <given-names>C.</given-names></name> <name><surname>Clark</surname> <given-names>D. M.</given-names></name></person-group> (<year>2006</year>). <article-title>An experimental study on fear of public speaking using a virtual environment</article-title>. <source>Cyberpsychol. Behav.</source> <volume>9</volume>, <fpage>627</fpage>&#x02013;<lpage>633</lpage>.<pub-id pub-id-type="doi">10.1089/cpb.2006.9.627</pub-id><pub-id pub-id-type="pmid">17034333</pub-id></citation></ref>
<ref id="B50"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Slater</surname> <given-names>M.</given-names></name> <name><surname>Steed</surname> <given-names>A.</given-names></name> <name><surname>McCarthy</surname> <given-names>J.</given-names></name> <name><surname>Maringelli</surname> <given-names>F.</given-names></name></person-group> (<year>1998</year>). <article-title>The influence of body movements on presence in virtual environments</article-title>. <source>Hum. Factors</source> <volume>40</volume>, <fpage>469</fpage>&#x02013;<lpage>477</lpage>.<pub-id pub-id-type="doi">10.1518/001872098779591368</pub-id></citation></ref>
<ref id="B51"><citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Steed</surname> <given-names>A.</given-names></name> <name><surname>Pan</surname> <given-names>Y.</given-names></name> <name><surname>Zisch</surname> <given-names>F.</given-names></name> <name><surname>Steptoe</surname> <given-names>W.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;The impact of a self-avatar on cognitive load in immersive virtual reality,&#x0201D;</article-title> in <conf-name>2016 IEEE Virtual Reality (VR)</conf-name>, (<conf-loc>Greenville, SC</conf-loc>), <fpage>67</fpage>&#x02013;<lpage>76</lpage>.</citation></ref>
<ref id="B52"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Stickel</surname> <given-names>C.</given-names></name> <name><surname>Ebner</surname> <given-names>M.</given-names></name> <name><surname>Holzinger</surname> <given-names>A.</given-names></name></person-group> (<year>2010</year>). <article-title>&#x0201C;The XAOS metric &#x02013; understanding visual complexity as measure of usability,&#x0201D;</article-title> in <source>HCI in Work and Learning, Life and Leisure</source>, eds <person-group person-group-type="editor"><name><surname>Leitner</surname> <given-names>G.</given-names></name> <name><surname>Hitz</surname> <given-names>M.</given-names></name> <name><surname>Holzinger</surname> <given-names>A.</given-names></name></person-group> (<publisher-loc>Berlin, Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>278</fpage>&#x02013;<lpage>290</lpage>.</citation></ref>
<ref id="B53"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Stramler</surname> <given-names>J. H.</given-names></name></person-group> (<year>1993</year>). <source>The Dictionary for Human Factors/Ergonomics</source>. <publisher-loc>Boca Raton, FL</publisher-loc>: <publisher-name>CRC Press</publisher-name>.</citation></ref>
<ref id="B54"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sveistrup</surname> <given-names>H.</given-names></name></person-group> (<year>2004</year>). <article-title>Motor rehabilitation using virtual reality</article-title>. <source>J. Neuroeng. Rehabil.</source> <volume>1</volume>, <fpage>1</fpage>&#x02013;<lpage>8</lpage>.<pub-id pub-id-type="doi">10.1186/1743-0003-1-10</pub-id><pub-id pub-id-type="pmid">15679945</pub-id></citation></ref>
<ref id="B55"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wallach</surname> <given-names>H. S.</given-names></name> <name><surname>Safir</surname> <given-names>M. P.</given-names></name> <name><surname>Bar-Zvi</surname> <given-names>M.</given-names></name></person-group> (<year>2009</year>). <article-title>Virtual reality cognitive behavior therapy for public speaking anxiety: a randomized clinical trial</article-title>. <source>Behav. Modif.</source> <volume>33</volume>, <fpage>314</fpage>&#x02013;<lpage>338</lpage>.<pub-id pub-id-type="doi">10.1177/0145445509331926</pub-id><pub-id pub-id-type="pmid">19321811</pub-id></citation></ref>
<ref id="B56"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ward</surname> <given-names>M. F.</given-names></name> <name><surname>Wender</surname> <given-names>P. H.</given-names></name> <name><surname>Reimherr</surname> <given-names>F. W.</given-names></name></person-group> (<year>1993</year>). <article-title>The Wender Utah Rating Scale: an aid in the retrospective diagnosis of childhood attention deficit hyperactivity disorder</article-title>. <source>Am. J. Psychiatry</source> <volume>150</volume>, <fpage>885</fpage>&#x02013;<lpage>890</lpage>.</citation></ref>
<ref id="B57"><citation citation-type="book"><person-group person-group-type="editor"><name><surname>Wiederhold</surname> <given-names>B. K.</given-names></name> <name><surname>Wiederhold</surname> <given-names>M. D.</given-names></name></person-group> (eds) (<year>2005b</year>). <source>Virtual Reality Therapy for Anxiety Disorders: Advances in Evaluation and Treatment</source>. <publisher-loc>Washington, DC</publisher-loc>: <publisher-name>American Psychological Association</publisher-name>.</citation></ref>
<ref id="B58"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wiederhold</surname> <given-names>B. K.</given-names></name> <name><surname>Wiederhold</surname> <given-names>M. D.</given-names></name></person-group> (<year>1998</year>). <article-title>A review of virtual reality as a psychotherapeutic tool</article-title>. <source>Cyberpsychol. Behav.</source> <volume>1</volume>, <fpage>45</fpage>&#x02013;<lpage>52</lpage>.<pub-id pub-id-type="doi">10.1089/cpb.1998.1.45</pub-id></citation></ref>
<ref id="B59"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Wiederhold</surname> <given-names>B. K.</given-names></name> <name><surname>Wiederhold</surname> <given-names>M. D.</given-names></name></person-group> (<year>2005a</year>). <article-title>&#x0201C;Anxiety disorders and their treatment,&#x0201D;</article-title> in <source>Virtual Reality Therapy for Anxiety Disorders: Advances in Evaluation and Treatment</source>, eds <person-group person-group-type="editor"><name><surname>Wiederhold</surname> <given-names>B. K.</given-names></name> <name><surname>Wiederhold</surname> <given-names>M. D.</given-names></name></person-group> (<publisher-loc>Washington, DC</publisher-loc>: <publisher-name>American Psychological Association</publisher-name>), <fpage>31</fpage>&#x02013;<lpage>45</lpage>.</citation></ref>
<ref id="B60"><citation citation-type="book"><person-group person-group-type="author"><name><surname>Youngblut</surname> <given-names>C.</given-names></name></person-group> (<year>2003</year>). <source>Experience of Presence in Virtual Environments</source>. <publisher-loc>Alexandria, VA</publisher-loc>: <publisher-name>Institute for Defense Analyses</publisher-name>. Available at: <uri xlink:href="http://www.dtic.mil/cgi-bin/GetTRDoc?AD&#x0003D;ADA427495">www.dtic.mil/cgi-bin/GetTRDoc?AD&#x0003D;ADA427495</uri></citation></ref>
</ref-list>
<fn-group>
<fn id="fn1"><p><sup>1</sup>The QUEST-VR framework was developed in collaboration with Doug A. Bowman, Virginia Polytechnic Institute and State University. A publication of the framework itself is in preparation [Poeschl, S., Bowman, D. A., and Doering, N. Determining quality for virtual reality application design and evaluation &#x02013; a survey on the role of application areas (in preparation)].</p></fn>
</fn-group>
</back>
</article>