REVIEW article

Front. Virtual Real., 06 August 2021
Sec. Virtual Reality and Human Behaviour
Volume 2 - 2021 | https://doi.org/10.3389/frvir.2021.630731

Affective Visualization in Virtual Reality: An Integrative Review

Andres Pinilla1,2*, Jaime Garcia2, William Raffe2, Jan-Niklas Voigt-Antons1,3, Robert P. Spang1, Sebastian Möller1,3
  • 1Quality and Usability Lab, Institute for Software Technology and Theoretical Computer Science, Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
  • 2UTS Games Studio, Faculty of Engineering and IT, University of Technology Sydney UTS, Sydney, NSW, Australia
  • 3German Research Center for Artificial Intelligence (DFKI), Berlin, Germany

A cluster of research in Affective Computing suggests that it is possible to infer some characteristics of users’ affective states by analyzing their electrophysiological activity in real-time. However, it is not clear how to use the information extracted from electrophysiological signals to create visual representations of the affective states of Virtual Reality (VR) users. Visualization of users’ affective states in VR can lead to biofeedback therapies for mental health care. Understanding how to visualize affective states in VR requires an interdisciplinary approach that integrates psychology, electrophysiology, and audio-visual design. Therefore, this review aims to integrate previous studies from these fields to understand how to develop virtual environments that can automatically create visual representations of users’ affective states. The manuscript addresses this challenge in four sections: First, theories related to emotion and affect are summarized. Second, evidence suggesting that visual and sound cues tend to be associated with affective states are discussed. Third, some of the available methods for assessing affect are described. The fourth and final section contains five practical considerations for the development of virtual reality environments for affect visualization.

Introduction

Virtual Reality (VR) systems offer endless possibilities for the development of interactive experiences. They are used for the development of tools in diverse areas such as rehabilitation therapy (Garcia and Navarro, 2014), exergames (Arndt et al., 2018; Greinacher et al., 2020), and robotics (Burdea et al., 2013). Their potential is particularly promising when combined with technological advances in Affective Computing, which allow users’ affective states to be interpreted as computer commands (Picard et al., 2001; Sitaram et al., 2011; Hernandez et al., 2014; Leslie et al., 2015) and the content of a virtual environment to be adapted accordingly (Bermudez i Badia et al., 2019).

Traditional psychological tasks for the treatment and diagnosis of mental disorders can be replaced by VR systems (Koenig et al., 2011; Belger et al., 2019; Blum J et al., 2019, 2020). These new tools are less time-consuming and provide more realistic environments, hence higher ecological validity. Furthermore, VR might be helpful for at least two types of therapy: exposure therapy and biofeedback therapy. Exposure therapy is commonly used to treat anxiety disorders caused by phobias. Patients are systematically exposed to the stimuli that trigger the phobia in a controlled environment and with a therapist’s guidance. VR is useful for exposure therapy because it allows delivering realistic experiences while providing control over the stimuli. Previous research suggests that exposure therapy in VR might be effective for the treatment of at least three types of phobias: social phobia (Shiban et al., 2015), claustrophobia (Shiban et al., 2016), and spider phobia (Peperkorn et al., 2014).

Biofeedback therapy is used to provide real-time feedback to the patient about their physiological activity while they perform a task (J. Blum et al., 2020). The characteristics of the task depend on the purpose of the therapy. For example, Blandón et al. (2016) developed a biofeedback game for training attention control in children with Attention Deficit Hyperactivity Disorder (ADHD). The player performed tasks in a virtual farm, such as collecting fruits and repairing a pathway. Participants were challenged to increase their concentration, impulsivity control, and sustained attention to do these tasks. Simultaneously, electroencephalography (EEG) signals were processed in real-time to identify EEG activity associated with attentional states. The player obtained a game score whenever an attentive state was detected in the EEG signals.

Similarly, Cavazza et al. (2014) developed an interactive experience to enhance empathy using neurofeedback. The participant interacted with a fictional doctor who was going through a difficult situation. Simultaneously, the participant’s EEG signals were analyzed to estimate the affective response towards the doctor. If the system detected a positive affective response in the player, the storyline would change positively (the doctor would struggle less). It was expected that those changes in the storyline would reinforce the supportive, empathetic behavior of the player.

Patients who lack affective self-regulation could benefit from VR biofeedback therapy to train affective self-control, fostering mood regulation (Desmet, 2015). Li et al. (2016) conducted an experiment where twenty-three participants’ brain activity was analyzed in real-time using functional Magnetic Resonance Imaging (fMRI). They asked participants to evoke a happy or sad memory and provided feedback about their affective state. The feedback was provided with a bar on a screen. The bar’s level increased when the fMRI data indicated that the participant successfully evoked the happy or sad memory. Results suggest that providing visual feedback allowed participants to learn how to modulate their neural activity. However, it is not clear how to implement this finding in a therapeutic application that is accessible to a large population, because 1) fMRI is an expensive technology that is not accessible for most people, 2) participants must remain motionless for long periods during fMRI recordings; otherwise, the data is corrupted, and 3) an ideal therapeutic application should consist of engaging content that motivates users to use the system. These challenges could be addressed by measuring brain activity with a less expensive and more portable method than fMRI, such as electroencephalography (EEG), and by providing the visual feedback through game-like elements.

The development of an affective visualization tool in VR would require at least two components: 1) a set of VR stimuli with affective content whose properties can be adjusted in real-time, and 2) a technique to continuously assess affective states in an online fashion without interrupting the VR experience. This literature review was conducted to understand how to develop those components. Both requirements are addressed across three sections: first, theories related to emotions and affect are presented; second, findings related to visual and sound cues associated with affective responses are analyzed; and third, some of the most common methods for detecting affective states are summarized.

Theoretical Models of Emotion and Affect

The terms emotion and affect are often used interchangeably in the literature, but they are not exactly the same, and there is no general agreement on how to define these concepts. In this manuscript, emotions are defined as mental states that coordinate the operation of cognitive processes. This definition is based on the assumption that the human mind is designed as a computational system that consists of a series of information-processing programs (Putnam, 1967). Emotions are a particular type of program that coordinates the operation of the other programs (Cosmides and Tooby, 1994). Affect is defined as the cognitive representation of the bodily changes that come with emotions (Wundt, 1897; Barrett and Bliss-Moreau, 2009). Neither emotion nor affect can be directly observed or measured. However, the definition of affect is directly associated with physiological states, while the definition of emotion is not. Therefore, it is reasonable to use electrophysiological signals to infer affective states, which in turn might allow some characteristics of emotional states to be inferred.

Emotion Theories

The Ortony, Clore, and Collins (OCC) theory of emotions (Ortony et al., 1988) has been widely used in the field of computer science to model users’ emotional responses (e.g., Conati and Zhou, 2002; Jaques and Vicari, 2007). This theory describes emotions in terms of twenty-two categories and assumes a clear distinction between categories. This approach is compatible with existing emotion recognition algorithms because these are usually based on categorizing emotions (e.g., Harischandra and Perera, 2012; Mavridou et al., 2017). According to the OCC theory (Ortony et al., 1988), the first step in an emotional response is the perception of the situation. The situation is then evaluated (appraisal), and finally, the emotional response emerges. However, this theory does not consider the physiological changes associated with emotions.

Similarly, Robert Plutchik proposed a structural model of emotions (Plutchik, 1982), commonly known as Plutchik’s wheel of emotions. This model consists of eight primary states: ecstasy, adoration, terror, amazement, grief, loathing, rage, and vigilance. According to Plutchik’s theory, any emotion can be described as a combination of a subset of those basic states. In this model, emotions are defined as a sequence of reactions towards a stimulus. This sequence includes a cognitive evaluation of the stimulus (appraisal), feelings (the subjective experience of the emotion), autonomic neural activity, and behavioral responses.

There are at least three other major emotion theories: the James-Lange Theory (Lange and James, 1922), the Cannon-Bard Theory (Cannon, 1927; Bard, 1934), and the Schachter-Singer Theory (Schachter and Singer, 1962). According to Shiota and Kalat (2012), these theories share the assumption that emotional responses have four components but differ in the order in which those components occur during an emotional response. The components are:

• Appraisal: The cognitive, rationalized evaluation of the context where the emotional response is produced.

• Feeling: The subjective, momentary experience of the emotion.

• Physiological change: The bodily changes produced by the emotional response.

• Behavior: The observable conduct that comes with the emotion.

According to Lange and James (1922), the first step in an emotional response is the cognitive evaluation of the situation. Then, physiological changes are produced in the body at the same time as a behavioral response. Lastly, feelings take place.

The Cannon-Bard Theory (Cannon, 1927; Bard, 1934) proposes that all the elements of an emotional response are independent of each other, and there is no particular order in which they occur. This theory is difficult to reconcile with the substantial body of evidence indicating that emotional stimuli tend to trigger automatic changes in the body (e.g., Dimberg et al., 2000; Huster et al., 2009; Thayer et al., 2009). Overall, these previous studies suggest interdependence between physiological changes and the other components of emotion.

According to Schachter and Singer (1962), physiological changes occur first. The person then looks for an explanation of those physiological changes in the environment. Depending on the explanation found, a cognitive label is assigned to the bodily changes perceived. Therefore, the physiological changes indicate the intensity of the emotional experience, but cognitive factors determine the emotion’s valence (pleasant vs unpleasant).

Theoretical Models of Affect

Theoretical models of affect can be classified into two major groups: discrete and dimensional models. Discrete models are based on a categorical division of affective responses, while dimensional models represent affect as an array of continuous variables. Both types of models are commonly used in Affective Computing to build affect recognition models (e.g., Sitaram et al., 2011; Hernandez et al., 2014; Leslie et al., 2015).

In broad terms, discrete models propose the existence of a few primary states, such as happiness, sadness, and anger. Affective responses are a combination of a subset of those fundamental states. A prominent example of the influence of discrete models in psychology research can be found in an experiment conducted by Ekman and Friesen (1971) in New Guinea. In this experiment, stories with emotional content were told to 153 participants. One hundred thirty of them (84.97%) had no previous contact with Western culture. After each story had been told, participants saw a series of pictures of facial expressions and were asked to choose the face most coherent with the story. Interestingly, participants associated similar facial expressions with the same stories, regardless of their cultural background. Based on this evidence, it was proposed that there are at least six facial expressions that are universal (i.e., they are not affected by culture): happiness, anger, sadness, disgust, surprise, and fear. These results are consistent with earlier contributions from Charles Darwin, who pointed out the existence of activation patterns in facial muscles which are associated with affective states (Darwin, 1872; Ekman, 2006).

Dimensional models have their roots in the early contributions of Wilhelm Wundt, who proposed that affective responses have three dimensions: valence (pleasant–unpleasant), arousal (arousing–subduing), and intensity (strain–relaxation) (Wundt, 1897). On this basis, the Circumplex Model of Affect (Russell, 1980) was developed, representing affect in a two-dimensional space, where valence and arousal are equivalent to the x-axis and y-axis, respectively.

Other authors have proposed the Evaluative Space Model (ESM) (Cacioppo et al., 1997), which has three dimensions: Negativity on the x-axis, Positivity on the y-axis, and Net predisposition (to withdraw from or approach a stimulus) on the z-axis. Unlike the Circumplex Model of Affect (Russell, 1980), the ESM (Cacioppo et al., 1997) contemplates the existence of affective responses with simultaneous negative and positive activation (“bitter-sweet” affective states). For example, while playing a horror video game, the user might feel fear and, at the same time, excitement, because they are aware that there is no real danger. An analysis of dimensional models of affect can be found in Mattek et al. (2017).

The ESM proposes the existence of a negativity bias and a positivity offset. The negativity bias implies that negative activation produces larger changes in the motivation to withdraw from or approach stimuli than positive activation. Evidence supporting the existence of the negativity bias indicates that negative stimuli tend to produce more salient behaviors than positive stimuli (Sutherland and Mather, 2012), and negative stimuli tend to be associated with higher arousal than positive stimuli (Lang et al., 2008). The negativity bias suggests that horror video games should trigger higher arousal than video games associated with positive affective states. However, a recent study indicates that the arousal level triggered by horror video games is slightly lower than the arousal triggered by video games associated with positive affective states (Martínez-Tejada et al., 2021).

The positivity offset implies a slight positive motivation to approach unknown stimuli in a neutral environment. This mechanism has been associated with humans’ natural tendency to explore new, unthreatening environments, even when that behavior is not associated with a reward (Cacioppo et al., 1997, p. 12). Further research on the positivity offset could help to understand how to motivate VR users to explore virtual environments, for example, to stimulate players’ engagement with VR games.

Visual and Sound Cues

Building a virtual environment for affective visualization requires content that any user can associate with a wide range of affective states, regardless of cultural differences or personal preferences. Therefore, this section presents recent studies suggesting an association between features of audio-visual elements and affective states. We aim to provide general guidelines for representing affect with audio-visual elements, rather than defining a set of rigid rules.

Visual Cues

Rounded objects are associated with higher valence and lower arousal than sharp objects (Bar and Neta, 2006), and rounded lines are perceived as more attractive than straight or angular lines (Aronoff et al., 1992; Aronoff, 2006). Given that attractiveness is associated with positive affective states, rounded lines are likely associated with positive valence. Additional studies suggest that visual complexity plays a role in the likability of objects: people tend to prefer extremely simple or extremely complex objects (Norman et al., 2010). Given that likability tends to trigger positive valence (Ryali et al., 2020), an intermediate level of complexity is more likely to be associated with negative valence.

A cross-cultural study showed that the most critical factors in the affective meaning of color are brightness and saturation, while hue has a secondary role (Gao et al., 2007). These results are consistent with evidence reported by Valdez and Mehrabian (1994) but contrast with recent studies indicating that hue has a significant role in the affective state associated with a color palette (Bartram et al., 2017). Additional evidence suggests that blue, green, and purple are among the most pleasant hues, while yellow is among the most unpleasant. Green-yellow, blue-green, and green are the most arousing, while purple-blue and yellow-red are among the least arousing (Palmer and Schloss, 2010). Similarly, it has been found that the most pleasant colors are those with higher saturation and brightness (Camgöz et al., 2002; Wilms and Oberfeld, 2018). However, other studies suggest that there are no universal associations between colors and affective states. People tend to like colors associated with objects they like and dislike colors associated with objects they dislike (Palmer and Schloss, 2010). Additional evidence indicates that color associations change according to the context where colors are used (Lipson-Smith et al., 2020), supporting the hypothesis that there are no universal associations between colors and affective states. Yet, it is possible to establish color palettes that communicate affective states. For example, bright, unsaturated colors are more suitable for communicating calm, while dark, red colors are more suitable for communicating disturbance (Bartram et al., 2017).

Textures may influence the affective meaning of color (Lucassen et al., 2011; Ebe and Umemuro, 2015). This has been demonstrated by pairing colors with computer-generated textures and asking participants to rate the color-texture pairs using four scales: Warm-Cool, Masculine-Feminine, Hard-Soft, and Heavy-Light. Results suggest that texture significantly influences the evaluation on the Hard-Soft scale and has a minor impact on the other scales. However, this evidence does not make it possible to identify associations between particular texture patterns and affective responses.

Non-static visual elements have other visual properties besides color, shape, and texture, such as speed, motion shape, direction, and path curvature. Fast-moving objects are associated with higher arousal than slow-moving objects (Feng et al., 2014; Piwek et al., 2015). However, there are contradictory findings regarding the valence associated with speed: one study suggests that fast movements are related to positive affective states (Piwek et al., 2015), while another indicates the opposite (Feng et al., 2014).

Linear motion with straight paths is associated with low arousal and positive valence (Feng et al., 2014). Jerky paths are associated with higher arousal than straight paths in linear motion (Lockyer et al., 2011; Feng et al., 2014), but path curvature has no effect on affective associations when applied to spherical or radial motion (Feng et al., 2014). Inward movements are related to more positive affective states than outward movements (Feng et al., 2014). Downwards-right motion tends to be linked to positive states, while upwards-left motion tends to be associated with negative states (Lockyer et al., 2011). In general, angular paths are related to more negative affective states than linear paths (Lockyer et al., 2011), and spherical motion patterns tend to be associated with higher arousal than linear motion patterns.
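To make these guidelines concrete, the following Python sketch maps a valence-arousal pair onto a handful of visual parameters, loosely following the cues summarized above. The parameter names and numeric ranges are our own illustrative assumptions, not values reported in the cited studies.

```python
def affect_to_visual_params(valence: float, arousal: float) -> dict:
    """Map affect (valence, arousal in [-1, 1]) to illustrative visual parameters.

    The mapping loosely follows the cues reviewed above: brighter, more
    saturated colors and rounder shapes for positive valence; faster,
    more angular motion for higher arousal. Ranges are arbitrary.
    """
    v = max(-1.0, min(1.0, valence))
    a = max(-1.0, min(1.0, arousal))
    return {
        "brightness": 0.5 + 0.4 * v,             # brighter for positive valence
        "saturation": 0.5 + 0.4 * v,             # more saturated for positive valence
        "roundness": 0.5 + 0.5 * v,              # 1.0 = fully rounded, 0.0 = sharp
        "speed": 0.1 + 0.9 * (a + 1) / 2,        # faster motion for higher arousal
        "path_jerkiness": max(0.0, a),           # jerky paths only for high arousal
    }

if __name__ == "__main__":
    print(affect_to_visual_params(valence=0.8, arousal=-0.3))
```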

Sound Cues

Previous research indicates that the location of a sound source influences the affective states associated with that sound. When the sound source is outside the user’s field of view (i.e., the user cannot see it), it is often associated with more arousing affective states than when it is inside the field of view (Drossos et al., 2015; Tajadura-Jiménez, Larsson, et al., 2010a). Similarly, sounds located further away in space are related to less arousing responses (Tajadura-Jiménez et al., 2008). The perception of an approaching sound is associated with more arousing responses than the perception of a sound moving away (Tajadura-Jiménez, Väljamäe, et al., 2010b). These phenomena are likely linked to mechanisms enforced by evolution (Cosmides and Tooby, 1994): our primitive ancestors had better chances of survival if they were aware of the most potentially dangerous objects, such as those they could not see, those closer to them, or those approaching them.

The reverberation of the sound, which is associated with the size of the space, can also influence affective associations (Tajadura-Jiménez, Larsson, et al., 2010a). Lower reverberation (smaller rooms) is linked to more pleasant states than higher reverberation (larger rooms). This may be because early humans were better protected from predators in enclosed spaces, leading to an evolutionary process that favors the activation of attentional resources in open areas.

Other studies indicate that asking people to rate pictures with affective content while listening to the sound of a heartbeat can influence their affective evaluations, as well as their heart rate (Tajadura-Jiménez et al., 2008). The sound of a heartbeat faster than the listener’s own tends to increase their heart rate, while a slower heartbeat sound tends to lower it. Therefore, playing a fast heartbeat in the background might be an effective way of representing an increase in arousal.

On the other hand, music is pivotal for affective visualization because it can contribute to creating more immersive experiences. It is a vast topic that will not be fully covered in this manuscript. Still, it is important to mention that tempo influences the affective perception of music (Fernández-Sotos et al., 2016): faster tempo tends to be associated with higher arousal ratings, while slower tempo tends to be associated with lower arousal ratings. To the best of our knowledge, there is no evidence suggesting that tempo influences valence ratings.

Major and minor chords are associated with positive and negative affective states, respectively (Gerardi and Gerken, 1995). Similarly, dissonant harmonies tend to be strongly associated with anger and, to a lesser extent, with fear (Petri, 2009). It is also possible to compose music based on people’s affective states (Williams et al., 2017). However, it remains an open question whether it is feasible to do so in real-time, based on the user’s electrophysiological signals.
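As a rough illustration of how these musical cues could be parameterized, the sketch below maps a valence-arousal pair to a chord quality and a tempo, following the associations above; the tempo range and the mapping itself are our own assumptions, not values taken from the cited studies.

```python
def affect_to_music_params(valence: float, arousal: float) -> dict:
    """Map affect to coarse musical parameters following the cues reviewed above."""
    # Major triad (0, 4, 7 semitones) for positive valence, minor triad for negative valence.
    triad = [0, 4, 7] if valence >= 0 else [0, 3, 7]
    # Faster tempo for higher arousal; the 70-130 BPM range is an arbitrary choice.
    tempo_bpm = 70 + 60 * (max(-1.0, min(1.0, arousal)) + 1) / 2
    return {"triad_semitones": triad, "tempo_bpm": tempo_bpm}

if __name__ == "__main__":
    print(affect_to_music_params(valence=-0.5, arousal=0.7))
```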

Personalized Affective Visualizations

There might be individual differences in the affective states that each user associates with the same audio-visual stimuli. These individual differences could be amplified as a consequence of personal experiences. An ideal system for affective visualization should account for those individual differences, delivering personalized visual representations of affective states, similar to the approach of Bermudez i Badia et al. (2019).

Semertzidis et al. (2020) developed an Augmented Reality (AR) system that automatically creates visual representations of the user’s affective states. The visualizations consisted of fractals generated using Procedural Content Generation (PCG). The visual properties of the fractals varied according to the affective state detected in the user. However, the evidence reported by Semertzidis et al. (2020) does not establish whether participants perceived that the fractals’ graphical properties represented their affective states.

Additional studies indicate that it is possible to use PCG to create content dynamically, adjusting it to the preferences of the user. This approach is known as experience-driven procedural content generation (EDPCG) (Yannakakis and Togelius, 2011; Raffe et al., 2015). In broad terms, EDPCG consists of an iterative process where the content is constantly modified based on the user’s feedback.

The general functioning of EDPCG is the same as that of an evolutionary algorithm (EA), an optimization process inspired by natural evolution. In a natural environment, the organisms that are better adapted to their habitat tend to have more reproductive success and are hence more likely to pass their genes to the next generation. Similarly, objects can be created programmatically in a virtual environment and tested to identify the most successful ones. The criterion for identifying which objects are more successful is based on a previously defined goal, set by the developer according to the purpose of the application. During each iteration, the objects that are more successful at reaching the goal are identified. In the following iterations, new sets of objects are created; the characteristics of the most successful objects tend to remain, whereas the characteristics of the least successful tend to disappear. It is assumed that repeating this process several times allows the parameters that best achieve the goal to be reached. For example, if the goal is to create personalized visual representations of positive affective states, and the EA detects that the user tends to associate red, rounded objects with positive affective states, the game would produce objects that tend to be more red and more rounded. An introduction to EAs can be found in Eiben and Smith (2015).
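As an illustration of this loop, the sketch below implements a minimal evolutionary algorithm over a two-gene object description (redness and roundness). The fitness function, population size, and mutation scheme are illustrative assumptions; in an EDPCG setting, the fitness would be derived from user feedback rather than a fixed function.

```python
import random

def evolve(fitness, population_size=20, genes=2, generations=50, mutation_sd=0.05):
    """Minimal evolutionary loop: keep the fittest half, refill by mutating the survivors.

    `fitness` is a callable that scores a candidate (a list of gene values in
    [0, 1]); in an EDPCG setting it would be derived from user feedback.
    """
    population = [[random.random() for _ in range(genes)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Rank candidates by fitness (higher is better) and keep the top half.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Refill the population with mutated copies of the survivors.
        children = []
        for parent in survivors:
            child = [min(1.0, max(0.0, g + random.gauss(0, mutation_sd)))
                     for g in parent]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    # Toy fitness: pretend the user prefers red (gene 0 high), rounded (gene 1 high) objects.
    best = evolve(lambda genome: genome[0] + genome[1])
    print("best candidate (redness, roundness):", best)
```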

Additional research indicates that it is possible to automatically create visual compositions in VR using Deep Convolutional Neural Networks (DCNN) (Kitson et al., 2019). Overall, the process consists of merging features from two images to create a third image. This approach could be combined with EDPCG (Yannakakis and Togelius, 2011; Raffe et al., 2015) to create personalized affective visualizations. The process would involve at least three steps: 1) create a set of VR content that all users will observe and use that content as a baseline (this initial set of content could be developed following the guidelines described in Table 1); 2) capture user feedback about the visual stimuli to establish the affective state that each user associates with each piece of VR content; and 3) use DCNNs to merge features of the initial content into new, personalized VR content.

TABLE 1

TABLE 1. Summary of audio-visual cues associated with affective states, according to previous studies.

Assessment of Affective States

Users’ feedback should be captured using methods that do not interrupt the VR experience, such as body movements (see Section Behavioral Measures) or electrophysiological signals, similar to Georgiou and Demiris (2017). Methods for assessing affective states can be grouped into three categories: self-report questionnaires, behavioral measures, and electrophysiological signals. Each method has advantages and disadvantages that will be discussed below.

Self-Reports

Self-reports allow participants to evaluate their affective state by answering a series of questions. They can be used to verify the accuracy of information acquired through other methods, such as behavioral and electrophysiological measures. Data collected through self-reports are often used as ground truth in the field of HCI.

In general, self-report measures are relatively easy to implement because they only require displaying a series of questions on a paper sheet or a screen. Unlike behavioral and electrophysiological methods, self-reports are considered a direct measure because they allow asking participants directly about their mental states (Perkis et al., 2020). However, they are susceptible to bias from rational processes. For example, participants who believe that they are expected to respond in a certain way might adjust their responses to fulfill that expectation, causing a phenomenon known as experimenter bias (Fisher, 1993). Some available tools for the assessment of affective responses are the Positive and Negative Affect Schedule (PANAS) (Watson et al., 1988), the Self-Assessment Manikin (SAM) (Bradley and Lang, 1994), and Pick a Mood (PAM) (Desmet et al., 2016). The PANAS consists of 20 words related to negative and positive feelings (ten negative and ten positive). Participants use those words to report their affective state, rating each word from one to five.
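For illustration, PANAS responses are typically scored by summing the ten positive and the ten negative item ratings separately, yielding a positive affect (PA) and a negative affect (NA) score between 10 and 50. The sketch below assumes the item labels from Watson et al. (1988); they should be checked against the original questionnaire before use.

```python
# Item lists as given in Watson et al. (1988); verify against the original scale.
POSITIVE_ITEMS = ["interested", "excited", "strong", "enthusiastic", "proud",
                  "alert", "inspired", "determined", "attentive", "active"]
NEGATIVE_ITEMS = ["distressed", "upset", "guilty", "scared", "hostile",
                  "irritable", "ashamed", "nervous", "jittery", "afraid"]

def score_panas(ratings: dict) -> tuple:
    """Return (positive affect, negative affect), each the sum of ten 1-5 ratings."""
    pa = sum(ratings[item] for item in POSITIVE_ITEMS)
    na = sum(ratings[item] for item in NEGATIVE_ITEMS)
    return pa, na

if __name__ == "__main__":
    # Example: a participant rating every item as 3 ("moderately").
    example = {item: 3 for item in POSITIVE_ITEMS + NEGATIVE_ITEMS}
    print(score_panas(example))  # -> (30, 30)
```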

FIGURE 1

FIGURE 1. Female character of Pick A Mood (PAM), taken from Desmet et al. (2016). Eight discrete states are represented, plus a neutral one.

The SAM (Bradley and Lang, 1994) is an instrument that uses three scales: valence (pleasant/unpleasant), arousal (tension/relaxation), and dominance (inhibition/uninhibition). Each scale has five pictograms. Participants can select the blank spaces between the pictograms to indicate intermediate states. Therefore, answers to each scale can take values between one and nine (see Figure 2). Given that this instrument is based on dimensions, it is compatible with dimensional models of affect. The SAM (Bradley and Lang, 1994) is one of the most established instruments for assessing affect (over 7,000 citations) and has been used for the development of batteries of stimuli with emotional content, such as the International Affective Pictures System (IAPS) (Lang et al., 2008) and the DEAP dataset (Koelstra et al., 2012).

FIGURE 2

FIGURE 2. From top to bottom: valence, arousal, and dominance scales of the Self-Assessment Manikin (SAM). Taken from (Bradley and Lang, 1994).

On the other hand, the PAM (Desmet et al., 2016) is based on discrete states and is therefore compatible with discrete models of affect. This instrument also uses pictorial cues to assess participants’ states. There are eight mood types plus a neutral one: excited, cheerful, relaxed, calm, bored, sad, irritated, and tense. There are three characters for each of these states: a man, a woman, and a robot (a gender-neutral character). In comparison to the SAM (Bradley and Lang, 1994), PAM’s characters more closely resemble a real human being (see Figure 1), which might be an advantage because it could make it easier for participants to identify with the characters.

The PAM has been used to understand how to design objects and experiences that could stimulate mood regulation (Desmet, 2015), to analyze the effect of immersive virtual environments on gaming Quality of Experience (QoE) (Hupont et al., 2015), and to analyze whether the effect of color on affective states varies across different VR rooms (Lipson-Smith et al., 2020). Using paper questionnaires to analyze experiences in virtual environments might require interrupting the VR experience. This limitation can potentially be counterbalanced by using subjective rating scales inside the virtual environment (Voigt-Antons et al., 2020).

Behavioral Measures

Behavioral measures allow inferring affective states from observable behavior, such as body movements (Bull, 1978; Robitaille and McGuffin, 2019), voice patterns (Scherer and Oshinsky, 1977; Cordaro et al., 2016), and facial expressions (Ekman and Friesen, 1971). During an experiment conducted by Bull (1978), participants listened to a series of audio recordings with emotional content while their body movements were videotaped. Results suggested that sadness is associated with dropping the head, while boredom is related to leaning the face on one hand. Building on that, recent research indicates that it is possible to infer arousal from body movements in virtual reality users (Kapur et al., 2005; Robitaille and McGuffin, 2019). In general, faster body movements are associated with higher arousal.

It is possible to automatically analyze users’ affective states based on their voice patterns (Vogt et al., 2008). Usually, a set of features is defined and used to build a classification model. Some of the features used for automatic speech emotion recognition are pitch, loudness, and tempo (Vogt et al., 2008; Polzehl et al., 2011). This approach is coherent with evidence suggesting that changes in vocalization patterns affect the affective evaluation of speech (e.g., Scherer and Oshinsky, 1977; Banse and Scherer, 1996).
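As a hedged illustration of such a feature set, the sketch below extracts coarse pitch, loudness, and tempo descriptors from a speech recording, assuming the librosa library is available; it is not the pipeline used in the cited studies, and the resulting features would still need a trained classifier to yield affective labels.

```python
import numpy as np
import librosa

def speech_affect_features(wav_path: str) -> dict:
    """Extract coarse prosodic features (pitch, loudness, tempo) from a speech file."""
    y, sr = librosa.load(wav_path, sr=None)

    # Fundamental frequency (pitch) via probabilistic YIN; unvoiced frames are NaN.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)

    # Loudness proxy: frame-wise root-mean-square energy.
    rms = librosa.feature.rms(y=y)[0]

    # Speech-rate proxy: onset-based tempo estimate in beats per minute.
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

    return {
        "pitch_mean_hz": float(np.nanmean(f0)),
        "pitch_sd_hz": float(np.nanstd(f0)),
        "loudness_mean": float(rms.mean()),
        "tempo_bpm": float(np.atleast_1d(tempo)[0]),
    }
```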

Eye-tracking has been an essential measure of various individual states and even personality traits (Hoppe et al., 2018). Greinacher and Voigt-Antons (2020) recently demonstrated how this measure can be obtained from modern smartphones using built-in system libraries. The accuracy of this approach is comparable to other webcam or selfie-cam-based systems. Having eye-tracking easily accessible in millions of devices opens up opportunities for remote or in-the-field studies with much higher ecological validity than studies relying on the heavy equipment traditionally used in laboratory investigations.

As mentioned in Section Theoretical Models of Affect, facial expressions are associated with affective states (Ekman and Friesen, 1971). These expressions can be analyzed visually and described in terms of the Facial Action Coding System (FACS) (Ekman and Friesen, 1976). The FACS is an instrument that describes all the possible movements of the facial muscles. Each movement is defined as an Action Unit (AU). Facial expressions can be described as a combination of a subset of all the Action Units defined in the FACS (Ekman and Friesen, 1976). In a study conducted by Porcu et al. (2020), AUs were used for real-time analysis of the facial expressions of video streaming users. Additional studies suggest that human facial expressions can be collected using crowdsourcing techniques (McDuff et al., 2012), and their analysis can be optimized using statistical models that adapt automatically to the characteristics of the data (Feffer et al., 2018). However, facial recognition with camera sensors might be challenging to implement in VR because the Head-Mounted Display (HMD) covers the user’s face. Therefore, facial electromyography (fEMG), a technique introduced in the following section, might be more suitable for capturing VR users’ facial expressions (Mavridou et al., 2017).

Electrophysiology

Electrophysiological methods allow measuring changes in the electrical potentials of the body. Usually, facial electromyography (fEMG), electrocardiography (ECG), and electroencephalography (EEG) are used to record facial muscle, heart, and brain activity, respectively. This section focuses on methods to infer emotions in terms of the Circumplex Model of Affect (Russell, 1980) (see Section Theoretical Models of Affect). Therefore, the focus is on techniques that can be used to infer valence and arousal. There are many approaches for affect detection using electrophysiological signals that are not based on the Circumplex Model of Affect (Russell, 1980) and are not included in this manuscript.

Arousal can be inferred from features extracted from ECG signals. The beat-to-beat intervals of the ECG signal (often referred to as RR-intervals, RRI) are extracted by detecting the signal’s peaks and calculating the time elapsed between consecutive peaks. These RRIs are used to analyze heart rate variability (HRV). Prominent examples of time-domain features used to analyze HRV are the root mean square of successive differences (RMSSD) and the standard deviation of NN intervals (SDNN). It has been found that higher HRV is associated with higher emotional arousal (Thayer et al., 2009). It is also possible to extract features from the ECG signal in the frequency domain by calculating the LF/HF ratio. The high-frequency component (HF) (0.15–0.4 Hz) is associated with parasympathetic activity, while the low-frequency component (LF) (0.04–0.15 Hz) is associated predominantly with sympathetic activity (Malik et al., 1996). Activation of the parasympathetic system is associated with relaxation, and activation of the sympathetic system is associated with arousal. Therefore, an increased LF/HF ratio indicates higher arousal (Pagani et al., 1984). Further research has shown that it is possible to infer arousal from EEG signals in VR users employing long short-term memory (LSTM) recurrent neural networks (RNN) (Hofmann et al., 2018).
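The following sketch illustrates how these HRV features could be computed from a series of RR intervals using NumPy and SciPy; the resampling rate, interpolation settings, and band limits follow common practice but are assumptions rather than a validated pipeline.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_features(rri_ms, resample_hz=4.0):
    """Compute common HRV features from RR intervals given in milliseconds."""
    rri_ms = np.asarray(rri_ms, dtype=float)

    # Time-domain features.
    sdnn = np.std(rri_ms, ddof=1)                    # SDNN: overall variability
    rmssd = np.sqrt(np.mean(np.diff(rri_ms) ** 2))   # RMSSD: beat-to-beat variability

    # Frequency-domain features: resample the irregularly spaced RRI series
    # onto an even time grid, then estimate its power spectral density.
    beat_times = np.cumsum(rri_ms) / 1000.0          # beat times in seconds
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / resample_hz)
    rri_even = interp1d(beat_times, rri_ms, kind="cubic")(grid)
    freqs, psd = welch(rri_even - rri_even.mean(), fs=resample_hz,
                       nperseg=min(256, len(rri_even)))

    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return trapezoid(psd[mask], freqs[mask])

    lf, hf = band_power(0.04, 0.15), band_power(0.15, 0.40)
    return {"sdnn_ms": sdnn, "rmssd_ms": rmssd, "lf_hf_ratio": lf / hf}
```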

A recent study compared the benefits of implementing HRV biofeedback in virtual reality with a traditional HRV biofeedback therapy (Blum J et al., 2019), suggesting that the VR implementation produces more benefits for users in terms of relaxation self-efficacy, reduced mind wandering, and control of attentional resources. A similar approach was proposed by Blum J et al. (2020), introducing a breathing biofeedback algorithm. This algorithm combines features extracted from electrocardiography activity with data inferred from diaphragm movements. The experiment was conducted using a chest band (Polar H10), which is a reliable, relatively inexpensive sensor. Results suggest that this approach can help foster more regular and slower breathing in VR users.

Valence can be inferred from EMG and EEG signals. Previous evidence suggests that activity of the Corrugator Supercilii muscle (located above the eyebrows) is associated with negative affective states, whereas activity of the Zygomaticus Major muscle (located in the cheeks) is related to positive affective states (Dimberg, 1982). Changes in facial muscle activity can occur without the participant’s conscious awareness (Dimberg et al., 2000; Dimberg and Thunberg, 2012). However, it might be challenging to implement fEMG in a VR system because the pressure of the Head-Mounted Display (HMD) on the electrodes can create artifacts in the recorded signal.

Asymmetry in the cortical activity of the frontal cortex is also associated with valence. It has been found that positive and negative emotions are processed in the left and right frontal cortex, respectively (Ray and Cole, 1985; Huster et al., 2009; Antons et al., 2014). Additionally, it has been found that cortical activity decreases as the alpha power (frequencies between 8 and 13 Hz) increases (Pfurtscheller and Lopes da Silva, 1999). Therefore, increased processing of positive stimuli is associated with decreased alpha power in the left frontal cortex (higher activity in the left side of the brain). Similarly, increased processing of negative stimuli is associated with decreased alpha power in the right frontal cortex (higher activity in the right side of the brain) (Davidson, 1992; Pfurtscheller and Lopes da Silva, 1999; Huster et al., 2009).
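As an illustration of how this asymmetry could be quantified, a frontal alpha asymmetry index is often computed as the difference of log alpha power between a right and a left frontal electrode. The sketch below shows one way to do this with SciPy, assuming artifact-free signals from electrodes F3 and F4 are already available; electrode choice and window length are assumptions, not a prescription from the cited studies.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

def frontal_alpha_asymmetry(f3, f4, fs, band=(8.0, 13.0)):
    """Frontal alpha asymmetry: ln(alpha power at F4) - ln(alpha power at F3).

    Because alpha power is inversely related to cortical activity, higher
    values indicate relatively greater left-frontal activity, which the
    studies above associate with more positive valence.
    """
    def alpha_power(signal):
        freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))  # 2-second windows
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return trapezoid(psd[mask], freqs[mask])

    return float(np.log(alpha_power(f4)) - np.log(alpha_power(f3)))
```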

These findings are consistent with results obtained by Reuderink et al. (2013) in a study where the brain activity of video game players was recorded using EEG. Participants were asked to report their affective state using the SAM (Bradley and Lang, 1994) after the game session ended. Results indicated a positive correlation between self-reported valence and alpha asymmetry. Likewise, Koelstra et al. (2012) analyzed the brain activity of 32 participants who watched forty music videos and rated their emotional reactions to each video using the SAM (Bradley and Lang, 1994). A positive correlation was found between self-reported valence and alpha power in the right occipital region of the brain.

Eye-movements and eye-blinks cause artifacts in EEG signals, which are usually reflected in the activity of the frontal region of the brain. In non-stationary VR applications, it is particularly challenging to remove artifacts caused by muscle activity, head movements, or electrical activity from the VR headset (Klug and Gramann, 2020). It is possible to remove these artifacts using Independent Component Analysis (ICA), a technique that identifies the components of an EEG signal that are not produced by brain activity (Makeig et al., 1997). The maximum number of independent components (ICs) that can be identified using ICA depends on the number of electrodes used; for example, a recording with 32 electrodes allows identifying up to 32 ICs. Therefore, increasing the number of electrodes might help identify the artifacts in the signal with more precision. For a complete analysis of using ICA in non-stationary and stationary settings, see Klug and Gramann (2020).
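A minimal sketch of this workflow is shown below, assuming the MNE-Python library and a recording loaded from a placeholder file; component selection is done manually here by visual inspection, whereas in practice it can be supported by automated component classifiers.

```python
import mne
from mne.preprocessing import ICA

# Load a continuous EEG recording (file name and format are placeholders).
raw = mne.io.read_raw_fif("vr_session_raw.fif", preload=True)
raw.filter(l_freq=1.0, h_freq=None)   # high-pass filtering improves the ICA decomposition

# Fit ICA; the number of components cannot exceed the number of channels.
ica = ICA(n_components=20, random_state=42)
ica.fit(raw)

# Inspect the components (scalp maps, time courses) and mark artifactual ones.
ica.plot_components()
ica.exclude = [0, 3]                  # e.g., eye-blink and muscle components chosen by inspection

# Reconstruct the signal without the excluded components.
raw_clean = ica.apply(raw.copy())
```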

An additional challenge is to process the EEG signals in real-time. ICA can be used in real-time (Pion-Tonachini et al., 2015), but it was not designed for that purpose. An alternative is Artifact Subspace Reconstruction (ASR) (Mullen et al., 2015; Blum S et al., 2019), a technique designed for online artifact removal. ASR uses data recorded from the user as a baseline. Then, principal component analysis (PCA) is applied to identify the EEG channels that contain artifacts. The data of the corrupted channels are reconstructed using the baseline data as a reference. There is software available that can facilitate the implementation of ASR, such as BCILAB (Kothe and Makeig, 2013), OpenViBE (Renard et al., 2010), and NeuroPype (Intheon Labs, California).

Brain-Computer Interfaces

The integration of electrophysiological signals into VR systems enables the development of interfaces that interpret users’ brain activity as computer commands (Wolpaw et al., 2002). One of the basic assumptions underlying the development of Brain-Computer Interfaces (BCIs) is that mental processes originate in the brain. However, there are BCIs that measure electrophysiological responses in other parts of the body (e.g., Cassani et al., 2018), such as the heart and face, because processes that originate in the brain can produce changes in the activity of other body parts.

There are different techniques for measuring brain activity that can be used for the development of BCIs, for example, electrocorticography (ECoG), Positron Emission Tomography (PET), and functional Magnetic Resonance Imaging (fMRI), among others. However, electroencephalography (EEG) is the method most frequently used in BCIs because 1) it provides high temporal resolution (i.e., a relatively large number of data points recorded per second); 2) it does not create health risks for the user because the electrodes can be easily placed on and removed from the scalp; 3) it can be portable, which is important for applications where the user is moving; and 4) it is less expensive than most of the other methods (Zander and Kothe, 2011).

According to Zander and Kothe (2011), there are three types of BCIs: active, passive, and reactive. Active BCIs require the active participation of the user to generate an action. For example, patients who lack motor control can use mental commands to move a wheelchair (Voznenko et al., 2018). Passive BCIs do not require the conscious involvement of the user. They can be used, for example, to automatically analyze the cognitive load of car drivers (Almahasneh et al., 2014). Reactive BCIs use mental activity that occurs as a response to external stimuli. An example is a neurofeedback video game where threatening stimuli are presented, and players have to control their anxiety to obtain a game score (Schoneveld et al., 2016). A VR application for affective visualization would usually involve either a passive or a reactive BCI.

The typical workflow in a BCI involves at least four steps (Zander and Kothe, 2011; Antons et al., 2014); a minimal code sketch of these steps is given after the list:

1) Preprocessing pipeline: Filter out the signal’s noise and keep only the components that reflect brain activity. This process involves (but is not limited to) filtering frequency bands and removing artifacts caused by eye-movements or muscle activity. An introduction to signal processing can be found in Unpingco (2014).

2) Feature extraction: Isolate the information related to the psychological construct of interest based on previous neuroscience studies (see Section Electrophysiology).

3) Classifier definition: A classification model is created using prerecorded data. The classifier is tested offline, and an estimate of the accuracy of the classification is calculated. In general, classifiers are trained using data that has been previously labeled by humans. Machine Learning algorithms are used to identify patterns in the data that tend to be associated with each label.

4) Classification application: The classifier is implemented in the BCI to perform online analysis of brain activity. The classifier’s outputs are used as computer commands.
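The sketch below illustrates these four steps in Python, using a band-pass filter, alpha band-power features, and a linear discriminant classifier from scikit-learn; the feature choice, window format, and sampling rate are illustrative assumptions rather than a recommended design.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import butter, filtfilt, welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256  # sampling rate in Hz (assumed)

def preprocess(window: np.ndarray) -> np.ndarray:
    """Step 1: band-pass filter each channel between 1 and 40 Hz."""
    b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=FS)
    return filtfilt(b, a, window, axis=-1)

def extract_features(window: np.ndarray) -> np.ndarray:
    """Step 2: log alpha (8-13 Hz) band power per channel as a simple feature vector."""
    freqs, psd = welch(window, fs=FS, nperseg=FS, axis=-1)
    mask = (freqs >= 8) & (freqs <= 13)
    return np.log(trapezoid(psd[..., mask], freqs[mask], axis=-1))

def train_classifier(X_windows: np.ndarray, y: np.ndarray) -> LinearDiscriminantAnalysis:
    """Step 3: train a classifier offline on labeled windows.

    X_windows has shape (n_windows, n_channels, n_samples); y holds labels
    derived, for example, from self-reports collected during recording.
    """
    features = np.array([extract_features(preprocess(w)) for w in X_windows])
    clf = LinearDiscriminantAnalysis()
    clf.fit(features, y)
    return clf

def classify_online(clf, window: np.ndarray):
    """Step 4: apply the trained classifier to a new window during the session."""
    return clf.predict([extract_features(preprocess(window))])[0]
```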

Practical Considerations

This section contains five practical considerations that might help during the development of a VR system for affective visualization.

1) What are the initial steps for designing a virtual environment? First, define who will use the virtual environment (the target group) and what the user will do inside that environment. This will help to form a clearer idea of the interaction events that will occur during the experience. Look for other interactive experiences, such as games and art installations, that can serve as inspiration; this will trigger ideas and help to understand how to implement them. Then, define the graphical layout of the environment (color palette, typographies, and textures).

2) Which software should I use for VR development? Unity is probably one of the best options. There are alternatives, such as Vizard, a virtual reality software package for research. However, to the best of our knowledge, Unity is the only game engine compatible with open-source solutions such as LSL and the Excite-O-Meter. Therefore, it is relatively easy and inexpensive to develop virtual environments that rely on the user’s physiological data using Unity.

3) How can Unity be integrated with electrophysiological equipment? One possibility is to use LabStreamingLayer (LSL), a tool for collecting time-series data in experimental settings. Essentially, LSL allows collecting and synchronizing the data and streaming it into Unity. At the same time, it allows sending data from Unity (e.g., markers) to the signal processing software (a minimal Python sketch of the signal-processing side is given after this list). Another option is to set up a UDP broadcast to send information through the network.

4) Is there a ready-to-use solution for integrating Unity with electrophysiological equipment? Yes. The Excite-O-Meter (Gaebler et al., 2021) is a Unity plugin for visualizing cardiovascular activity, built on top of LSL. It can be used to visualize Heart Rate Variability (HRV) (see Section Electrophysiology). By default, the Excite-O-Meter provides a time-series graph of the data, but it can be customized to build other types of visualizations.

5) How should the sampling rate for recording electrophysiological signals be defined? According to the Nyquist-Shannon sampling theorem, the sampling rate should be at least twice the maximum frequency of interest. For example, if you are interested in frequencies of up to 128 Hz, you should use a sampling rate of at least 256 Hz, i.e., 256 data points collected per second.

The usable information for each type of signal is located in a different frequency range, so the maximum frequency of interest differs between signals. For example, the usable information in an ECG signal extends up to 100 Hz, so the sampling frequency for ECG signals should be at least 200 Hz. However, previous studies indicate that ECG recordings at 200 Hz contain noise in the high-frequency components (Malik et al., 1996). This noise can be reduced by recording at a higher sampling rate. Therefore, it is considered good practice to record ECG signals at a sampling rate between 256 and 512 Hz, EMG signals between 512 and 1024 Hz, and EEG signals between 256 and 512 Hz.
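As referenced in point 3 above, the sketch below shows the signal-processing side of an LSL setup in Python, assuming the pylsl package: it resolves an EEG stream published on the network, pulls samples, and publishes inferred affect labels on a marker stream that a Unity-side LSL integration could subscribe to. Stream names, types, and the marker label are placeholders.

```python
from pylsl import StreamInfo, StreamInlet, StreamOutlet, resolve_stream

# Receive an EEG stream published elsewhere on the network (e.g., by the amplifier software).
streams = resolve_stream("type", "EEG")          # blocks until a matching stream is found
inlet = StreamInlet(streams[0])

# Publish a marker stream that other LSL clients (e.g., Unity) can subscribe to.
marker_info = StreamInfo(name="AffectMarkers", type="Markers", channel_count=1,
                         nominal_srate=0, channel_format="string",
                         source_id="affect_vis_01")
outlet = StreamOutlet(marker_info)

while True:
    sample, timestamp = inlet.pull_sample()       # one multi-channel EEG sample
    # ... buffer samples and run the affect-detection pipeline here ...
    outlet.push_sample(["high_arousal"])          # send the inferred state as a marker
```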

Discussion

This manuscript aims to understand how to develop VR systems for affective visualization. Such systems involve the development of at least two components: a virtual environment and an affect detection technique. The development of both components requires an understanding of theories related to emotion and affect. Therefore, the manuscript analyzes previous research related to 1) theories of emotion and affect, 2) audio-visual cues associated with affective states, and 3) methods for the assessment of affective states.

Studies discussed in Section Visual and Sound Cues suggest that specific visual and sound cues can represent users’ emotions. However, most of these studies were conducted in experimental settings where the stimuli were carefully controlled. It is unclear whether the same psychological responses would occur if a combination of these cues were used simultaneously. For example, a particular combination of colors associated with positive states may result in an unbalanced visual composition that produces negative affective states. Or there might be motion patterns that are more prone to produce motion sickness in VR users, triggering negative states. Moreover, the novelty of a VR system in new users might bias the emotions they associate with the audio-visual stimuli.

Other studies mentioned in Section Visual and Sound Cues suggest that leftwards linear motion tends to be associated with negative valence (Lockyer et al., 2011; Feng et al., 2014). This finding was obtained in experiments conducted in Western societies, where time is represented as a progression to the right (Fuhrman and Boroditsky, 2010). Therefore, it is likely that Western users associate leftward motion with negative affective states because that type of motion is culturally associated with regressing in time. However, in other cultures, such as Hebrew culture, people represent time as a progression to the left (Fuhrman and Boroditsky, 2010). Therefore, it is possible that Hebrew users would associate leftward linear motion with positive valence.

Recent studies have demonstrated that affective states can be elicited by triggering psychogenic shivering (PS) (Haar et al., 2020; Schoeller et al., 2019a), using a device that controls the temperature of the participants’ upper back. Additional research indicates that the ability to empathize with others’ emotions can be influenced by delivering electrical stimulation to the vagus nerve (Colzato et al., 2017), and by inducing affective states in the observer through videos (Pinilla et al., 2020). It remains an open question how to use those findings to develop Mixed Reality (MR) technologies for empathy enhancement, as proposed by Schoeller et al. (2019b).

Most of the existing techniques for inferring affective states from electrophysiological signals are based on a small number of discrete states (e.g., Harischandra and Perera, 2012; Mavridou et al., 2017). However, the number of distinct affective states that can be detected using this approach is limited. Therefore, it might be convenient to formulate affect detection problems in terms of statistical regression. This approach would allow creating models that describe affective states along a continuum rather than as a fixed set of categories. Previous studies suggest that it is possible to infer arousal from EEG signals as a continuous variable (Hofmann et al., 2018). Future studies could investigate whether a similar approach can be used to express valence as a continuous variable.
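As a sketch of this regression formulation, the code below trains a ridge regression model to predict continuous arousal ratings from electrophysiological feature vectors using scikit-learn. The data here are random placeholders, so the example only illustrates the workflow, not an expected result.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row of electrophysiological features per trial (e.g., EEG band powers);
# y: continuous self-reported arousal for the same trials (e.g., SAM ratings 1-9).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))       # placeholder features
y = rng.uniform(1, 9, size=200)      # placeholder arousal ratings

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```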

Finally, it is possible to use a programmatic approach to create virtual reality content in real-time using procedural content generation (PCG) (Yannakakis and Togelius, 2011; Raffe et al., 2015; Bermudez i Badia et al., 2019; Semertzidis et al., 2020). PCG allows content to be created dynamically and adjusted to user feedback. Electrophysiological signals could be used to capture that feedback without interrupting the VR experience. This approach would allow the creation of personalized virtual environments for emotion visualization, similar to Kitson et al. (2019) or Bermudez i Badia et al. (2019).

Author Contributions

All authors contributed to the study conception and analysis. JG and WR contributed to the analysis of data related to gaming and Virtual Reality. J-NV-A contributed to the analysis of data related to psychology and electrophysiology. RS contributed to the writing of the manuscript and to data related to electrocardiography and eye-tracking. SM contributed to the analysis of data related to sound design and Machine Learning. AP performed the literature search and data analysis and wrote the first draft. All authors commented on previous versions of the manuscript.

Funding

This work was supported by the strategic partnership between the Technische Universität Berlin, Germany and the University of Technology Sydney, Australia.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

We acknowledge the support of the German Research Foundation and the Open Access Publication Fund of TU Berlin. We appreciate the generosity of Nick Busietta in giving us access to the Psydocs of LiminalVR (liminalvr.com). That documentation was crucial for writing Section Visual and Sound Cues of this manuscript.

Abbreviations

HCI, Human Computer Interaction; VR, Virtual Reality; fMRI, functional Magnetic Resonance Imaging; EEG, electroencephalography; OCC, Ortony, Clore and Collins theory of emotions; ESM, Evaluative Space Model; PANAS, Positive and Negative Affect Schedule; SAM, Self-Assessment Manikin; PAM, Pick a Mood; IAPS, International Affective Pictures System; FACS, Facial Action Coding System; AU, Action Unit; HMD, Head-Mounted Display; EMG, electromyography; ECG, electrocardiography; RRI, RR-Intervals; HRV, heart rate variability; RMSSD, root mean square of successive differences; SDNN, standard deviation of NN intervals; LF/HF ratio, Low Frequency/High Frequency ratio; LSTM RNN, long short-term memory recurrent neural network; ICA, Independent Component Analysis; ICs, independent components; ASR, Artifact Subspace Reconstruction; PCA, principal component analysis; BCIs, Brain-Computer Interfaces; ECoG, electrocorticography; PET, Positron Emission Tomography.

References

Antons, J.-N., Arndt, S., Schleicher, R., and Möller, S. (2014). “Brain Activity Correlates of Quality of Experience,” in Quality of Experience. Editors S. Möller, and A. Raake (Springer International Publishing), 109–119. doi:10.1007/978-3-319-02681-7_8

Arndt, S., Perkis, A., and Voigt-Antons, J.-N. (2018). Using Virtual Reality and Head-Mounted Displays to Increase Performance in Rowing Workouts. Proc. 1st Int. Workshop Multimedia Content Anal. Sports - MMSports’18, 45–50. doi:10.1145/3265845.3265848

Aronoff, J. (2006). How We Recognize Angry and Happy Emotion in People, Places, and Things. Cross-Cultural Res. 40 (1), 83–105. doi:10.1177/1069397105282597

Aronoff, J., Woike, B. A., and Hyman, L. M. (1992). Which Are the Stimuli in Facial Displays of Anger and Happiness? Configurational Bases of Emotion Recognition. J. Personal. Soc. Psychol. 62 (6), 1050–1066. doi:10.1037/0022-3514.62.6.1050

Banse, R., and Scherer, K. R. (1996). Acoustic Profiles in Vocal Emotion Expression. J. Personal. Soc. Psychol. 70 (3), 614–636. doi:10.1037/0022-3514.70.3.614

Bar, M., and Neta, M. (2006). Humans Prefer Curved Visual Objects. Psychol. Sci. 17 (8), 645–648. doi:10.1111/j.1467-9280.2006.01759.x

Bard, P. (1934). On Emotional Expression after Decortication with Some Remarks on Certain Theoretical Views: Part I. Psychol. Rev. 41 (4), 309–329. doi:10.1037/h0070765

Barrett, L. F., and Bliss‐Moreau, E. (2009). “Chapter 4 Affect as a Psychological Primitive,” in Advances in Experimental Social Psychology (Elsevier), 41, 167–218. doi:10.1016/S0065-2601(08)00404-8

Bartram, L., Patra, A., and Stone, M. (2017). Affective Color in Visualization. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 1364–1374. doi:10.1145/3025453.3026041

Belger, J., Krohn, S., Finke, C., Tromp, J., Klotzsche, F., Villringer, A., et al. (2019). “Immersive Virtual Reality for the Assessment and Training of Spatial Memory: Feasibility in Individuals with Brain Injury,” in 2019 International Conference on Virtual Rehabilitation. Tel Aviv, Israel: ICVR, 1–2. doi:10.1109/ICVR46560.2019.8994342

Bermudez i Badia, S., Quintero, L. V., Cameirao, M. S., Chirico, A., Triberti, S., Cipresso, P., et al. (2019). Toward Emotionally Adaptive Virtual Reality for Mental Health Applications. IEEE J. Biomed. Health Inform. 23 (5), 1877–1887. doi:10.1109/JBHI.2018.2878846

Blandón, D. Z., Muñoz, J. E., Lopez, D. S., and Gallo, O. H. (2016). “Influence of a BCI Neurofeedback Videogame in Children with ADHD. Quantifying the Brain Activity through an EEG Signal Processing Dedicated Toolbox,” in 2016 IEEE 11th Colombian Computing Conference. Popayán, Colombia: CCC, 1–8. doi:10.1109/ColumbianCC.2016.7750788

Blum, J., Rockstroh, C., and Göritz, A. S. (2020). Development and Pilot Test of a Virtual Reality Respiratory Biofeedback Approach. Appl. Psychophysiol Biofeedback 45 (3), 153–163. doi:10.1007/s10484-020-09468-x

Blum, J., Rockstroh, C., and Göritz, A. S. (2019). Heart Rate Variability Biofeedback Based on Slow-Paced Breathing with Immersive Virtual Reality Nature Scenery. Front. Psychol. 10, 2172. doi:10.3389/fpsyg.2019.02172

Blum, S., Jacobsen, N. S. J., Bleichner, M. G., and Debener, S. (2019). A Riemannian Modification of Artifact Subspace Reconstruction for EEG Artifact Handling. Front. Hum. Neurosci. 13. doi:10.3389/fnhum.2019.00141

Bradley, M. M., and Lang, P. J. (1994). Measuring Emotion: The Self-Assessment Manikin and the Semantic Differential. J. Behav. Ther. Exp. Psychiatry 25 (1), 49–59. doi:10.1016/0005-7916(94)90063-9

Bull, P. (1978). The Interpretation of Posture through an Alternative Methodology to Role Play. Br. J. Soc. Clin. Psychol. 17 (1), 1–6. doi:10.1111/j.2044-8260.1978.tb00888.x

Burdea, G. C., Cioi, D., Kale, A., Janes, W. E., Ross, S. A., and Engsberg, J. R. (2013). Robotics and Gaming to Improve Ankle Strength, Motor Control, and Function in Children with Cerebral Palsy-A Case Study Series. IEEE Trans. Neural Syst. Rehabil. Eng. 21 (2), 165–173. doi:10.1109/TNSRE.2012.2206055

Cacioppo, J. T., Gardner, W. L., and Berntson, G. G. (1997). Beyond Bipolar Conceptualizations and Measures: The Case of Attitudes and Evaluative Space. Pers Soc. Psychol. Rev. 1 (1), 3–25. doi:10.1207/s15327957pspr0101_2

Camgöz, N., Yener, C., and Güvenç, D. (2002). Effects of Hue, Saturation, and Brightness on Preference. Color Res. Appl. 27 (3), 199–207. doi:10.1002/col.10051

Cannon, W. B. (1927). The James-Lange Theory of Emotions: A Critical Examination and an Alternative Theory. Am. J. Psychol. 39 (1/4), 106. doi:10.2307/1415404

Cassani, R., Moinnereau, M.-A., and Falk, T. H. (2018). “A Neurophysiological Sensor-Equipped Head-Mounted Display for Instrumental QoE Assessment of Immersive Multimedia,” in 2018 Tenth International Conference on Quality of Multimedia Experience. Sardinia, Italy: QoMEX, 1–6. doi:10.1109/QoMEX.2018.8463422

Cavazza, M., Aranyi, G., Charles, F., Porteous, J., Gilroy, S., Klovatch, I., et al. (2014). Towards Empathic Neurofeedback for Interactive Storytelling. doi:10.4230/OASICS.CMN.2014.42

Lange, C. G., and James, W. (1922). The Emotions, Vol. 1. Baltimore, MD: Williams & Wilkins. doi:10.1037/10735-000

Colzato, L. S., Sellaro, R., and Beste, C. (2017). Darwin Revisited: The Vagus Nerve Is a Causal Element in Controlling Recognition of Other's Emotions. Cortex 92, 95–102. doi:10.1016/j.cortex.2017.03.017

Conati, C., and Zhou, X. (2002). “Modeling Students' Emotions from Cognitive Appraisal in Educational Games,” in Intelligent Tutoring Systems. Editors S. A. Cerri, G. Gouardères, and F. Paraguaçu (Springer Berlin Heidelberg), 944–954. doi:10.1007/3-540-47987-2_94

Cordaro, D. T., Keltner, D., Tshering, S., Wangchuk, D., and Flynn, L. M. (2016). The Voice Conveys Emotion in Ten Globalized Cultures and One Remote Village in Bhutan. Emotion 16 (1), 117–128. doi:10.1037/emo0000100

Cosmides, L., and Tooby, J. (1994). “Origins of Domain Specificity: The Evolution of Functional Organization,” in Mapping the Mind. Editors L. A. Hirschfeld, and S. A. Gelman. 1st ed. (Cambridge University Press), 85–116. doi:10.1017/CBO9780511752902.005

Darwin, C. (1872). The Expression of the Emotions in Man and Animals. John Murray. doi:10.1037/10001-000

Davidson, R. J. (1992). Emotion and Affective Style: Hemispheric Substrates. Psychol. Sci. 3 (1), 39–43. doi:10.1111/j.1467-9280.1992.tb00254.x

Desmet, P. M. A. (2015). Design for Mood: Twenty Activity-Based Opportunities to Design for Mood Regulation. Int. J. Des. 9 (2).

Desmet, P. M. A., Romero, N., and Vastenburg, M. H. (2016). Mood Measurement with Pick-A-Mood: Review of Current Methods and Design of a Pictorial Self-Report Scale. J. Des. Res. 14 (3), 241. doi:10.1504/JDR.2016.079751

Dimberg, U. (1982). Facial Reactions to Facial Expressions. Psychophysiology 19 (6), 643–647. doi:10.1111/j.1469-8986.1982.tb02516.x

Dimberg, U., Thunberg, M., and Elmehed, K. (2000). Unconscious Facial Reactions to Emotional Facial Expressions. Psychol. Sci. 11 (1), 86–89. doi:10.1111/1467-9280.00221

Dimberg, U., and Thunberg, M. (2012). Empathy, Emotional Contagion, and Rapid Facial Reactions to Angry and Happy Facial Expressions. PsyCh J. 1 (2), 118–127. doi:10.1002/pchj.4

Drossos, K., Floros, A., Giannakoulopoulos, A., and Kanellopoulos, N. (2015). Investigating the Impact of Sound Angular Position on the Listener Affective State. IEEE Trans. Affective Comput. 6 (1), 27–42. doi:10.1109/TAFFC.2015.2392768

Ebe, Y., and Umemuro, H. (2015). “Emotion Evoked by Texture and Application to Emotional Communication,” in Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, 1995–2000. doi:10.1145/2702613.2732768

Eiben, A. E., and Smith, J. E. (2015). Introduction to Evolutionary Computing. 2nd ed. Springer.

Ekman, P. (2006). Darwin and Facial Expression: A Century of Research in Review. Cambridge, MA: Malor Books/Ishk.

Ekman, P., and Friesen, W. V. (1971). Constants across Cultures in the Face and Emotion. J. Personal. Soc. Psychol. 17 (2), 124–129. doi:10.1037/h0030377

Ekman, P., and Friesen, W. V. (1976). Measuring Facial Movement. Environ. Psychol. Nonverbal Behav. 1 (1), 56–75. doi:10.1007/BF01115465

Feffer, M., Rudovic, O., and Picard, R. W. (2018). “A Mixture of Personalized Experts for Human Affect Estimation,” in Machine Learning and Data Mining in Pattern Recognition. Editor P. Perner (Springer International Publishing), 316–330.

Feng, C., Bartram, L., and Riecke, B. E. (2014). “Evaluating Affective Features of 3D Motionscapes,” in Proceedings of the ACM Symposium on Applied Perception (SAP ’14). Vancouver, Canada, 23–30. doi:10.1145/2628257.2628264

Fernández-Sotos, A., Fernández-Caballero, A., and Latorre, J. M. (2016). Influence of Tempo and Rhythmic Unit in Musical Emotion Regulation. Front. Comput. Neurosci. 10, 80. doi:10.3389/fncom.2016.00080

Fisher, R. J. (1993). Social Desirability Bias and the Validity of Indirect Questioning. J. Consum Res. 20 (2), 303. doi:10.1086/209351

Fuhrman, O., and Boroditsky, L. (2010). Cross-Cultural Differences in Mental Representations of Time: Evidence from an Implicit Nonlinguistic Task. Cogn. Sci. 34 (8), 1430–1451. doi:10.1111/j.1551-6709.2010.01105.x

Gaebler, M., Muñoz, J. E., de Mooji, J., Quintero, L. E., Tromp, J., Klotzsche, F., et al. (2021). Excite-O-Meter. Available at: https://www.exciteometer.eu/. (Accessed July 28, 2021).

Gao, X.-P., Xin, J. H., Sato, T., Hansuebsai, A., Scalzo, M., Kajiwara, K., et al. (2007). Analysis of Cross-Cultural Color Emotion. Color Res. Appl. 32 (3), 223–229. doi:10.1002/col.20321

Garcia, J. A., and Navarro, K. F. (2014). “The Mobile RehAppTM: An AR-Based Mobile Game for Ankle Sprain Rehabilitation,” in 2014 IEEE 3rd International Conference on Serious Games and Applications for Health. Rio de Janeiro, Brazil: SeGAH, 1–6. doi:10.1109/SeGAH.2014.7067087

Georgiou, T., and Demiris, Y. (2017). Adaptive User Modelling in Car Racing Games Using Behavioural and Physiological Data. User Model. User-adap Inter. 27 (2), 267–311. doi:10.1007/s11257-017-9192-3

Gerardi, G. M., and Gerken, L. (1995). The Development of Affective Responses to Modality and Melodic Contour. Music Perception: Interdiscip. J. 12 (3), 279–290. doi:10.2307/40286184

Greinacher, R., Kojic, T., Meier, L., Parameshappa, R. G., Möller, S., and Voigt-Antons, J.-N. (2020). “Impact of Tactile and Visual Feedback on Breathing Rhythm and User Experience in VR Exergaming,” in 2020 Twelfth International Conference on Quality of Multimedia Experience. Athlone, Ireland: QoMEX, 1–6. doi:10.1109/QoMEX48832.2020.9123141

Greinacher, R., and Voigt-Antons, J.-N. (2020). “Accuracy Assessment of ARKit 2 Based Gaze Estimation,” in Human-Computer Interaction. Design and User Experience. Editor M. Kurosu (Springer International Publishing), Vol. 12181, 439–449. doi:10.1007/978-3-030-49059-1_32

Haar, A. J. H., Jain, A., Schoeller, F., and Maes, P. (2020). Augmenting Aesthetic Chills Using a Wearable Prosthesis Improves Their Downstream Effects on Reward and Social Cognition. Sci. Rep. 10 (1), 21603. doi:10.1038/s41598-020-77951-w

Harischandra, J., and Perera, M. U. S. (2012). “Intelligent Emotion Recognition System Using Brain Signals (EEG),” in 2012 IEEE-EMBS Conference on Biomedical Engineering and Sciences, 454–459. doi:10.1109/IECBES.2012.6498050

Hernandez, J., Li, Y., Rehg, J. M., and Picard, R. W. (2014). “BioGlass: Physiological Parameter Estimation Using a Head-Mounted Wearable Device,” in 2014 4th International Conference on Wireless Mobile Communication and Healthcare - Transforming Healthcare Through Innovations in Mobile and Wireless Technologies. Athens, Greece: MOBIHEALTH, 55–58. doi:10.1109/MOBIHEALTH.2014.7015908

Hofmann, S. M., Klotzsche, F., Mariola, A., Nikulin, V. V., Villringer, A., and Gaebler, M. (2018). “Decoding Subjective Emotional Arousal during a Naturalistic VR Experience from EEG Using LSTMs,” in 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality. (Taichung, Taiwan: AIVR), 128–131. doi:10.1109/AIVR.2018.00026

Hoppe, S., Loetscher, T., Morey, S. A., and Bulling, A. (2018). Eye Movements during Everyday Behavior Predict Personality Traits. Front. Hum. Neurosci. 12, 105. doi:10.3389/fnhum.2018.00105

Hupont, I., Gracia, J., Sanagustin, L., and Gracia, M. A. (2015). “How Do New Visual Immersive Systems Influence Gaming QoE? A Use Case of Serious Gaming with Oculus Rift,” in 2015 Seventh International Workshop on Quality of Multimedia Experience. Costa Navarino, Greece: QoMEX, 1–6. doi:10.1109/QoMEX.2015.7148110

Huster, R. J., Stevens, S., Gerlach, A. L., and Rist, F. (2009). A Spectralanalytic Approach to Emotional Responses Evoked through Picture Presentation. Int. J. Psychophysiology 72 (2), 212–216. doi:10.1016/j.ijpsycho.2008.12.009

Jaques, P. A., and Vicari, R. M. (2007). A BDI Approach to Infer Student's Emotions in an Intelligent Learning Environment. Comput. Edu. 49 (2), 360–384. doi:10.1016/j.compedu.2005.09.002

Kapur, A., Kapur, A., Virji-Babul, N., Tzanetakis, G., and Driessen, P. F. (2005). “Gesture-Based Affective Computing on Motion Capture Data,” in Affective Computing and Intelligent Interaction. Editors J. Tao, T. Tan, and R. W. Picard (Springer Berlin Heidelberg), 1–7. doi:10.1007/11573548_1

Kitson, A., DiPaola, S., and Riecke, B. E. (2019). “Lucid Loop,” in Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1–6. doi:10.1145/3290607.3312952

Klug, M., and Gramann, K. (2020). Identifying Key Factors for Improving ICA‐based Decomposition of EEG Data in Mobile and Stationary Experiments. Eur. J. Neurosci. doi:10.1111/ejn.14992

Koelstra, S., Muhl, C., Soleymani, M., Lee, J.-S., Yazdani, A., Ebrahimi, T., et al. (2012). DEAP: A Database for Emotion Analysis; Using Physiological Signals. IEEE Trans. Affective Comput. 3 (1), 18–31. doi:10.1109/T-AFFC.2011.15

Koenig, S. T., Crucian, G. P., Duenser, A., Bartneck, C., and Dalrymple-Alford, J. C. (2011). Validity Evaluation of a Spatial Memory Task in Virtual Environments.

Kothe, C. A., and Makeig, S. (2013). BCILAB: a Platform for Brain-Computer Interface Development. J. Neural Eng. 10 (5), 056014. doi:10.1088/1741-2560/10/5/056014

Lang, P. J., Bradley, M. M., and Cuthbert, B. N. (2008). International Affective Picture System (IAPS): Affective Ratings of Pictures and Instruction Manual. Gainesville, FL: University of Florida. Technical Report A-8

Leslie, G., Picard, R., and Lui, S. (2015). An EEG and Motion Capture Based Expressive Music Interface for Affective Neurofeedback. doi:10.13140/RG.2.1.4378.6081

Li, Z., Tong, L., Wang, L., Li, Y., He, W., Guan, M., et al. (2016). Self-regulating Positive Emotion Networks by Feedback of Multiple Emotional Brain States Using Real-Time fMRI. Exp. Brain Res. 234 (12), 3575–3586. doi:10.1007/s00221-016-4744-z

Lipson-Smith, R., Bernhardt, J., Zamuner, E., Churilov, L., Busietta, N., and Moratti, D. (2020). Exploring Colour in Context Using Virtual Reality: Does a Room Change How You Feel? Virtual Reality. doi:10.1007/s10055-020-00479-x

Lockyer, M., Bartram, L., and Riecke, B. E. (2011). Simple Motion Textures for Ambient Affect. Comput. Aesthetics Graphics, Visualization, 89–96. doi:10.2312/COMPAESTH/COMPAESTH11/089-096

Lucassen, M. P., Gevers, T., and Gijsenij, A. (2011). Texture Affects Color Emotion. Color Res. Appl. 36 (6), 426–436. doi:10.1002/col.20647

Makeig, S., Jung, T.-P., Bell, A. J., Ghahremani, D., and Sejnowski, T. J. (1997). Blind Separation of Auditory Event-Related Brain Responses into Independent Components. Proc. Natl. Acad. Sci. 94 (20), 10979–10984. doi:10.1073/pnas.94.20.10979

Malik, M., Bigger, J. T., Camm, A. J., Kleiger, R. E., Malliani, A., Moss, A. J., et al. (1996). Heart Rate Variability: Standards of Measurement, Physiological Interpretation, and Clinical Use. Eur. Heart J. 17 (3), 354–381. doi:10.1093/oxfordjournals.eurheartj.a014868

Martínez-Tejada, L., Puertas-González, A., Yoshimura, N., and Koike, Y. (2021). Exploring EEG Characteristics to Identify Emotional Reactions under Videogame Scenarios. Brain Sci. 11 (3), 378. doi:10.3390/brainsci11030378

Mattek, A. M., Wolford, G. L., and Whalen, P. J. (2017). A Mathematical Model Captures the Structure of Subjective Affect. Perspect. Psychol. Sci. 12 (3), 508–526. doi:10.1177/1745691616685863

Mavridou, I., McGhee, J. T., Hamedi, M., Fatoorechi, M., Cleal, A., Ballaguer-Balester, E., et al. (2017). “FACETEQ Interface Demo for Emotion Expression in VR,” in 2017 IEEE Virtual Reality (Los Angeles, California: VR), 441–442. doi:10.1109/VR.2017.7892369

McDuff, D., Kaliouby, R. E., and Picard, R. W. (2012). Crowdsourcing Facial Responses to Online Videos. IEEE Trans. Affective Comput. 3 (4), 456–468. doi:10.1109/T-AFFC.2012.19

Mullen, T. R., Kothe, C. A. E., Chi, Y. M., Ojeda, A., Kerth, T., Makeig, S., et al. (2015). Real-time Neuroimaging and Cognitive Monitoring Using Wearable Dry EEG. IEEE Trans. Biomed. Eng. 62 (11), 2553–2567. doi:10.1109/TBME.2015.2481482

Norman, J. F., Beers, A., and Phillips, F. (2010). Fechner's Aesthetics Revisited. Seeing and Perceiving 23 (3), 263–271. doi:10.1163/187847510X516412

Ortony, A., Clore, G. L., and Collins, A. (1988). The Cognitive Structure of Emotions. Cambridge University Press.

Pagani, M., Lombardi, F., Guzzetti, S., Sandrone, G., Rimoldi, O., Malfatto, G., et al. (1984). Power Spectral Density of Heart Rate Variability as an Index of Sympatho-Vagal Interaction in Normal and Hypertensive Subjects. J. Hypertens. Suppl. 2 (3), S383–S385.

Palmer, S. E., and Schloss, K. B. (2010). An Ecological Valence Theory of Human Color Preference. Proc. Natl. Acad. Sci. 107 (19), 8877–8882. doi:10.1073/pnas.0906172107

Peperkorn, H. M., Alpers, G. W., and Mühlberger, A. (2014). Triggers of Fear: Perceptual Cues versus Conceptual Information in Spider Phobia. J. Clin. Psychol. 70 (7), 704–714. doi:10.1002/jclp.22057

Perkis, A., Timmerer, C., Baraković, S., Husić, J. B., Bech, S., Bosse, S., et al. (2020). QUALINET White Paper on Definitions of Immersive Media Experience (IMEx).

Petri, T. (2009). Exploring Relationships between Audio Features and Emotion in Music. Front. Hum. Neurosci. 3. doi:10.3389/conf.neuro.09.2009.02.033

Pfurtscheller, G., and Lopes da Silva, F. H. (1999). Event-related EEG/MEG Synchronization and Desynchronization: Basic Principles. Clin. Neurophysiol. 110 (11), 1842–1857. doi:10.1016/S1388-2457(99)00141-8

Picard, R. W., Vyzas, E., and Healey, J. (2001). Toward Machine Emotional Intelligence: Analysis of Affective Physiological State. IEEE Trans. Pattern Anal. Machine Intell. 23 (10), 1175–1191. doi:10.1109/34.954607

Pinilla, A., Tamayo, R. M., and Neira, J. (2020). How Do Induced Affective States Bias Emotional Contagion to Faces? A Three-Dimensional Model. Front. Psychol. 11. doi:10.3389/fpsyg.2020.00097

Pion-Tonachini, L., Hsu, S.-H., Makeig, S., Jung, T.-P., and Cauwenberghs, G. (2015). “Real-time EEG Source-Mapping Toolbox (REST): Online ICA and Source Localization,” in 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Milano, Italy: EMBC, 4114–4117. doi:10.1109/EMBC.2015.7319299

Piwek, L., Pollick, F., and Petrini, K. (2015). Audiovisual Integration of Emotional Signals from Others' Social Interactions. Front. Psychol. 9. doi:10.3389/fpsyg.2015.00611

Plutchik, R. (1982). A Psychoevolutionary Theory of Emotions. Soc. Sci. Inf. 21 (4–5), 529–553. doi:10.1177/053901882021004003

Polzehl, T., Schmitt, A., Metze, F., and Wagner, M. (2011). Anger Recognition in Speech Using Acoustic and Linguistic Cues. Speech Commun. 53 (9–10), 1198–1209. doi:10.1016/j.specom.2011.05.002

Porcu, S., Floris, A., Voigt-Antons, J.-N., Atzori, L., and Möller, S. (2020). Estimation of the Quality of Experience during Video Streaming from Facial Expression and Gaze Direction. IEEE Trans. Netw. Serv. Manage. 17, 2702–2716. doi:10.1109/TNSM.2020.3018303

Putnam, H. (1967). “The Nature of Mental States,” in Art, Mind, and Religion. Editors W. H. Capitan, and D. D. Merrill (Pittsburgh, PA: University of Pittsburgh Press), 1–223.

Raffe, W. L., Zambetta, F., Li, X., and Stanley, K. O. (2015). Integrated Approach to Personalized Procedural Map Generation Using Evolutionary Algorithms. IEEE Trans. Comput. Intell. AI Games 7 (2), 139–155. doi:10.1109/TCIAIG.2014.2341665

Ray, W., and Cole, H. (1985). EEG Alpha Activity Reflects Attentional Demands, and Beta Activity Reflects Emotional and Cognitive Processes. Science 228 (4700), 750–752. doi:10.1126/science.3992243

Renard, Y., Lotte, F., Gibert, G., Congedo, M., Maby, E., Delannoy, V., et al. (2010). OpenViBE: An Open-Source Software Platform to Design, Test, and Use Brain-Computer Interfaces in Real and Virtual Environments. Presence: Teleoperators and Virtual Environments 19 (1), 35–53. doi:10.1162/pres.19.1.35

Reuderink, B., Mühl, C., and Poel, M. (2013). Valence, Arousal and Dominance in the EEG during Game Play. Int. J. Auton. Adapt. Commun. Syst. 6 (1), 45. doi:10.1504/IJAACS.2013.050691

Robitaille, P., and McGuffin, M. J. (2019). Increased Affect-Arousal in VR Can Be Detected from Faster Body Motion with Increased Heart Rate. Proc. ACM SIGGRAPH Symp. Interactive 3D Graphics Games, 1–6. doi:10.1145/3306131.3317022

Russell, J. A. (1980). A Circumplex Model of Affect. J. Personal. Soc. Psychol. 39 (6), 1161–1178. doi:10.1037/h0077714

Ryali, C. K., Goffin, S., Winkielman, P., and Yu, A. J. (2020). From Likely to Likable: The Role of Statistical Typicality in Human Social Assessment of Faces. Proc. Natl. Acad. Sci. USA 117 (47), 29371–29380. doi:10.1073/pnas.1912343117

Schachter, S., and Singer, J. (1962). Cognitive, Social, and Physiological Determinants of Emotional State. Psychol. Rev. 69 (5), 379–399. doi:10.1037/h0046234

Scherer, K. R., and Oshinsky, J. S. (1977). Cue Utilization in Emotion Attribution from Auditory Stimuli. Motiv. Emot. 1 (4), 331–346. doi:10.1007/BF00992539

Schoeller, F., Bertrand, P., Gerry, L. J., Jain, A., Horowitz, A. H., and Zenasni, F. (2019b). Combining Virtual Reality and Biofeedback to Foster Empathic Abilities in Humans. Front. Psychol. 9, 2741. doi:10.3389/fpsyg.2018.02741

Schoeller, F., Haar, A. J. H., Jain, A., and Maes, P. (2019a). Enhancing Human Emotions with Interoceptive Technologies. Phys. Life Rev. 31, 310–319. doi:10.1016/j.plrev.2019.10.008

Semertzidis, N., Scary, M., Andres, J., Dwivedi, B., Kulwe, Y. C., Zambetta, F., et al. (2020). Neo-Noumena. Proc. 2020 CHI Conf. Hum. Factors Comput. Syst., 1–13. doi:10.1145/3313831.3376599

Shiban, Y., Peperkorn, H., Alpers, G. W., Pauli, P., and Mühlberger, A. (2016). Influence of Perceptual Cues and Conceptual Information on the Activation and Reduction of Claustrophobic Fear. J. Behav. Ther. Exp. Psychiatry 51, 19–26. doi:10.1016/j.jbtep.2015.11.002

Shiban, Y., Reichenberger, J., Neumann, I. D., and Mühlberger, A. (2015). Social Conditioning and Extinction Paradigm: A Translational Study in Virtual Reality. Front. Psychol. 6. doi:10.3389/fpsyg.2015.00400

Shiota, M. N., and Kalat, J. W. (2012). Emotion. 2nd ed. Belmont, CA: Wadsworth, Cengage Learning.

Sitaram, R., Lee, S., Ruiz, S., Rana, M., Veit, R., and Birbaumer, N. (2011). Real-time Support Vector Classification and Feedback of Multiple Emotional Brain States. NeuroImage 56 (2), 753–765. doi:10.1016/j.neuroimage.2010.08.007

Sutherland, M. R., and Mather, M. (2012). Negative Arousal Amplifies the Effects of Saliency in Short-Term Memory. Emotion 12 (6), 1367–1372. doi:10.1037/a0027860

Tajadura-Jiménez, A., Larsson, P., Väljamäe, A., Västfjäll, D., and Kleiner, M. (2010b). When Room Size Matters: Acoustic Influences on Emotional Responses to Sounds. Emotion 10 (3), 416–422. doi:10.1037/a0018423

Tajadura-Jiménez, A., Väljamäe, A., Asutay, E., and Västfjäll, D. (2010a). Embodied Auditory Perception: The Emotional Impact of Approaching and Receding Sound Sources. Emotion 10 (2), 216–229. doi:10.1037/a0018422

Tajadura-Jiménez, A., Väljamäe, A., and Västfjäll, D. (2008). Self-Representation in Mediated Environments: The Experience of Emotions Modulated by Auditory-Vibrotactile Heartbeat. CyberPsychology Behav. 11 (1), 33–38. doi:10.1089/cpb.2007.0002

Thayer, J. F., Hansen, A. L., Saus-Rose, E., and Johnsen, B. H. (2009). Heart Rate Variability, Prefrontal Neural Function, and Cognitive Performance: The Neurovisceral Integration Perspective on Self-Regulation, Adaptation, and Health. Ann. Behav. Med. 37 (2), 141–153. doi:10.1007/s12160-009-9101-z

Unpingco, J. (2014). Python for Signal Processing: Featuring IPython Notebooks. Springer

Valdez, P., and Mehrabian, A. (1994). Effects of Color on Emotions. J. Exp. Psychol. Gen. 123 (4), 394–409. doi:10.1037/0096-3445.123.4.394

Vogt, T., André, E., and Bee, N. (2008). “EmoVoice—A Framework for Online Recognition of Emotions from Voice,” in Perception in Multimodal Dialogue Systems. Editors E. André, L. Dybkjær, W. Minker, H. Neumann, R. Pieraccini, and M. Weber (Springer Berlin Heidelberg), 188–199.

Voigt-Antons, J.-N., Lehtonen, E., Palacios, A. P., Ali, D., Kojic, T., and Möller, S. (2020). “Comparing Emotional States Induced by 360° Videos via Head-Mounted Display and Computer Screen,” in 2020 Twelfth International Conference on Quality of Multimedia Experience. Athlone, Ireland: QoMEX, 1–6. doi:10.1109/QoMEX48832.2020.9123125

Watson, D., Clark, L. A., and Tellegen, A. (1988). Development and Validation of Brief Measures of Positive and Negative Affect: The PANAS Scales. J. Personal. Soc. Psychol. 54 (6), 1063–1070. doi:10.1037/0022-3514.54.6.1063

Williams, D., Kirke, A., Miranda, E., Daly, I., Hwang, F., Weaver, J., et al. (2017). Affective Calibration of Musical Feature Sets in an Emotionally Intelligent Music Composition System. ACM Trans. Appl. Percept. 14 (3), 1–13. doi:10.1145/3059005

Wilms, L., and Oberfeld, D. (2018). Color and Emotion: Effects of Hue, Saturation, and Brightness. Psychol. Res. 82 (5), 896–914. doi:10.1007/s00426-017-0880-8

Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G., and Vaughan, T. M. (2002). Brain-computer Interfaces for Communication and Control. Clin. Neurophysiol. 113 (6), 767–791. doi:10.1016/S1388-2457(02)00057-3

Wundt, W. (1897). Outlines of Psychology (C. H. Judd, Trans.). London, England: Williams and Norgate. doi:10.1037/12908-000

Yannakakis, G. N., and Togelius, J. (2011). Experience-Driven Procedural Content Generation. IEEE Trans. Affective Comput. 2 (3), 147–161. doi:10.1109/T-AFFC.2011.6

Zander, T. O., and Kothe, C. (2011). Towards Passive Brain-Computer Interfaces: Applying Brain-Computer Interface Technology to Human-Machine Systems in General. J. Neural Eng. 8 (2), 025005. doi:10.1088/1741-2560/8/2/025005

Keywords: virtual reality, affect, emotion, electrophysiology, visual design, visualization

Citation: Pinilla A, Garcia J, Raffe W, Voigt-Antons J-N, Spang RP and Möller S (2021) Affective Visualization in Virtual Reality: An Integrative Review. Front. Virtual Real. 2:630731. doi: 10.3389/frvir.2021.630731

Received: 18 November 2020; Accepted: 23 July 2021;
Published: 06 August 2021.

Edited by:

Doron Friedman, Interdisciplinary Center Herzliya, Israel

Reviewed by:

Louis Nisiotis, University of Central Lancashire, Cyprus
Anja S. Göritz, University of Freiburg, Germany

Copyright © 2021 Pinilla, Garcia, Raffe, Voigt-Antons, Spang and Möller. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Andres Pinilla, andres.pinilla@qu.tu-berlin.de

ORCID: Andres Pinilla, orcid.org/0000-0002-0812-7896; Jaime Garcia, orcid.org/0000-0001-5718-1605; William Raffe, orcid.org/0000-0001-5310-0943; Jan-Niklas Voigt-Antons, orcid.org/0000-0002-2786-9262; Robert P. Spang, orcid.org/0000-0001-6580-9060; Sebastian Möller, orcid.org/0000-0003-3057-0760
