
ORIGINAL RESEARCH article

Front. Psychol., 26 May 2020
Sec. Psychology of Language

From Abstract Symbols to Emotional (In-)Sights: An Eye Tracking Study on the Effects of Emotional Vignettes and Pictures

  • 1Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
  • 2Center for Cognitive Neuroscience Berlin, Freie Universität Berlin, Berlin, Germany

Reading is known to be a highly complex, emotion-inducing process, usually involving connected and cohesive sequences of sentences and paragraphs. However, most empirical results, especially from studies using eye tracking, are either restricted to simple linguistic materials (e.g., isolated words, single sentences) or disregard valence-driven effects. The present study addressed the need for ecologically valid stimuli by examining the emotion potential of and reading behavior in emotional vignettes, which are often used in applied psychological contexts and discourse comprehension. To allow for a cross-domain comparison in the area of emotion induction, negatively and positively valenced vignettes were constructed based on pre-selected emotional pictures from the Nencki Affective Picture System (NAPS; Marchewka et al., 2014). We collected ratings of perceived valence and arousal for both material groups and recorded eye movements of 42 participants during reading and picture viewing. Linear mixed-effects models were used to analyze effects of valence (i.e., valence category, valence rating) and stimulus domain (i.e., textual, pictorial) on ratings of perceived valence and arousal, eye movements in reading, and eye movements in picture viewing. Results supported the success of our experimental manipulation: emotionally positive stimuli (i.e., vignettes, pictures) were perceived more positively and as less arousing than emotionally negative ones. The cross-domain comparison indicated that vignettes are able to induce stronger valence effects than their pictorial counterparts; no differences between vignettes and pictures regarding effects on perceived arousal were found. Analyses of eye movements in reading replicated results from experiments using isolated words and sentences: perceived positive text valence attracted shorter reading times than perceived negative valence at both the supralexical and lexical level. In line with previous findings, no emotion effects on eye movements in picture viewing were found. This is the first eye tracking study reporting superior valence effects for vignettes compared to pictures and valence-specific effects on eye movements in reading at the supralexical level.

Introduction

Imagine a future where the best-selling books aren’t the sole product of an author’s mind but the result of a machine-learning-assisted approach. A future with personalized phrases and e-books able to predict your reading behavior. What would be the key to a future like this? Concerning psychological reading research, it would certainly require a stronger focus on ecologically valid study materials (Jacobs, 2015a; Pinheiro et al., 2017; Xue et al., 2019). In this context, most empirical results, especially from studies using eye tracking, are limited to the level of single words or experimentally controlled sentences (Clifton et al., 2007; Radach et al., 2008; Radach and Kennedy, 2013; Wallot et al., 2013). By contrast, reading as one of the essential daily activities commonly involves context information and goes along with emotional processes (e.g., Jacobs, 2011; Mar et al., 2011; Bohn-Gettler, 2019). This leads directly to the second key point: the future scenario calls for a better understanding of affective responses elicited by ecologically valid text stimuli. In discourse comprehension, many studies made use of textual materials and indicated, for example, that the emotions of protagonists were represented in situation models even when not explicitly mentioned (e.g., Gernsbacher et al., 1992; Gygax et al., 2003, 2004, 2007). However, such studies largely neglected both readers’ emotions and valence-driven effects.

The scientific investigation of affective processes necessitates the availability of standardized stimuli that are reliably able to elicit emotions under controlled, experimental conditions. At present, researchers have access to a variety of cross-validated, international databases addressing different perceptual modalities and providing normative ratings. However, verbal stimulus sets are again restricted to the level of words (e.g., Bradley and Lang, 1999; Redondo et al., 2007; Võ et al., 2009; Eilola and Havelka, 2010; Briesemeister et al., 2011; Soares et al., 2012; Moors et al., 2013; Söderholm et al., 2013; Warriner et al., 2013; Montefinese et al., 2014; Schmidtke et al., 2014; Riegel et al., 2015; Imbir, 2016a) or single sentences (Bradley and Lang, 2007; Imbir, 2016b; Pinheiro et al., 2017). In addition, emotion induction methods have been dominated by visual stimuli such as pictures (Dan-Glauser and Scherer, 2011), and little attention has been paid to the comparison of verbal and visual stimulus domains. For example, early meta-analyses on the efficiency of emotion induction procedures neither differentiated between stories and films nor included static pictures (Gerrards-Hesse et al., 1994; Westermann et al., 1996). Even when differentiated and included, the heterogeneous definition of vignettes (cf. Siedlecka and Denson, 2019) made it difficult to draw conclusions about their suitability.

With respect to simpler linguistic materials, Schlochtermeier et al. (2013) were able to highlight potentially beneficial effects of words and phrases on evaluative judgments. More specifically, their behavioral results revealed stronger valence ratings for verbal compared to pictorial stimuli. Additionally, no differences in reaction times and arousal ratings were reported, contradicting the commonly assumed privileged processing of pictures within the area of emotion induction (e.g., Seifert, 1997; Azizian et al., 2006; Kensinger and Schacter, 2006). Similarly, Bayer and Schacht (2014) were able to show that words, faces, and pictures alike elicit early and late emotion effects as indicated by event-related potentials. Furthermore, words and pictorial materials were perceived as comparably strong in their emotional valence and arousal.

The present study was designed to face some of the aforementioned challenges by introducing a set of ecologically valid, emotion-inducing vignettes verbalizing the semantic content of pre-selected pictures from the Nencki Affective Picture System (NAPS; Marchewka et al., 2014). For both textual and pictorial stimuli, eye movements, as a sensitive measure of cognitive and affective processes (Rayner, 1998, 2009), were recorded and analyzed. Accordingly, our study aimed at extending prior findings with two main objectives: (1) comparing more complex verbal (i.e., textual) and visual affective materials in the area of emotion induction, and (2) examining the influence of emotional content on reading behavior in ecologically valid texts.

The article is organized as follows. After reviewing past research on emotion in reading and picture viewing, we first analyze effects of Valence Category (i.e., positive, negative) and Stimulus Domain (i.e., textual, pictorial) on ratings of Perceived Valence and Arousal. Second, we examine the influence of (perceived) textual valence on reading times of both supralexical (i.e., text level) and lexical units (i.e., word level). Lastly, we illustrate the role of pictorial valence in the execution of fixations (i.e., Mean Fixation Duration, Total Number of Fixations) and saccades (i.e., Mean Saccade Amplitude). While eye tracking data are reported for both stimulus domains, the thematic focus and innovation of the present article lie firmly on the emotional processes evoked by linguistic stimuli.

Emotion in Reading and Picture Viewing

Emotion in Reading

The expression of emotion belongs to the crucial functions of human language (Bühler, 1934; Koelsch et al., 2015). However, for a long time affective processes during reading have only played a minor role in empirical research (Jacobs, 2011). Nowadays, there is empirical evidence supporting the behavioral and neuronal effects of the emotion potential of both simple and complex linguistic materials (e.g., Citron, 2012; Hsu et al., 2014, 2015a,b,c; Jacobs et al., 2015, 2016a; Lüdtke and Jacobs, 2015). More precisely, differences in the processing of neutral and emotional verbal stimuli are emphasized at the lexical (Kuchinke et al., 2005; Kissler et al., 2006; Herbert et al., 2008; Hofmann et al., 2009; Kousta et al., 2009; Kissler and Herbert, 2013; Recio et al., 2014), sentential (Jiang et al., 2014; Ding et al., 2015; Knickerbocker et al., 2015; Lüdtke and Jacobs, 2015), and text levels (Altmann et al., 2012, 2014; Hsu et al., 2015a,b,c; Lehne et al., 2015; Ballenghein et al., 2019). According to Lüdtke and Jacobs (2015), emotional words are characterized by their affective meaning or explicit expression of emotion. In comparison to their neutral counterparts, they are assumed to possess privileged access to attentional and cognitive resources (e.g., Kuchinke et al., 2005; Kousta et al., 2009; Ding et al., 2015).

Supporting evidence for this was first provided by behavioral studies and experiments using EEG and fMRI (Citron, 2012, for review). Interestingly, eye movement studies also indicated differences in the processing of emotional compared to neutral words (Scott et al., 2012; Knickerbocker et al., 2015). These studies examined emotionally valenced (i.e., positive, negative) and neutral target words embedded in a single sentence structure. Word frequency was considered as an additional manipulation by Scott et al. (2012). In both experiments, early measures of processing (e.g., single and first fixation duration) indicated faster reading of emotional words compared to neutral ones. Moreover, emotional valence seemed to be of similar advantage at later processing stages, as reflected in shorter total reading times, fewer regressions, and shorter second-pass reading (Knickerbocker et al., 2015). However, valence-specific effects remained unexplored in this comparison. Both studies replicated results from EEG and fMRI studies indicating that emotional words are easier to process than their neutral counterparts while highlighting some differences between emotionally positive and negative words. In this context, modulatory effects of word frequency were reported (Scott et al., 2012). More specifically, negative valence was only found to be beneficial when targets were of low frequency. In contrast, processing advantages of emotionally positive words emerged robustly under all experimental conditions.

Following a dimensional approach to emotion, a word’s emotion potential can be empirically and computationally quantified in a two-dimensional space, with valence representing its polarity and arousal its intensity (Võ et al., 2006, 2009; Scott et al., 2012; Recio et al., 2014; Jacobs, 2019). Since the two variables are strongly intercorrelated, high arousal commonly co-occurs with extreme valence (Bradley and Lang, 1994; Lang et al., 2008; Hofmann et al., 2009; Citron, 2012; Jacobs et al., 2015). Moreover, emotionally negative words tend to reach higher arousal values than emotionally positive ones (e.g., Võ et al., 2009). Concerning valence-specific effects, positive events (e.g., words, sentences) are often associated with accelerated reactions and facilitated word processing (Kousta et al., 2009; Briesemeister et al., 2011; Lüdtke and Jacobs, 2015). In the case of negative valence, the oftentimes inconclusive effects are mainly explained by the interactive relationship with the dimension of arousal. Thus, emotionally negative words are mainly associated with shorter reaction times when they have high arousal values (Larsen et al., 2008; Hofmann et al., 2009; Recio et al., 2014). In sum, the current evidence supports the notion of superior processing of emotionally positive and high-arousal negative words.

Can we expect similar results when manipulating valence at the supralexical, textual level? According to Bestgen (1994), we can act on the assumption that there is a high correlation between the different processing levels: collecting valence ratings of four texts as well as of their constituting sentences and words, he found significant correlations between the three processing levels. Similarly, Whissell (2003) demonstrated that valence and arousal ratings of words from the Dictionary of Affect in Language (Whissell and Dewson, 1986) can be used as an estimator of the affective tone of excerpts of romantic poetry. Finally, Hsu et al. (2015b) computed mean and spread measures of valence and arousal for the words of 120 text passages from the Harry Potter novels. Their results indicated that mean lexical valence values can account for approximately 28% of the variance in subjective valence ratings of the text units. Taken together, previous results suggest that the valence of supralexical units like the present vignettes can be – in its simplest form – predicted (at least approximately, cf. Lüdtke and Jacobs, 2015) as a function of the valence of their constituting words (Jacobs, 2015b).
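This simplest form of lexical prediction can be sketched in a few lines: a text’s estimated valence is the mean valence of its rated content words. The word norms and example sentences below are hypothetical illustration values, not entries from any published database.

```python
# Hypothetical lexical valence norms on a -3 (negative) to +3 (positive)
# scale; real studies would draw these from a normative database.
WORD_VALENCE = {
    "sunny": 2.1, "beach": 1.8, "laughed": 2.4,
    "storm": -1.5, "destroyed": -2.6, "alone": -1.9,
}

def text_valence(words):
    """Mean lexical valence over the words covered by the norms."""
    rated = [WORD_VALENCE[w] for w in words if w in WORD_VALENCE]
    return sum(rated) / len(rated) if rated else 0.0

positive_vignette = "you laughed on the sunny beach".split()
negative_vignette = "the storm destroyed everything and you were alone".split()

print(text_valence(positive_vignette), text_valence(negative_vignette))
```

A mean-based estimator of this kind captures the reported ~28% of rating variance only approximately, since it ignores word order, negation, and spread measures.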

Vignettes as Controlled, More Natural Reading Material

As already pointed out, a majority of reading studies fails to go beyond the level of single words or non-literary constructed sentences, i.e., so-called textoids (Bailey and Zacks, 2011). Although this experimental approach offers the possibility to test specific assumptions, the results can only be generalized to a limited extent (Clifton and Staub, 2011). Frazier and Rayner (1982) were already able to show that fixation times are influenced by phrase structure. The context in which a word is presented plays a similarly crucial role (Kuperman et al., 2010; Clifton and Staub, 2011; Wallot et al., 2013). Hence, the use of single words neglects the entangled effects of both syntactic and supralexical semantic features (Boston et al., 2008). To overcome such limitations, existing narratives became an attractive alternative. In this regard, short stories (e.g., Altmann et al., 2012, 2014) and fairy tales (Wallentin et al., 2011), poems (Lüdtke et al., 2014; Jacobs et al., 2016b; Xue et al., 2019), excerpts of books such as the Harry Potter novels (e.g., Hsu et al., 2014, 2015a,b,c), “The House Of The Scorpion” (Wallot et al., 2013), “The Sandman” (Jacobs, 2015b; Lehne et al., 2015; Jacobs and Lüdtke, 2017), “One Boy’s Day” (Speer et al., 2009), “Dubliners” (Cupchik et al., 1998), “Hurricane Hazel” (Cupchik and Laszlo, 1994), and newspaper articles (Kennedy and Pynte, 2005) have been used as objects of reading research. While the results thus obtained might be of high ecological validity, they might as well leave ample degrees of freedom for their interpretation (cf. Clifton and Staub, 2011).

What if we seek to combine the benefits of both short textoids and natural reading materials? In case of prose, such a compromise can be found in the construction of vignettes. The term encompasses short, written descriptions of fictitious situations and/or persons (Poulou, 2001). Vignettes usually contain background information and offer readers a base for evaluative judgments (Huebner, 1991; Poulou, 2001). They have been used in the context of teaching (Finch, 1987; Brophy and McCaslin, 1992; Gavrilidou et al., 1993; Poulou, 2001), in appraisal research (Robinson and Clore, 2001), cognitive research (Filik and Leuthold, 2013), and emotion recognition research (Camras et al., 1983; Reichenbach and Masters, 1983; Ribordy et al., 1988), and to study theory of mind (Ishii et al., 2004), situational empathy (de Wied et al., 2005), and emotion processing in healthy people (Wilson-Mendenhall et al., 2013), in patients with schizophrenia (Kuperberg et al., 2011), and in patients with borderline personality disorder (Levine et al., 1997). Complementary to these applications, the present study evaluates the usefulness of vignettes in the area of emotion induction.

Going Beyond Emotion Inferences

In discourse comprehension, vignettes have become prominent stimuli for studying emotional inferences that are specifically concerned with emotions experienced by characters of a story. One often-cited set of 24 vignettes was first published by Gernsbacher et al. (1992). The short stories were constructed to examine how readers infer and represent emotional states of a protagonist that are not explicitly mentioned. Since then, these stories have been widely used and adapted in further studies investigating the specificity and content of emotional inferences (Gygax et al., 2003, 2004, 2007; Gillioz et al., 2012; Gillioz and Gygax, 2017). It has been shown that emotional inferences are part of readers’ mental representations (Miall, 1989; Graesser et al., 1994), are rather general (e.g., Gygax et al., 2003, 2004), and may only refer to certain parts (e.g., behavioral descriptions) of a multi-componential emotion construct (Gygax et al., 2007; Gillioz and Gygax, 2017).

Most of the above-mentioned studies involved manipulations of a target sentence containing either a matching or mismatching emotion term (e.g., Gygax et al., 2003, 2004) or behavioral description (e.g., Gygax et al., 2007) and focused on the analysis of reading times for target sentences to explore effects of consistent versus inconsistent emotional information. They neither examined emotions elicited in the reader nor compared reactions to positive and negative valences. One exception was published by León et al. (2015). The authors constructed short texts of four sentences possessing either a positive, negative, or neutral valence. The last sentence ended with a related word (e.g., the word happy in an emotionally positive context), a non-related word, or a non-word as the target, for which a lexical decision had to be performed. Analyses of corresponding reaction times revealed faster reactions to both related and non-related words when presented in an emotionally positive context compared to a negative one.

Notably, effects of emotional valence are likely to go beyond the level of inferences. In a recent study conducted by Megalakaki et al. (2019), native French speakers were instructed to read easily understandable texts varying in their emotional valence and intensity. Subsequently, participants were asked to answer comprehension questions on different levels of discourse (i.e., textbase, surface level, inference level). Analyses of their answers revealed that positive valence facilitated the comprehension of textual contents (surface level) whereas negative valence favored the construction of inferences (inference level). In addition, high emotional intensity promoted the understanding of emotionally positive texts but impeded the comprehension in a negative valence context. Hence, textual valence was found to influence the comprehensibility of reading materials. More importantly, the influential role of valence needs to be considered in eye tracking studies since eye movements have been shown to be affected by both text difficulty (Rayner et al., 2006; Lüdtke et al., 2019) and valence (e.g., Scott et al., 2012; Knickerbocker et al., 2015; Ballenghein et al., 2019).

In the present study, emotionally positive and negative vignettes were constructed to examine their emotion induction potential and analyze effects of emotional content on reading behavior. Although emotional vignettes have been applied in the research field of emotion inferences, valence-specific effects have largely remained unexplored. Moreover, most of the above-mentioned studies made use of the onlooker perspective (i.e., using the pronoun “he/she”). However, the personal perspective (i.e., using the pronoun “you”) was found to cause both greater internalization of emotional narratives (Brunyé et al., 2011) and stronger effects of positive emotion induction (Child et al., 2020). The empathic engagement thus provoked is assumed to facilitate immersion (Jacobs, 2015b). While reading, we start forgetting about the physical world around us and feel transported into the book’s fictitious setting. As stated by the Neurocognitive Poetics Model of literary reading (NCPM; Jacobs, 2011, 2015b), immersion leads to faster reading (i.e., shorter fixations, longer saccades), making it directly relevant to the analysis of eye movements. Hence, when examining online reading behavior in emotional vignettes, the immersion potential should be considered, as it might interact with both the valence manipulation and the reading behavior.

Emotion in Picture Viewing

Living in the digital age, we are constantly exposed to pictorial stimuli such as personal photos or social media posts. Related research has highlighted the influential role of both pictures in general and their semantic relevance on attentional processes during visual perception (Pilarczyk and Kuniecki, 2014; Keib et al., 2016). At present, it is widely acknowledged that fixations are biased toward informative regions of our perceptual field (Henderson, 2003). These are areas that either pop out because they are very different with respect to low-level visual features (i.e., high visual saliency) or because they inform about the emotional meaning (i.e., high semantic relevance). When comparing both influential factors, regions of semantic richness tend to attract more fixations than visually salient ones (e.g., Pilarczyk and Kuniecki, 2014). Consequently, eye movements are substantially guided by the distribution of emotional contents (cf. Budimir and Palmović, 2011).

Previous studies examining effects of emotional pictures have stressed processing differences of positive, negative, and neutral pictures (Olofsson et al., 2008, for review). In this context, the privileged processing of emotional stimuli has been explained in terms of their evolutionary and motivational relevance. Although the special role of negatively valenced stimuli (e.g., snakes) has been put forward (e.g., Fox et al., 2000; Yiend and Mathews, 2001; Calvo et al., 2006), the majority of results rather supports the existence of arousal-driven, valence-independent effects. Hence, emotional compared to neutral stimuli were found to initially attract and maintain attentional processes (Calvo and Lang, 2004; Calvo and Avero, 2005; Nummenmaa et al., 2006; Carniglia et al., 2012).

With respect to the present study, results from previous eye tracking experiments using a free viewing paradigm with pictures presented one at a time are of particular interest, as this is the paradigm used for emotion induction. Eye movements can serve as an indicator of (overt) attentional processes since both are tightly coupled; thus, an attentional shift is usually linked to the execution of saccades (Findlay and Gilchrist, 2003). To the best of our knowledge, only a handful of eye tracking studies made use of the above-mentioned paradigm applied in the area of emotion induction (Christianson et al., 1991; Bradley et al., 2008, 2011; Niu et al., 2012; Yang et al., 2012; Lanatà et al., 2013; Henderson et al., 2014). Among them, only two allow for the comparison of eye movements on positively and negatively valenced pictures. In this context, Bradley et al. (2011) presented emotionally charged and neutral pictures for a free viewing period of 6 s. Eye movements were analyzed in terms of three parameters: number of fixations, average fixation duration, and total scan path (i.e., length of all saccades). Their results showed that emotional compared to neutral pictures possessed longer scan paths and attracted more and shorter fixations. No valence-specific effects on eye movements were found. Niu et al. (2012) addressed the research question to what extent gaze behavior is driven by visually salient versus affective features. Consequently, their analyses were performed on the level of pre-defined areas of interest. High arousal proved to increase the probability of fixations on affective regions independent of the pictorial valence.
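The three parameters analyzed by Bradley et al. (2011) can all be derived directly from a sequence of fixations. The following sketch illustrates this with made-up fixation data; it is not the authors’ analysis code, and the coordinates and durations are hypothetical.

```python
import math

# Hypothetical fixation sequence: (x, y, duration_ms) per fixation,
# in screen pixel coordinates.
fixations = [(512, 384, 220), (630, 400, 180), (598, 290, 260), (410, 310, 200)]

# Number of fixations and average fixation duration.
n_fixations = len(fixations)
mean_fix_duration = sum(d for _, _, d in fixations) / n_fixations

# Total scan path: summed Euclidean length of the saccades between
# consecutive fixation positions.
total_scan_path = sum(
    math.hypot(x2 - x1, y2 - y1)
    for (x1, y1, _), (x2, y2, _) in zip(fixations, fixations[1:])
)

print(n_fixations, mean_fix_duration, total_scan_path)
```

In practice the fixation sequence would be parsed from eye tracker output, and durations in ms are typically filtered for blinks and off-screen samples first.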

In sum, while indicating differences between affective and neutral pictures, studies with a comparable experimental design suggest an absence of valence-specific effects on eye movements in healthy participants. It should be noted that the majority of the above-mentioned studies worked with pictures from the International Affective Picture System (IAPS; Lang et al., 2008). Despite the widespread use of the cross-validated database, three associated issues should be considered (Dan-Glauser and Scherer, 2011; Marchewka et al., 2014). First, the vast majority of pictures contains people as primary objects, limiting its usefulness when studying content-specific effects. Second, due to its frequent application, processes of familiarity might occur and possibly reduce emotion-inducing effects. Third, some stimuli are outdated and of lower visual quality. To address these issues, a comprehensive, modern alternative is provided by the NAPS (Marchewka et al., 2014).

Emotion Induction: The Role of Stimulus Domains

So far, evidence supporting the privileged sensory and cognitive processing of emotional compared to neutral stimuli has been provided for both pictorial and verbal materials. However, pictures have been claimed to induce stronger emotional reactions than words (e.g., De Houwer and Hermans, 1994; Larsen et al., 2003; Bayer and Schacht, 2014, for review). This view has largely been supported by evidence from EEG (e.g., Azizian et al., 2006) and fMRI (e.g., Kensinger and Schacter, 2006) studies stressing temporal and topographical differences. According to dual coding theories (e.g., Glaser, 1992), pictorial and verbal materials vary with respect to their processing channels and semantic representations. In this context, the reported superior processing of pictures compared to words has been explained by their more direct access to the semantic system (e.g., Seifert, 1997). Hence, linguistic stimuli, as abstract and learned symbols, were assumed to require additional translational processes for the extraction of meaning (cf. Schlochtermeier et al., 2013). With respect to emotional valence, different processing biases have been reported for pictures and words (Bayer and Schacht, 2014). More precisely, pictures were associated with a negativity bias whereas verbal stimuli were claimed to show a preferential processing of positive valence.

Notably, differences between both stimulus domains have mostly been reported when the processing of mere perceptual features was sufficient to perform the task (e.g., Pegna et al., 2004; Hinojosa et al., 2009; Schacht and Sommer, 2009a; Rellecke et al., 2011). In this context, many studies neglected to collect evaluative judgments and thus missed the analysis of perceived emotion effects. In contrast, when semantic processing was demanded, a comparable effectiveness of both stimulus domains has been put forward (Schacht and Sommer, 2009a; Schlochtermeier et al., 2013; Tempel et al., 2013; Bayer and Schacht, 2014). For example, the EEG study by Bayer and Schacht (2014) compared effects of emotional words and pictures (positive, negative, neutral) using a recognition memory task. Both stimulus domains elicited emotion effects at early and late processing stages. In addition, collected valence ratings revealed that words were perceived as more pleasant within the groups of positive and neutral stimuli. For arousal, no main effect of stimulus domain was found; hence, words were not rated as less arousing in general. Interestingly, when reducing the complexity of pictures (e.g., by using black-and-white pictograms), superior emotion effects of words compared to their pictorial counterparts were reported (Schlochtermeier et al., 2013; Tempel et al., 2013). In these imaging studies (i.e., EEG, fMRI), effects of emotionally positive and neutral stimuli were examined for both materials (i.e., pictures, words) while accounting for differences in stimulus complexity. At the neural level, words were found to elicit more widespread and larger activities than pictures. Moreover, positively valenced words attracted faster (Tempel et al., 2013) as well as stronger (Schlochtermeier et al., 2013) subjective ratings of emotional valence.

Most of the above-mentioned studies applied words, especially nouns (e.g., Hinojosa et al., 2009, 2010; Bayer and Schacht, 2014), to represent the verbal stimulus domain. However, processing differences between words and pictures might be partly mediated by effects of stimulus complexity. When controlling for this confounding factor by using more complex linguistic materials (e.g., phrases), processing differences tend to disappear (Schlochtermeier et al., 2013). In sum, previous studies suggest that both the task demand and the stimulus complexity play a crucial role when comparing the emotion induction potential of verbal stimuli and pictures (e.g., Hinojosa et al., 2010; Bayer and Schacht, 2014). To the best of our knowledge, this is the first article including the direct (i.e., within-subject design) comparison of emotional vignettes and pictures with shared semantic content. Moreover, the present study addresses the suggestion that individual ratings should be recorded within the same group of participants as further variables of interest (e.g., physiological activity). For example, it has been shown that evaluative judgments of arousal differ from provided normative ratings (Olofsson et al., 2008). In line with this perspective, we aimed to operationalize valence-specific effects through the perceived (i.e., subjective ratings) rather than the experimentally manipulated (i.e., positive, negative) valence. Hence, individual differences in the perception of emotional stimuli were anticipated and accounted for.

Hypotheses

The present study aimed to examine effects of emotional materials (i.e., vignettes, pictures) on (1) ratings of Perceived Valence and Arousal, (2) eye movements in reading, and (3) eye movements in picture viewing. We therefore selected 40 emotionally valenced (i.e., positive, negative) pictures and vignettes, respectively. We assumed that our valence manipulation would influence subjective ratings of both Perceived Valence and Arousal. More specifically, based on the strongly negative, linear relationship between valence and arousal reported for the consulted NAPS database (Marchewka et al., 2014), we expected that emotionally positive stimuli (i.e., vignettes, pictures) would, on average, be rated more positively (i.e., Perceived Valence) and as less arousing (i.e., Perceived Arousal) than emotionally negative ones.

Based on prior findings indicating stronger valence effects of emotionally positive words compared to pictures (e.g., Schlochtermeier et al., 2013; Tempel et al., 2013; Bayer and Schacht, 2014), we also suggested that emotionally positive vignettes would, on average, be rated more positively than emotionally positive pictures. Moreover, we assumed that this domain-specific effect would also apply to the negative valence category. Hence, we expected that emotionally negative vignettes would, on average, be perceived more negatively than emotionally negative pictures. With respect to subjective ratings of Perceived Arousal, there is evidence that words are able to induce arousal levels that are comparable to the ones elicited through pictures (e.g., Schlochtermeier et al., 2013; Bayer and Schacht, 2014). Consequently, it was assumed that stimulus domains (i.e., textual, pictorial) would not differ in their induced arousal levels.

With respect to effects of Perceived Valence on eye movements in reading of ecologically valid stimuli, reading times for both vignettes and their constituting words were of primary interest. Based on previous results showing faster processing of emotionally positive words, sentences, and texts (e.g., Kousta et al., 2009; Briesemeister et al., 2011; Lüdtke and Jacobs, 2015; Ballenghein et al., 2019), we assumed that vignettes perceived as emotionally positive would, on average, attract shorter reading times than vignettes perceived as emotionally negative. In this context, the first eye tracking study examining valence-specific effects at the supralexical level indicated shortest reading times for positive, followed by negative, and lastly neutral narratives (Ballenghein et al., 2019).

In line with reported correlations between lexical and textual valence ratings (Bestgen, 1994; Whissell, 2003; Hsu et al., 2015b; Jacobs, 2015b), we expected that Perceived Valence would likewise affect reading times at the lexical level (i.e., words). Since our study refers to effects of an affective semantic superfeature (valence; Jacobs et al., 2016a), content words were of main interest (cf. Bestgen, 1994). Thus, we expected that content words of vignettes perceived as emotionally positive would, on average, attract shorter reading times than content words of vignettes perceived as emotionally negative. Since it remains unclear at which processing stage valence-specific effects on reading times for lexical units (i.e., words) emerge, eye tracking measures reflecting both early (e.g., first fixation duration) and later (e.g., word total reading time) processes were considered (cf. Lüdtke et al., 2019).

Lastly, we aimed to examine the influence of Perceived Valence on eye movements during picture viewing. In line with previous eye tracking studies suggesting an absence of valence-specific effects (e.g., Bradley et al., 2011; Niu et al., 2012), we assumed that pictures perceived as emotionally positive would attract scan (e.g., Mean Saccade Amplitude) and fixation (e.g., Total Number of Fixations) patterns that are similar to the ones provided by pictures perceived as emotionally negative.

Materials and Methods

In order to examine the above-stated hypotheses, eye movements of 42 participants were recorded while reading and viewing 40 emotional vignettes and pictures, respectively. Textual stimuli were constructed based on pre-selected emotional pictures and validated in two pilot studies. For both material groups, stimuli were presented one at a time and followed by an evaluative judgment task which required participants to assess the emotional valence and arousal of each stimulus. Accordingly, linear mixed-effects models were performed to analyze effects of valence (i.e., Valence Category, Valence Rating) and Stimulus Domain on (1) ratings of Perceived Valence and Arousal, (2) eye movements in reading, and (3) eye movements in picture viewing. Since the present study was designed to address effects of emotion induction, individual ratings of valence (and arousal) were used to define the emotional quality of our stimuli (cf. Rubo and Gamer, 2018).

Participants

Forty-two native German speakers (33 female, 1 non-binary; Mage = 23.81 years, SDage = 5.41, age range: 18–44 years) gave their informed, written consent for participation and further use of their anonymized data. They were recruited through collegiate tutorials in the Bachelor’s degree program in Psychology at Freie Universität Berlin as well as through announcements on social media. Participants either received course credit (88.1%) or took part voluntarily. All of them had normal or corrected-to-normal vision. Thirty-three participants (78.6%) named a general qualification for university entrance as their highest level of education. The study was approved by the ethics committee of the Department of Education and Psychology at Freie Universität Berlin.

Recording of Eye Movements

Eye movements were recorded with an SR Research EyeLink 1000 tower-mounted eye tracker providing a sampling rate of 1000 Hz (SR Research Ltd., Mississauga, ON, Canada). The chin-and-head rest minimized head movements. Eye movements were recorded exclusively during stimulus presentation, and only the right eye was tracked. The experiment was built using the SR Research Experiment Builder software (version 1.10.1630)1. Stimuli were presented on a 19-inch LCD monitor with a resolution of 1024 × 768 pixels and a refresh rate of 120 Hz. The distance between the participant’s eyes and the monitor measured approximately 65 centimeters. At the beginning of the experiment, a standard 9-point calibration was used to ensure a spatial resolution error of less than 0.5° of visual angle. To avoid repeating this time-consuming and distracting procedure before every trial, each reading and viewing trial started with two sequentially presented fixation crosses (Times New Roman, 20 point-size). They were either positioned above the first reading line at the right and left corners of the display or arranged at the upper right and left corners of the subsequently presented picture. For each of them, a rectangular area of interest (AOI; 70 × 62 pixels) was defined. As soon as a total fixation time of 500 milliseconds (ms) was registered in each AOI, stimulus presentation started automatically. Fixations and saccades were identified using the EyeLink 1000 parser (velocity threshold = 30°/sec, acceleration threshold = 8000°/sec2).

Stimuli

Emotional stimuli were selected and constructed following a stepwise procedure. As previously stated, pictures and vignettes were intended to provide comparable semantic information. Hence, the construction process started with the collection of 48 emotion-inducing pictures (24 emotionally positive, 24 emotionally negative) from the NAPS (Marchewka et al., 2014). The standardized, high-quality database includes normative ratings for over 1,000 realistic pictures. A major advantage for eye tracking studies concerns the availability of information on physical properties. Since eye movements are known to be affected by low-level visual features such as complexity, luminance, or contrast (e.g., Bradley et al., 2007; Pilarczyk and Kuniecki, 2014), we aimed to control for these confounding factors. In sum, the following inclusion criteria were applied: First, pictures had to possess normative valence ratings either below four or above six2 to minimize the potential overlap between both valence categories. Second, valence categories were not allowed to vary with respect to the following physical parameters: luminance, contrast, JPEG size, color composition (i.e., LABL, LABA, LABB), entropy, and format (landscape; 1600 × 1200 pixels). Third, valence categories had to consist of pictures similarly distributed among the provided content categories [i.e., animals, faces, people, objects, landscapes; for the final stimulus set: χ2(3,N = 40) < 1, p = 0.97, R2 < 0.01].
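The third inclusion criterion (a comparable distribution of pictures over content categories) can be checked with a chi-square test of independence. The sketch below uses hypothetical counts over four content categories, consistent with the three degrees of freedom reported above; the actual per-category counts of the final stimulus set are not given in the text.

```python
# Sketch of the content-category balance check: a chi-square test of
# independence between valence category and content category.
# The counts are hypothetical -- the real distribution is not reported here.
from scipy.stats import chi2_contingency

# rows: valence category (positive, negative); columns: four content categories
counts = [
    [5, 5, 6, 4],  # positive (hypothetical)
    [5, 6, 5, 4],  # negative (hypothetical)
]
chi2_val, p, dof, expected = chi2_contingency(counts)
# A small chi2 and large p indicate similarly distributed categories
print(f"chi2({dof}) = {chi2_val:.2f}, p = {p:.2f}")
```

With near-identical counts per row, the test statistic stays well below one, mirroring the reported χ2(3, N = 40) < 1.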

Based on the selected pictures, 48 vignettes verbally reproducing the pictorial information were constructed by the first author and Ilai Jess. To avoid a systematic influence of the narrative perspective, readers were continuously addressed in the second person singular (e.g., Miall and Kuiken, 2001; Brunyé et al., 2009). The text length was kept between 85 and 96 words. To ensure high comprehensibility and emotion induction potential, an online pilot study was conducted via SoSci Survey (Leiner, 2019)3. Fifty-three people (33 female, 15 non-binary; Mage = 33.81 years, SDage = 14.56, age range: 17–71 years) were recruited through announcements on social media and either received course credit or participated voluntarily. Questions referring to Valence, Arousal, Comprehensibility, Immersion Potential, and Emotion Induction Potential were answered after self-paced reading of the 48 vignettes (24 emotionally positive, 24 emotionally negative), presented in random order. Valence and Arousal were rated on 9-point rating scales; 5-point rating scales were used for the three other measures.

In the next step, potentially problematic vignettes were identified based on their average valence ratings. In consideration of Comprehensibility and the physical parameters of the corresponding pictures, eight vignettes were excluded, and nine others were revised and rated a second time (N = 13, Mage = 35.69 years, SDage = 11.39, age range: 22–60 years). Table 1 includes the descriptive statistics of the final stimulus set (20 emotionally positive, 20 emotionally negative vignettes), and Table 2 provides an example for each valence category. Information on the corresponding pictures can be found in Table 3. The results indicated that the vignettes are easy to understand and capable of inducing negative and positive emotional responses. Furthermore, the positive and negative valence groups differed on the rated dimensions. Emotionally negative vignettes were, on average, rated higher with respect to Arousal [t(38) = −11.95, p < 0.001, R2 = 0.79] and Emotion Induction Potential [t(38) = −4.04, p < 0.001, R2 = 0.30], whereas positive ones were easier to understand [t(38) = 2.52, p = 0.02, R2 = 0.14] and better suited to immerse the reader in the perspective of the text [t(38) = 2.73, p < 0.01, R2 = 0.16]. Most importantly, valence ratings supported the success of our valence manipulation: emotionally positive vignettes were, on average, perceived more positively than emotionally negative ones [t(38) = 33.97, p < 0.001, R2 = 0.97]. As expected, ratings of Valence and Arousal showed a strong negative linear correlation [r = −0.91, t(38) = −13.36, p < 0.001, R2 = 0.82]. The same negative correlation was observed between the Valence and Arousal ratings reported for the NAPS pictures [r = −0.96, t(38) = −20.9, p < 0.001, R2 = 0.92].


Table 1. Statistical parameters for the emotional vignettes.


Table 2. Examples of the self-constructed, emotion-inducing vignettes.


Table 3. Statistical parameters for the emotional pictures.

To establish comparable initial situations in the eye tracking experiment, three emotionally neutral vignettes4 were additionally constructed based on corresponding neutral NAPS pictures (characterized, on average, by the same physical parameters as the emotional pictures; Valence: M = 4.52, SD = 0.4; Arousal: M = 5.36, SD = 0.45). The entire set of constructed vignettes, including the names of their corresponding NAPS pictures, is provided in the Supplementary Tables S1–S3.

Design and Procedure

A repeated measures design was implemented (i.e., each subject viewed and read the entire stimulus set). Stimulus domains were presented in separate blocks. The order of blocks was counterbalanced across participants. Within each block, stimuli were presented in a pseudo-randomized sequence so that no more than two stimuli of the same valence category were presented successively.

The study was conducted in a sound-attenuated room shielded from daylight. Upon arrival, participants were informed about the procedure as well as their option to quit the experiment at any time without facing any consequences. The experiment started with a standard 9-point calibration, followed by a sequential presentation of three emotionally neutral stimuli in order to match the initial situation between participants. Afterward, the participant’s mood was measured on a 7-point, non-verbal rating scale, offering the possibility to account for mood differences5. Ratings indicated a slightly positive mood before the presentation of both the emotional vignettes (M = 5.26, SD = 0.59) and pictures (M = 5.19, SD = 0.71).

Subsequent to the rating, the emotional stimuli of the first block (i.e., vignettes or pictures) were presented. Each trial started with two sequentially appearing fixation crosses, each of which had to be fixated for 500 ms. For the vignettes, reading speed was self-paced, allowing participants to move back and forth within a single page as often as they wished. Each vignette was presented on a single page (eight to eleven lines) and could be left by a single mouse click. Vignettes were written in a sans-serif font (Tahoma) at 17-point letter size and presented left-aligned in the center of the monitor. To maximize the accuracy of the recordings, double-spacing was used. Participants were instructed to read each story for comprehension. Pictures (800 × 600 pixels) were presented for a fixed viewing period of 3 s in the center of the display. Participants were instructed to look freely at each picture for the whole presentation time. After each stimulus presentation, participants performed an evaluative judgment task: they were asked to assess each stimulus in terms of its emotional valence (i.e., How positive or negative do you rate the text/picture?) and arousal (i.e., How calming or exciting do you rate the text/picture?). Answers were given on 9-point Self-Assessment-Manikin scales (SAM; Lang, 1980; Suk, 2006). Rating scales were displayed sequentially in the center of the monitor. No time restrictions were imposed.

After finishing the first block, participants answered an online survey on demographic variables, reading habits, imagination, and empathy at a separate table. At their own discretion, participants returned to the eye tracker and completed the same procedure for the remaining stimulus domain, starting again with the 9-point calibration. In sum, an experimental session lasted approximately 60 minutes. Figure 1 provides an illustration of the experimental procedure.


Figure 1. Graphic depiction of the experimental procedure. Participants performed an evaluative judgment task on textual and pictorial stimuli, respectively. The order of stimulus domains (i.e., pictorial, textual) was balanced across participants. To ensure comparable initial situations, each experimental session started with the presentation of three emotionally neutral stimuli. At the beginning of each trial, two serially presented fixation crosses had to be looked at for 500 ms. After assessing participants’ mood, 40 emotionally valenced stimuli were sequentially displayed while recording eye movements. Pictures were viewed for a fixed period of 3 s. Reading of vignettes was self-controlled. Ratings of perceived valence and arousal were measured using a 9-point Self-Assessment-Manikin scale (SAM; Lang, 1980; Suk, 2006). After completion of the first presentation block, an online survey was conducted.

Data Analysis

In line with the tripartite structure of our hypotheses, the following sections will be arranged into three subparts: (1) analysis of evaluative judgments, (2) analysis of eye movements in reading, and (3) analysis of eye movements in picture viewing. All analyses refer to data for the emotionally positive and negative stimuli. Due to their experimental function and the limited number, data for the emotionally neutral vignettes and pictures will only be reported in terms of descriptive statistics. However, the same preprocessing steps were applied.

Data Preprocessing

For the 40 emotional vignettes and pictures, we collected 3,360 ratings each for Perceived Valence and Arousal (42 participants × 80 ratings). As two trials had to be excluded due to errors during data export, 3,358 individual ratings and reaction times per subject and item (i.e., pictures, vignettes) could be used for statistical analysis.

Eye tracking data were preprocessed using the EyeLink Data Viewer (version 1.11.900)6. Fixations shorter than 80 ms were either merged with nearby fixations (within a distance of less than one degree) or removed from further analysis. Based on automatically defined AOIs, text data were exported at the level of single words (150,899 data points). Further aggregation and preprocessing of the data were run in JMP Pro 14 for Mac OS X7. The selection of eye tracking parameters followed from our hypotheses on reading times for both supralexical (i.e., vignettes) and lexical units (i.e., words). For the analysis at the supralexical level, text total reading time was computed as the sum of all fixations, saccadic movements, and blinks. For the analysis at the lexical level, we aimed to study one measure associated with early processes and one measure associated with both early and late processes of word recognition and comprehension (cf. Lüdtke et al., 2019). To analyze immediate effects of Perceived Valence, the commonly reported duration of the first fixation on each word was extracted (Hyönä et al., 2003; Kuperman and Van Dyke, 2011). As the late measure, word total reading time (henceforth total reading time, TRT), defined as the sum of all fixation durations on a word, was used (Boston et al., 2008). Since the groups of emotionally positive and negative vignettes varied slightly in their text lengths [t(38) = 2.07, p = 0.046, R2 = 0.10], we accounted for this difference by computing Reading Speed [in words per minute (wpm)], mean First Fixation Duration (mean FFD in ms), and mean Total Reading Time (mean TRT in ms) for each subject and vignette.
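The length-corrected measures described above can be sketched as follows; all data, column names, and values are hypothetical and serve only to illustrate the aggregation per subject and vignette.

```python
# Minimal sketch of the per-subject, per-vignette aggregation: Reading Speed
# in words per minute from text total reading time, plus mean FFD and mean
# TRT over the fixated content words. Data and column names are hypothetical.
import pandas as pd

words = pd.DataFrame({
    "subject": [1, 1, 1, 1],
    "vignette": ["v01"] * 4,
    "ffd_ms": [210, 180, None, 250],   # None = skipped word (missing value)
    "trt_ms": [260, 180, None, 400],
})
n_words = 90            # length of the vignette in words (hypothetical)
text_trt_ms = 30_000    # text total reading time in ms (hypothetical)

# words per minute = words / (reading time in minutes)
reading_speed_wpm = n_words / (text_trt_ms / 60_000)

# mean FFD and mean TRT per subject and vignette; NaNs (skipped words)
# are ignored by pandas' mean
agg = words.groupby(["subject", "vignette"])[["ffd_ms", "trt_ms"]].mean()
print(reading_speed_wpm)  # 180.0 wpm
print(agg)
```

Ninety words read in 30 seconds yield 180 wpm, and skipped words drop out of the fixation-based means instead of biasing them toward zero.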

To calculate mean FFD and mean TRT, we first excluded all function words (articles, pronouns, conjunctions, auxiliary verbs, prepositions, particles, cardinal numbers, pronominal adverbs), as they lack, or are poor in, lexical or affective lexical meaning (cf. Segalowitz and Lane, 2000; Fiedler, 2011). Part-of-speech (POS) tagging was automated using the freeware tagger TagAnt (Anthony, 2015). Like any POS tagger, TagAnt produces error rates of approximately three percent (Manning, 2011); obviously wrong classifications were therefore corrected by hand. On the remaining 75,348 data points (see Table 4), extreme values were defined and excluded following a two-stage procedure. First, FFDs longer than 2,000 ms (six data points) and TRTs longer than 3,000 ms (five data points) were excluded. Second, outliers were defined based on the distributions of FFDs and TRTs within each subject and vignette: words with standardized residuals larger than three or smaller than minus three were excluded [FFD: 655 data points (0.87%); TRT: 1,174 data points (1.56%)]. Based on the remaining data points, mean FFD and mean TRT were computed for each subject and vignette, treating skipped words (mean skipping rate: 19.65%) as missing values. Taken together, the resulting data table contained 1,678 data points, including information about mean FFD (in ms), mean TRT (in ms), and Reading Speed (in wpm) for each subject and item.
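The two-stage exclusion can be illustrated with the following sketch, shown for TRTs only. The absolute threshold matches the one reported above; the data and column names are hypothetical, and simple z-scores within each subject-vignette cell stand in for the model-based standardized residuals.

```python
# Sketch of the two-stage outlier exclusion for TRTs: an absolute upper
# threshold first, then a |z| > 3 criterion within each subject-vignette
# cell (a stand-in for standardized residuals). Data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "subject": [1] * 6,
    "vignette": ["v01"] * 6,
    "trt_ms": [220, 250, 3500, 240, 260, 230],
})

# Stage 1: absolute criterion (TRT > 3,000 ms)
df = df[df["trt_ms"] <= 3000]

# Stage 2: standardize within each subject-vignette cell and drop |z| > 3
z = df.groupby(["subject", "vignette"])["trt_ms"].transform(
    lambda x: (x - x.mean()) / x.std())
df = df[z.abs() <= 3]
print(df["trt_ms"].tolist())  # [220, 250, 240, 260, 230]
```

Applying the absolute cut first matters: the 3,500 ms value would otherwise inflate the cell's standard deviation and mask milder outliers in stage two.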


Table 4. Number and percentage of content, function, and skipped words within each valence category.

With respect to the pictures, eye tracking data were exported at the level of trials for each participant (42 participants × 40 trials; 1,680 data points). Since we aimed to investigate whether effects of Perceived Valence are reflected in measures of both fixations and saccades, the selection of eye tracking parameters was inspired by Bradley et al. (2011). Hence, statistical analyses were performed on Mean Saccade Amplitude (i.e., the distance between two consecutive fixations; Radach and Kennedy, 2013), Total Number of Fixations (i.e., the number of fixations within the viewing period of 3 s), and Mean Fixation Duration.

Statistical Analysis

All statistical analyses were run in R 3.5.1 for Mac OS X8. Since participants and items (i.e., vignettes, pictures) represented samples of larger populations, hypotheses were tested using linear mixed-effects models (LMM; Baayen et al., 2008). Following a confirmatory approach on real data, intercepts-only models with by-item and by-subject random intercepts were employed (cf. Bates et al., 2015a). In R, models were computed using the lmer-function from the lme4 package (Bates et al., 2015b) with restricted maximum likelihood estimation.

To obtain the optimal fixed-effects structure (i.e., a trade-off between fit to the data and model complexity), models were selected according to a backward-elimination procedure (cf. Barr et al., 2013). Starting with random-intercepts models containing all possible fixed-effects terms, predictor variables were successively excluded based on the weakest evidence (i.e., the highest p-value). If the variable with the highest p-value was involved in an interaction term with a lower p-value, the predictor with the second-highest p-value was excluded instead. In an iterative procedure, nested models differing in one degree of freedom (i.e., one fixed effect) were systematically compared using the anova-function from the stats package (R Core Team, 2019). To justify a reduction of fixed-effects terms, likelihood ratio tests were performed. Decisions were based on the statistical significance (p < 0.05) of the asymptotically chi-squared distributed likelihood ratio test statistic with one degree of freedom. If the likelihood of the simpler model was not significantly worse than that of the more complex model (p > 0.05), the simpler model was favored.
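The model-comparison step can be sketched numerically: under the null hypothesis, twice the difference in log-likelihoods between two nested models follows a chi-squared distribution with one degree of freedom. The log-likelihood values below are hypothetical.

```python
# Sketch of one likelihood ratio test from the backward-elimination loop.
# The log-likelihoods are hypothetical placeholders for two nested models
# that differ in exactly one fixed effect.
from scipy.stats import chi2

ll_complex = -435.4   # model including the candidate predictor
ll_simple = -437.9    # model without the candidate predictor

lrt = 2 * (ll_complex - ll_simple)   # asymptotically chi-squared, df = 1
p = chi2.sf(lrt, df=1)

# p < 0.05: the simpler model fits significantly worse, so the predictor
# is retained; with p > 0.05 it would be dropped.
print(f"LRT = {lrt:.1f}, p = {p:.3f}")
```

This mirrors what R's anova-function reports when comparing two nested lmer fits, here reduced to the bare arithmetic.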

For the analysis of evaluative judgments (i.e., Perceived Valence and Arousal), the following predictor variables were initially included: Valence Category (i.e., positive, negative), Stimulus Domain (i.e., pictorial, textual), and Mood Rating. To test for valence-specific effects, the interaction between Valence Category and Stimulus Domain was included. For the analysis of reading behavior at the supralexical (i.e., Reading Speed) and lexical level (i.e., mean FFD, and mean TRT), initial models consisted of the following predictor variables: Perceived Valence (i.e., Valence Rating), Perceived Arousal (i.e., Arousal Rating), Mood Rating, and three characteristics of the vignettes collected in the pilot studies (Comprehensibility, Immersion Potential, Emotion Induction Potential). Considering theoretically and empirically provided evidence (e.g., Võ et al., 2009; Marchewka et al., 2014), we included the interaction between Perceived Valence and Arousal. For the analysis of picture viewing (i.e., Mean Saccade Amplitude, Total Number of Fixations, and Mean Fixation Duration), the following predictor variables were initially included: Perceived Valence (i.e., Valence Rating), Perceived Arousal (i.e., Arousal Rating), and Mood Rating. Again, the interaction between Perceived Valence and Arousal was included. Detailed information on the mathematical formulation and lmer specification of all eight initial models is reported in the Supplementary Tables S4–S11.

For the categorical variables (i.e., Valence Category, Stimulus Domain), effect coding was chosen. The metric covariates were centered prior to analysis in order to reduce collinearity, increase the probability of model convergence, and facilitate interpretation (Baayen, 2008). Fixed effects were tested with Type III sum of squares statistics using the Anova-function from the car package (Fox and Weisberg, 2019). To approximate the distribution of the residuals to the normal distribution as closely as possible, dependent variables were transformed as indicated by the Box-Cox transformation test from the MASS package (Box and Cox, 1964; Venables and Ripley, 2002). For all eye tracking variables, exclusion of extreme values followed a stepwise procedure. First, an absolute criterion in the form of an upper threshold was applied based on visual inspection of the distribution of each dependent variable. Second, extreme values were defined based on intercepts-only models including only crossed random effects for subjects and items. Since no missing values existed, the relative criterion was set to two standard deviations from the mean. For the evaluative judgments, extreme values were defined based on the recorded reaction times (RTs in ms), with lower and upper thresholds of 500 and 20,000 ms, respectively. All statistical analyses used a 95% level of significance (α = 0.05). For the sake of conciseness, only fixed effects of the best-fitting models are reported, as these are directly relevant to our hypotheses. Results of the entire stepwise deletion procedure are provided in the Supplementary Tables S4–S11.
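The coding, centering, and transformation steps can be sketched as follows. The sketch uses scipy's `boxcox` in place of the MASS implementation used in the study, and all data are hypothetical.

```python
# Sketch of the covariate preparation: effect coding for a two-level factor,
# mean-centering of a metric covariate, and a Box-Cox lambda estimate.
# All data are hypothetical.
import numpy as np
from scipy.stats import boxcox

# Effect coding: positive = +1, negative = -1 (rather than 0/1 dummies),
# so the intercept reflects the grand mean across both levels
valence = np.array(["positive", "negative", "positive"])
valence_ec = np.where(valence == "positive", 1, -1)

# Mean-centering a metric covariate (e.g., Mood Rating)
mood = np.array([5.0, 6.0, 4.0])
mood_c = mood - mood.mean()

# Box-Cox: estimate the optimal lambda for a positively skewed dependent
# variable; a lambda near 0.5 would suggest a square-root transformation,
# one near 0 a log transformation
rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=300.0, size=500)   # skewed, strictly positive
y_trans, lam = boxcox(y)
print(valence_ec, mood_c.round(2), round(lam, 2))
```

Centering makes the centered covariate sum to zero, which is what reduces collinearity with the intercept and any interaction terms.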

Results

Descriptive Statistics

To illustrate effects of the experimental manipulation, descriptive statistics are provided for each valence category (i.e., positive, negative, neutral) and stimulus domain (vignettes: Table 5, pictures: Table 6). As expected, ratings of Perceived Valence and Arousal differed between valence categories. For both stimulus domains, the lowest mean valence ratings were observed for the negative, followed by the neutral, and lastly the positive valence category. With respect to mean arousal ratings, the following rank order became evident for vignettes and pictures: positive < neutral < negative. As indicated by the minimum and maximum values, each valence category attracted a wide range of individual ratings on both scales. For example, negatively valenced pictures attracted subjective valence ratings ranging from one to eight, with the maximum value indicating a perceived positive valence (see Table 6). For both stimulus domains, correlations between ratings of Perceived Valence and Arousal were more pronounced within the negative valence category [pictures: rnegative = −0.61, t(838) = −22.34, p < 0.001, R2 = 0.37; vignettes: rnegative = −0.50, t(836) = −16.74, p < 0.001, R2 = 0.25] than within the positive one [pictures: rpositive = 0.09, t(838) = 2.57, p = 0.01, R2 < 0.01; vignettes: rpositive = 0.05, t(838) = 1.53, p = 0.13, R2 < 0.01]. The overall correlations indicated a strong negative linear relationship between Perceived Valence and Arousal for both pictures and vignettes [pictures: roverall = −0.66, t(1678) = −35.78, p < 0.001, R2 = 0.43; vignettes: roverall = −0.67, t(1676) = −36.80, p < 0.001, R2 = 0.45].


Table 5. Descriptive statistics for the vignettes based on the eye tracking study.


Table 6. Descriptive statistics for the pictures based on the eye tracking study.

With respect to the supralexical eye tracking parameter Reading Speed, we observed fastest reading for positive, followed by negative, and lastly emotionally neutral vignettes. The same rank order was observed at the lexical level (i.e., mean FFD, mean TRT). Regarding the pictorial stimuli, average values of Mean Saccade Amplitude were shortest for the positive, followed by the neutral, and lastly negative valence category. For Mean Fixation Duration, average values suggested the following rank order: negative < positive < neutral. All valence groups attracted, on average, ten to eleven fixations.

Evaluative Judgments

Mean RTs for Valence Rating did not significantly differ between pictorial (M = 2417.23, SD = 1436.76) and textual (M = 2491.15, SD = 1841.54) materials [t(3356) = 1.30, p = 0.19, R2 < 0.01]. For Arousal Rating [t(3356) = 2.03, p = 0.04, R2 < 0.01], pictures (M = 2413.37, SD = 2032.61) were, on average, rated faster than vignettes (M = 2547.64, SD = 1800.41). Moreover, mean RTs showed significant differences between emotionally positive and negative stimuli. For Valence Rating [t(3356) = −3.17, p < 0.001, R2 < 0.01], positive stimuli (M = 2348.77, SD = 1478.23) were, on average, rated faster than negative ones (M = 2559.69, SD = 1802.97). For Arousal Rating [t(3356) = 2.46, p = 0.01, R2 < 0.01], the opposite pattern was found (Mnegative = 2398.98, SDnegative = 2128.59, Mpositive = 2561.85, SDpositive = 1684.45).

Based on the optimal lambda suggested by the Box-Cox transformation test (λ = 0.46), Valence Rating was sqrt-transformed. The exclusion of extreme values as indicated by reaction times reduced data points by 0.06% (3,356 remaining data points). Following the stepwise elimination procedure (cf. Supplementary Material), the identified best-fitting model (AIC = 884.77, BIC = 927.60, log-likelihood = −435.39) included Valence Category, Stimulus Domain, and the interaction between both variables as fixed effects. The analysis yielded a statistically significant main effect of Valence Category [χ2(1,N = 3,356) = 1558.73, p < 0.001] but not Stimulus Domain [χ2(1,N = 3,356) = 0.25, p = 0.62]. On average, negative stimuli (M = 2.12, SD = 1.04) were rated more negatively than positive stimuli (M = 7.47, SD = 1.18).

Due to the significant interaction (see Figure 2) between Valence Category and Stimulus Domain [χ2(1,N = 3,356) = 29.41, p < 0.001], the main effect of Stimulus Domain was further explored within the subsets of positively and negatively valenced stimuli. The analysis indicated significant main effects of Stimulus Domain for both the negative [χ2(1,N = 1,677) = 11.49, p < 0.001] and the positive stimuli [χ2(1,N = 1,679) = 33.43, p < 0.001]. Thus, emotionally positive vignettes (M = 7.61, SD = 1.07) were, on average, rated more positively than their pictorial counterparts (M = 7.33, SD = 1.27). The same advantage of the textual domain was observed within the negative valence category: emotionally negative vignettes (M = 2.05, SD = 0.97) were, on average, rated more negatively than emotionally negative pictures (M = 2.20, SD = 1.11).


Figure 2. Interaction effect of Valence Category (i.e., negative, positive) and Stimulus Domain (i.e., pictorial, textual) on Valence Rating (assessed on a 9-point rating scale). Error bars denote one standard error from the mean. Descriptive statistics are as follows. Negative valence category: Mpictorial = 2.20, SDpictorial = 1.11, Mtextual = 2.05, SDtextual = 0.97; positive valence category: Mpictorial = 7.33, SDpictorial = 1.27, Mtextual = 7.61, SDtextual = 1.07.

With respect to ratings of Perceived Arousal, values were sqrt-transformed as indicated by the Box-Cox transformation test (λ = 0.71). Based on the reaction times, 13 extreme values (0.39%) were identified and subsequently excluded (3,345 remaining data points). The backward-reduction of fixed effects resulted in a random-intercepts model (AIC = 3462.4, BIC = 3492.9, log-likelihood = −1726.2) with Valence Category as sole predictor [χ2(1,N = 3,345) = 461.74, p < 0.001] indicating that emotionally negative stimuli (M = 6.44, SD = 1.87) were, on average, rated as more arousing than emotionally positive ones (M = 3.07, SD = 2.00).

Eye Movements in Reading

Due to its rightward skewed distribution, Reading Speed was log-transformed (Box-Cox transformation test: λ = −0.14). The absolute criterion for the identification of extreme values was set to 1,000 wpm (exclusion of three data points). Fifty-nine further data points were excluded based on the relative criterion (in total: 3.69%; 1,616 remaining data points). Following the backward-elimination procedure, the best-fitting model (AIC = −1488.6, BIC = −1445.5, log-likelihood = 752.31) suggested significant main effects of Valence Rating [χ2(1,N = 1,616) = 4.36, p = 0.04], Arousal Rating [χ2(1,N = 1,616) = 3.85, p = 0.05], and Immersion Potential Rating [χ2(1,N = 1,616) = 4.44, p = 0.04]. The latter effect indicated a positive linear relationship between Immersion Potential ratings and Reading Speed, with faster reading for vignettes rated higher on Immersion Potential (see Figure 3).


Figure 3. Main effect of Immersion Potential Rating on Reading Speed (in wpm). Immersion Potential was evaluated as one text characteristic in the pilot studies and rated on a 5-point rating scale.

Since the interaction between Valence and Arousal Rating proved to be statistically significant [χ2(1,N = 1,616) = 7.84, p = 0.01], we further analyzed simple main effects by splitting the data into two quantile-based groups of Arousal Rating using the quantcut-function from the gtools package (Warnes et al., 2018). In this manner, the main effect of Valence Rating could be explored within two artificially constructed factor levels of Arousal Rating, one subset representing the low-arousal (interval: [1–5]; N = 884) and the other the high-arousal group (interval: (5–9]; N = 732). The main effect of Valence Rating reached statistical significance within the low-arousal [χ2(1,N = 884) = 8.57, p = 0.003] but not the high-arousal [χ2(1,N = 732) = 0.02, p = 0.89] subset (see Figure 4). In the low-arousal subset, positively valenced vignettes were, on average, read faster than negatively valenced ones. The main effect of Immersion Potential Rating remained significant within the high-arousal [χ2(1,N = 732) = 4.44, p = 0.04] but not the low-arousal group [χ2(1,N = 884) = 3.12, p = 0.08]. It should be noted that the results of the linear mixed-effects models within the two subsets have to be treated with caution: since Valence and Arousal Rating were highly correlated overall, the artificially created arousal subsets contained items disproportionately distributed across the valence categories; for example, the high-arousal subset contained clearly more negatively than positively valenced vignettes.
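The quantile split applied above can be mirrored in Python with pandas' `qcut`, which bins a variable at its empirical quantiles much as gtools' quantcut does in R; the ratings below are hypothetical.

```python
# Sketch of the median split of Arousal Rating into two quantile-based
# groups, analogous to gtools::quantcut in R. Ratings are hypothetical.
import pandas as pd

arousal = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 5, 3, 7])

# q=2 cuts at the median; ratings at or below the median fall into "low"
groups = pd.qcut(arousal, q=2, labels=["low", "high"])
print(groups.value_counts().to_dict())
```

Note that ties at the cut point all land in the lower bin, which is one reason such artificial factor levels can end up unbalanced, as cautioned above.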

FIGURE 4

Figure 4. Interaction effect of Valence and Arousal Rating on Reading Speed (in wpm). Valence and Arousal Rating were evaluated on 9-point rating scales by participants of the eye tracking study. Arousal Rating was split into two factor levels (i.e., low versus high) using the quantcut-function from the gtools package (Warnes et al., 2018). Colored areas indicate the 95% confidence interval of each fitted line.

For mean FFD, values were again transformed as indicated by the Box-Cox transformation test (λ = 1.43; 1/mean FFD). As absolute criterion, an upper threshold of 300 ms was applied (exclusion of one data point). Fifty-three further data points were excluded based on the relative criterion (in total: 3.22%; 1,624 remaining data points). Following stepwise model comparisons, the random-intercepts model containing only random effects was identified as the best-fitting model (AIC = −21595, BIC = −21573, log-likelihood = 10802). Consequently, none of the considered predictors proved to be of explanatory value for mean FFD.

With respect to mean TRT, a sqrt-transformation was applied due to the right-skewed distribution (Box-Cox transformation test: λ = −0.46). Values over 500 ms were identified as extreme values and subsequently excluded (five data points). Based on the relative criterion, 62 additional data points were removed (in total: 3.99%; 1,611 remaining data points). The backward-elimination procedure identified the random-intercepts model with Valence Rating as the only predictor as the best-fitting model (AIC = −13290, BIC = −13263, log-likelihood = 6650.0). The statistically significant main effect of Valence Rating [χ2(1,N = 1,611) = 6.05, p = 0.01] indicated that mean TRTs tended to decrease with increasing Valence Rating (see Figure 5).
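The two-step exclusion procedure (absolute threshold, then relative criterion) can be illustrated as follows. The reading times are simulated, and the ±2.5 SD relative criterion is an assumption for illustration, as the exact relative rule is not restated in this section:

```python
# Sketch of a two-step outlier exclusion; data and the 2.5 SD cutoff
# are hypothetical, chosen only to illustrate the procedure.
import numpy as np

rng = np.random.default_rng(1)
trt = rng.gamma(shape=9.0, scale=25.0, size=1000)  # simulated TRTs in ms

# Step 1 (absolute criterion): drop total reading times above 500 ms.
trt = trt[trt <= 500]

# Step 2 (relative criterion, assumed): drop values beyond 2.5 SDs
# of the remaining sample.
m, s = trt.mean(), trt.std()
trt = trt[np.abs(trt - m) <= 2.5 * s]

# Variance-stabilizing sqrt transform, as indicated by Box-Cox.
sqrt_trt = np.sqrt(trt)
```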

FIGURE 5

Figure 5. Main effect of Valence Rating on mean Total Reading Time (mean TRT in ms). Valence Rating was evaluated on a 9-point rating scale by participants of the eye tracking study.

Eye Movements in Picture Viewing

As indicated by the Box-Cox transformation test (λ = 0.38), values for Mean Saccade Amplitude were sqrt-transformed. The absolute criterion for the exclusion of extreme values was set to 10° of visual angle (exclusion of four data points). Based on the relative criterion, 46 further data points were excluded (in total: 2.98%; 1,630 remaining data points). Values of Total Number of Fixations were squared as suggested by the Box-Cox transformation test (λ = 1.76). The upper threshold was set to a total number of 15 fixations (exclusion of five data points). In the second step, 21 further data points were excluded (in total: 1.55%; 1,654 remaining data points). Lastly, Mean Fixation Duration was transformed due to the right-skewed distribution (Box-Cox transformation test: λ = −1.23; 1/mean fixation duration). An upper limit of 800 ms was applied as absolute criterion for the identification of extreme values (exclusion of nine data points). Based on the relative criterion, 63 further data points were excluded (in total: 4.29%; 1,608 remaining data points).

For all three dependent variables, the intercepts-only models containing only random effects were identified as best-fitting models (for Mean Saccade Amplitude: AIC = −706.25, BIC = −684.66, log-likelihood = 357.12; for Total Number of Fixations: AIC = 15887, BIC = 15909, log-likelihood = −7939.5; for Mean Fixation Duration: AIC = −19644, BIC = −19622, log-likelihood = 9826.0). Hence, none of our predictors was of explanatory value for the executed fixations and saccadic movements.
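The model comparisons underlying the chi-square statistics reported throughout the Results follow the logic of a likelihood-ratio test between nested models. A minimal sketch, with made-up log-likelihoods standing in for two fitted models that differ by one predictor:

```python
# Likelihood-ratio test between nested models; the log-likelihood
# values are invented for illustration only.
from scipy import stats

ll_reduced = 6650.0   # hypothetical model without the predictor
ll_full = 6653.0      # hypothetical model including the predictor

lr_stat = 2 * (ll_full - ll_reduced)   # chi-square distributed statistic
p = stats.chi2.sf(lr_stat, df=1)       # 1 df: one extra parameter
# A small p favors retaining the predictor during backward elimination.
```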

Discussion

The aim of the present study was to examine effects of emotional content on subjective ratings of Perceived Valence and Arousal, eye movements in reading, and eye movements in picture viewing. With this aim, we asked a group of 42 participants to assess the emotional valence and arousal of 40 emotionally valenced (i.e., positive, negative) vignettes and pictures, respectively. To the best of our knowledge, this is the first study including a cross-domain comparison between more complex verbal materials (i.e., vignettes) and pictures providing matching semantic information.

As indicated by the reported descriptive statistics, the experimental manipulation of textual and pictorial valences proved to be successful. The lowest ratings of Perceived Valence were observed for negative, followed by neutral, and lastly positive stimuli. Furthermore, the wide range of individual ratings collected for each valence category stressed the necessity of going beyond the simplified categorical operationalization of emotional valence. Emotionally positive stimuli were, on average, rated as less arousing than emotionally negative stimuli. In line with previous studies on words (e.g., Võ et al., 2009; Soares et al., 2012; Söderholm et al., 2013; Montefinese et al., 2014), sentences (e.g., Pinheiro et al., 2017), and pictures (e.g., Verschuere et al., 2001; Lang et al., 2008; Dufey et al., 2011; Soares et al., 2015), the linear relationship between Perceived Valence and Arousal was found to be more pronounced for negatively compared to positively valenced stimuli. This observed asymmetry might be due to the absence of erotic and thus high-arousal positive pictures in the NAPS database (cf. Marchewka et al., 2014). However, the strong correlation suggests that the two affective dimensions can rarely be studied apart from each other when focusing on effects of emotion induction in ecologically valid materials (cf. Citron, 2012).

With respect to the cross-domain comparison, vignettes attracted, on average, more extreme valence ratings than pictures, supporting the assumed superiority of textual over pictorial materials. As expected, no effect of Stimulus Domain on Perceived Arousal was observed, indicating that textual stimuli are able to induce arousal levels comparable to those elicited by pictures. Hence, the present study provides further evidence that verbal stimuli are at least as suitable for the induction of emotions as pictures. More specifically, the previously reported superior valence effects of emotionally positive words and phrases (e.g., Schlochtermeier et al., 2013; Tempel et al., 2013; Bayer and Schacht, 2014) extended not only to more complex linguistic materials but also to negatively valenced ones. In contrast to Tempel et al. (2013), reaction times for valence ratings showed no significant differences between stimulus domains. Thus, while judgments of emotional valence required comparable processing times for both stimulus domains (cf. Schlochtermeier et al., 2013), vignettes attracted more extreme valence ratings than pictures.

Regarding emotion effects in reading, faster reading times were hypothesized for both vignettes perceived as emotionally positive (i.e., supralexical level) and their constituting content words (i.e., lexical level). At the supralexical level, additional analyses exploring the significant interaction between Valence and Arousal Rating on Reading Speed revealed that the main effect of Perceived Valence applied exclusively to low-arousal vignettes. Hence, vignettes rated as slightly arousing and emotionally positive were, on average, read faster than vignettes perceived as slightly arousing and emotionally negative. Reading speeds for high-arousal vignettes were comparable to those for low-arousal, emotionally positive vignettes, independent of their perceived valence. Hence, vignettes perceived as emotionally negative required high arousal levels to show a processing advantage similar to that of vignettes perceived as emotionally positive. The observed effect conforms to previous findings from lexical decision tasks showing that the processing of emotionally negative, but not positive, words depends on their arousal levels. More specifically, reactions to negatively valenced words were reported to be slower for low- versus high-arousal stimuli (e.g., Nakic et al., 2006; Hofmann et al., 2009; Recio et al., 2014). In other words, in line with studies examining reaction latencies, valence effects on Reading Speed were absent for vignettes perceived as highly arousing.

To our knowledge, there is only one further study primarily focusing on effects of textual valence on eye movements during the reading of narratives. In accordance with our results, Ballenghein et al. (2019) found the shortest reading times for emotionally positive texts. However, statistically significant differences were restricted to the comparison of mean fixation durations between emotionally positive and neutral narratives. Hence, the present study extends their findings by revealing statistically significant differences between positively and negatively valenced vignettes when using individual ratings instead of valence categories. Interestingly, Ballenghein et al. (2019) likewise observed that the reading of emotionally negative texts is influenced by arousal. More specifically, the authors reported significantly shorter mean fixation durations for high-arousal, emotionally negative texts compared to their medium-arousal counterparts.

In accordance with the NCPM (Jacobs, 2011, 2015b), vignettes associated with higher ratings of Immersion Potential attracted, on average, faster reading. The multi-dimensional phenomenon of immersive reading is related to various factors including characteristics of the text (e.g., easy-to-recognize words; Jacobs, 2015b), the context (e.g., action-oriented descriptions; Kuijpers et al., 2014), and the reader (e.g., identification with the protagonist; Jacobs, 2015b). Since the vignettes were constructed to be easily understandable, to emotionally engage the reader, and to enhance identification with the protagonist, and since they likely activated familiar situation models as their contents were based on pictures of daily situations, the overall high ratings on Immersion Potential are not surprising. Hence, it has to be considered that the reported effect of Immersion Potential is based on a comparably narrow value range (i.e., range: 3.89–4.6 on a 5-point rating scale). Immersion Potential was neither explicitly manipulated nor systematically measured with commonly used scales such as the Story World Absorption Scale (Kuijpers et al., 2014). Thus, while the reported significant main effect of Immersion Potential is in line with the assumptions of the NCPM, the effect needs to be replicated with materials especially constructed or selected to study effects of immersion.

The effect of Perceived Valence reported at the supralexical level was also observed at the lexical one. As expected, content words of vignettes perceived as emotionally positive attracted, on average, faster reading as indicated by shorter mean TRTs. However, effects of (perceived) emotional valence were missing for mean FFD, suggesting an absence of valence-specific effects at early processing stages. Comparable findings are provided by EEG studies examining the time course of emotional word processing (Citron, 2012, for review). In this regard, effects of emotional content have been shown to appear at early and later processing stages (e.g., Herbert et al., 2006, 2008; Kissler et al., 2007; Hofmann et al., 2009; Schacht and Sommer, 2009b; Bayer et al., 2010). Whereas the early effect has been assumed to be predominantly governed by arousal, valence-driven modulations have been put forward in explanations of the later impact, indicating a deeper encoding of positive stimuli (e.g., Delplanque et al., 2004; Herbert et al., 2006, 2008; Kiefer et al., 2006; Conroy and Polich, 2007; Kissler et al., 2009). Whether the shorter total reading times reported here for vignettes perceived as positive and their constituting content words are in line with the assumed deeper encoding of positive compared to negative words remains an open question for future empirical research. So far, our results conform to the positivity bias during meaning construction (cf. Jacobs et al., 2015; Lüdtke and Jacobs, 2015), which assumes an easier semantic integration and construction of situation models for verbal materials including emotionally positive compared to negative words. Further studies have to be conducted to explore under which circumstances such a processing advantage may cause deeper encoding.

With respect to eye movements in picture viewing, descriptive statistics revealed that emotionally negative pictures tended to attract slightly shorter fixations and longer saccades compared to emotionally positive ones. However, neither Perceived Valence nor Arousal was of explanatory value for the three examined eye tracking parameters. Hence, emotion effects were absent for both fixation (i.e., Mean Fixation Duration, Total Number of Fixations) and scan (i.e., Mean Saccade Amplitude) patterns, as hypothesized and previously reported by studies with a comparable experimental design (Bradley et al., 2011; Niu et al., 2012).

Limitations and Future Directions

Building on the last point regarding the pictorial materials, the results of the present study have to be interpreted within the framework of the particular viewing paradigm and the analyses performed. Hence, it remains an open question whether valence-specific effects would be present when analyzing certain areas of interest (e.g., focal object versus background; Yang et al., 2012) or focusing on temporal dynamics in picture viewing. With respect to the latter, differences between positive and negative valences have been reported when emotional and neutral pictures were presented at the same time, competing for attentional resources (e.g., Calvo and Avero, 2005; Simola et al., 2013).

Complementary to studies focusing on single-word processing (e.g., Kousta et al., 2009; Briesemeister et al., 2011; Lüdtke and Jacobs, 2015), (perceived) positive valence showed comparable facilitative effects when manipulated at the text level. This hypothesized effect conforms to the aforementioned high correlation between the text and lexical levels (Bestgen, 1994; Whissell, 2003; Hsu et al., 2015b; Jacobs, 2015b). In general, the present distribution of Reading Speed in wpm indicated that our participants read rather fast (M = 367.48, SD = 123.97), with 330 wpm considered a threshold for fast reading (cf. Rayner et al., 2010). This observation emphasizes the low cognitive demands and high comprehensibility of our linguistic materials (cf. Cupchik et al., 1998; Søvik et al., 2000; Rayner et al., 2006; Liversedge et al., 2011; Lüdtke et al., 2019).

In this context, the influential role of our evaluative judgment task ought to be considered. Both the comparably low task demands (e.g., no comprehension questions) and the strong focus on clearly emotionally valenced materials (i.e., no neutral stimuli) might have influenced participants’ reactions and compliance with the task (Westermann et al., 1996; Estes and Verges, 2008). Although participants could in principle have performed the task without reading the vignettes entirely, visual inspections of fixation patterns indicated that they did not stop reading after the first sentences. Apart from the fact that evaluative judgment tasks have largely been applied in the context of emotion induction (e.g., Schupp et al., 1997; Bradley et al., 2001; Calvo and Lang, 2004; Nummenmaa et al., 2006; Brunyé et al., 2011; Schlochtermeier et al., 2013; Mouw et al., 2019; Child et al., 2020), effects of emotion have likewise been reported for tasks not explicitly focusing on the affective content (e.g., Schacht and Sommer, 2009a; Rellecke et al., 2011; Lüdtke and Jacobs, 2015). Consequently, it has been shown that the encoding of emotional valence takes place even when affective processing is not necessary to perform the task. Nevertheless, to what extent the effects of perceived emotional valence reported here would be observed in other reading situations remains an open empirical question.

Since the focus of the present study was on the induction of emotions in a cross-domain comparison, neutral stimuli were largely neglected. Consequently, future research has to reveal whether the reported effects of Perceived Valence remain stable when neutral materials are included. However, it should be noted that emotional valence is highly prevalent in objects of everyday life, making it difficult to select appropriate neutral stimuli (cf. Lebrecht et al., 2012). With respect to the comparison between vignettes and pictures, the slightly different presentation modes have to be considered. Whereas vignettes required self-paced reading, pictures were presented within a commonly used 3-s time interval (e.g., Calvo and Lang, 2004; Calvo and Avero, 2005; Nummenmaa et al., 2006; Yang et al., 2012; Bayer and Schacht, 2014; Marchewka et al., 2014). However, we are convinced that the reported domain-specific effects are more strongly related to differences associated with the processing of pictures compared to vignettes than to the different presentation modes.

As a step toward the use of more ecologically valid stimuli in psychological reading research, reading behavior was examined in self-constructed vignettes. Although they do not represent natural reading materials such as excerpts of well-known books (e.g., Hsu et al., 2014, 2015a,b,c), their major advantage concerns the opportunity to easily and systematically control, or rather manipulate, various variables of interest (e.g., choice of words, number of protagonists). The results acquired in this way can be used to inform future studies about potentially influential covariates that ought to be considered.

As mentioned in the introduction, the simplest model of the valence of supralexical units like vignettes would assume that the global valence is a (linear or non-linear) function of the valence values of its constituents. Following this idea, mean TRT and mean FFD were computed by averaging the fixation durations of all content words constituting a vignette. Consequently, lexical word features such as length, frequency, word position, and repetition were neglected (Raney et al., 2000; Kuperman et al., 2010, for reviews). Moreover, since the present study aimed to investigate natural reading processes, our study material could only be controlled for a limited set of variables. However, current computational quantitative narrative and sentiment analysis tools such as “QNArt” and “SentiArt” (Jacobs, 2018a, b, 2019) provide the possibility of quantitatively describing words on a wide range of lexical affective-semantic features. To investigate their possibly interactive impact on eye movements in reading, approaches other than the linear mixed-effects models applied here will be necessary.
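The averaging step described above can be sketched as follows; the vignette identifiers, words, and per-word total reading times are hypothetical and serve only to illustrate the unweighted aggregation from the lexical to the supralexical level:

```python
# Sketch: vignette-level mean TRT as the unweighted mean of its content
# words' total reading times; all values below are hypothetical.
from statistics import mean

# (vignette_id, content_word, total_reading_time_ms) triples
fixations = [
    (1, "storm", 240), (1, "house", 210), (1, "family", 260),
    (2, "beach", 180), (2, "sun", 170),
]

by_vignette = {}
for vid, _, trt in fixations:
    by_vignette.setdefault(vid, []).append(trt)
mean_trt = {vid: mean(values) for vid, values in by_vignette.items()}
# e.g. mean_trt[2] == 175.0
```

Note that this unweighted mean treats every content word alike, which is exactly why word length, frequency, position, and repetition remain unaccounted for.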

In this context, machine-learning-assisted methods of predictive modeling offer a promising and valuable alternative or complementary perspective (e.g., Yarkoni and Westfall, 2017; Vijayakumar and Cheung, 2018). Since this approach allows working with many intercorrelated variables and non-linear data patterns (Goodman et al., 2016; Cheung and Jak, 2018), it is particularly well suited for analyzing effects of literary texts on reading behavior (Jacobs, 2018a, b; Xue et al., 2019). The combined application of QNA data and machine learning algorithms such as neural networks or decision trees has yielded promising results in previous research on the beauty of words (Jacobs, 2017), the literariness of metaphors (Jacobs and Kinder, 2017, 2018), and the comprehensibility and emotion potential of poetic texts (Jacobs and Lüdtke, 2017; Jacobs et al., 2017; Jacobs, 2018a, b; Xue et al., 2019). Future research will have to provide comparable analyses for reading in prose (e.g., vignettes).

Conclusion

Considering that our results indicated that emotional vignettes are able to induce stronger valence effects than their pictorial counterparts, the present study provides further evidence for the suitability of textual materials in the area of emotion induction. Furthermore, this is the first eye tracking study showing statistically significant differences in the effects of positive and negative texts, and not only of emotional and neutral ones. In this context, results from previous experiments using isolated words and sentences could be replicated: perceived positive text valence attracted shorter reading times than perceived negative valence at both the supralexical and lexical levels.

Data Availability Statement

The datasets generated for this study are available on request to the corresponding author.

Ethics Statement

The studies involving human participants were reviewed and approved by the Ethics Committee of the Department of Education and Psychology at Freie Universität Berlin. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

FU and JL contributed to the conception and design of the study. FU developed the test stimuli and collected the data. FU performed the statistical analysis in consultation with AJ and JL. FU wrote the first draft of the manuscript. All authors contributed to the manuscript revision, read and approved the submitted version.

Funding

Open Access Funding was provided by the Freie Universität Berlin.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We would like to thank Ilai Jess for his help during the construction of the vignettes and the collection of the data of the pilot study.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.00905/full#supplementary-material

Footnotes

  1. ^ http://www.sr-research.com/experiment-builder
  2. ^ Normative ratings of valence were originally collected on a continuous sliding scale ranging from 1 (very negative) to 9 (very positive), with 5 indicating a neutral valence.
  3. ^ http://www.soscisurvey.de/
  4. ^ Based on ratings from 13 participants, the following statistics can be reported for neutral vignettes: Valence: M = 4.85, SD = 0.93; Arousal: M = 4.79, SD = 0.89; Comprehensibility: M = 4.77, SD = 0.54; Immersion Potential: M = 3.64, SD = 1.16; Emotion Induction Potential: M = 2.46, SD = 1.14.
  5. ^ Previous studies reported effects of participants’ mood on eye movements in both picture viewing (Wadlinger and Isaacowitz, 2006) and reading (Scrimin and Mason, 2015).
  6. ^ http://www.sr-research.com/data-viewer/
  7. ^ https://www.jmp.com/de_de/home.html
  8. ^ https://cran.r-project.org/

References

Altmann, U., Bohrn, I. C., Lubrich, O., Menninghaus, W., and Jacobs, A. M. (2012). The power of emotional valence—from cognitive to affective processes in reading. Front. Hum. Neurosci. 6:192. doi: 10.3389/fnhum.2012.00192

Altmann, U., Bohrn, I. C., Lubrich, O., Menninghaus, W., and Jacobs, A. M. (2014). Fact vs fiction – how paratextual information shapes our reading processes. Soc. Cogn. Affect. Neurosci. 9, 22–29. doi: 10.1093/scan/nss098

Anthony, L. (2015). TagAnt (Version 1.2.0) [Computer Software]. Tokyo: Waseda University.

Azizian, A., Watson, T. D., Parvaz, M. A., and Squires, N. K. (2006). Time course of processes underlying picture and word evaluation: an event-related potential approach. Brain Topogr. 18, 213–222. doi: 10.1007/s10548-006-0270-9

Baayen, R. H. (2008). Analyzing Linguistic Data: A Practical Introduction to Statistics using R, 1st Edn, Cambridge: Cambridge University Press.

Baayen, R. H., Davidson, D. J., and Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. J. Mem. Lang. 59, 390–412. doi: 10.1016/j.jml.2007.12.005

Bailey, H., and Zacks, J. M. (2011). Literature and event understanding. Sci. Study Literat. 1, 72–78. doi: 10.1075/ssol.1.1.07bai

Ballenghein, U., Megalakaki, O., and Baccino, T. (2019). Cognitive engagement in emotional text reading: concurrent recordings of eye movements and head motion. Cogn. Emot. 33, 1–13. doi: 10.1080/02699931.2019.1574718

Barr, D. J., Levy, R., Scheepers, C., and Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: keep it maximal. J. Mem. Lang. 68, 255–278. doi: 10.1016/j.jml.2012.11.001

Bates, D., Kliegl, R., Vasishth, S., and Baayen, H. (2015a). Parsimonious mixed models. arXiv [Preprint]. Available online at: http://arxiv.org/abs/1506.04967 (accessed April 6, 2020).

Bates, D., Maechler, M., Bolker, B., and Walker, S. (2015b). Fitting linear mixed-effects models Using lme4. J. Statist. Softw. 67, 1–48. doi: 10.18637/jss.v067.i01

Bayer, M., and Schacht, A. (2014). Event-related brain responses to emotional words, pictures, and faces–a cross-domain comparison. Front. Psychol. 5:1106. doi: 10.3389/fpsyg.2014.01106

Bayer, M., Sommer, W., and Schacht, A. (2010). Reading emotional words within sentences: the impact of arousal and valence on event-related potentials. Intern. J. Psychophysiol. 78, 299–307. doi: 10.1016/j.ijpsycho.2010.09.004

Bestgen, Y. (1994). Can emotional valence in stories be determined from words? Cogn. Emot. 8, 21–36. doi: 10.1080/02699939408408926

Bohn-Gettler, C. M. (2019). Getting a grip: the PET framework for studying how reader emotions influence comprehension. Discour. Process. 56, 386–401. doi: 10.1080/0163853X.2019.1611174

Boston, M. F., Hale, J., Kliegl, R., Patil, U., and Vasishth, S. (2008). Parsing costs as predictors of reading difficulty: an evaluation using the potsdam sentence corpus. J. Eye Mov. Res. 2, 1–12. doi: 10.16910/jemr.2.1.1

Box, G. E., and Cox, D. R. (1964). An analysis of transformations. J. R. Statist. Soc. 26, 211–243.

Bradley, M. M., Codispoti, M., Cuthbert, B. N., and Lang, P. J. (2001). Emotion and motivation I: defensive and appetitive reactions in picture processing. Emotion 1, 276–298. doi: 10.1037/1528-3542.1.3.276

Bradley, M. M., Hamby, S., Löw, A., and Lang, P. J. (2007). Brain potentials in perception: picture complexity and emotional arousal. Psychophysiology 44, 364–373. doi: 10.1111/j.1469-8986.2007.00520.x

Bradley, M. M., Houbova, P., Miccoli, L., Costa, V. D., and Lang, P. J. (2011). Scan patterns when viewing natural scenes: emotion, complexity, and repetition. Psychophysiology 48, 1544–1553. doi: 10.1111/j.1469-8986.2011.01223.x

Bradley, M. M., and Lang, P. J. (1994). Measuring emotion: the self-assessment manikin and the semantic differential. J. Behav. Ther. Exp. Psychiatry 25, 49–59. doi: 10.1016/0005-7916(94)90063-9

Bradley, M. M., and Lang, P. J. (1999). Affective Norms For English Words (ANEW): Instruction Manual And Affective Ratings (Technical Report C-1). Gainesville: University of Florida.

Bradley, M. M., and Lang, P. J. (2007). Affective Norms for English Text (ANET): Affective Ratings Of Text And Instruction Manual (Technical Report D-1). Gainesville: University of Florida.

Bradley, M. M., Miccoli, L., Escrig, M. A., and Lang, P. J. (2008). The pupil as a measure of emotional arousal and autonomic activation. Psychophysiology 45, 602–607. doi: 10.1111/j.1469-8986.2008.00654.x

Briesemeister, B. B., Kuchinke, L., and Jacobs, A. M. (2011). Discrete emotion effects on lexical decision response times. PLoS One 6:e23743. doi: 10.1371/journal.pone.0023743

Brophy, J., and McCaslin, M. (1992). Teachers’ reports of how they perceive and cope with problem students. Element. Sch. J. 93, 3–68. doi: 10.1086/461712

Brunyé, T. T., Ditman, T., Mahoney, C. R., Augustyn, J. S., and Taylor, H. A. (2009). When you and I share perspectives: pronouns modulate perspective taking during narrative comprehension. Psychol. Sci. 20, 27–32. doi: 10.1111/j.1467-9280.2008.02249.x

Brunyé, T. T., Ditman, T., Mahoney, C. R., and Taylor, H. A. (2011). Better you than I: perspectives and emotion simulation during narrative comprehension. J. Cogn. Psychol. 23, 659–666. doi: 10.1080/20445911.2011.559160

Budimir, S., and Palmović, M. (2011). Gaze differences in processing pictures with emotional content. Colleg. Antropol. 35, 17–23.

Bühler, K. (1934). Sprachtheorie: Die Darstellungsfunktion der Sprache. Jena: Fischer.

Calvo, M. G., and Avero, P. (2005). Time course of attentional bias to emotional scenes in anxiety: gaze direction and duration. Cogn. Emot. 19, 433–451. doi: 10.1080/02699930441000157

Calvo, M. G., Avero, P., and Lundqvist, D. (2006). Facilitated detection of angry faces: Initial orienting and processing efficiency. Cogn. Emot. 20, 785–811. doi: 10.1080/02699930500465224

Calvo, M. G., and Lang, P. J. (2004). Gaze patterns when looking at emotional pictures: motivationally biased attention. Motiv. Emot. 28, 221–243. doi: 10.1023/B:MOEM.0000040153.26156.ed

Camras, L. A., Grow, J. G., and Ribordy, S. C. (1983). Recognition of emotional expression by abused children. J. Clin. Child Adoles. Psychol. 12, 325–328. doi: 10.1080/15374418309533152

Carniglia, E., Caputi, M., Manfredi, V., Zambarbieri, D., and Pessa, E. (2012). The influence of emotional picture thematic content on exploratory eye movements. J. Eye Mov. Res. 5, 1–9. doi: 10.16910/jemr.5.4.4

Cheung, M. W.-L., and Jak, S. (2018). Challenges of big data analyses and applications in psychology. Zeitschrift Psychol. 226, 209–211. doi: 10.1027/2151-2604/a000348

Child, S., Oakhill, J., and Garnham, A. (2020). Tracking your emotions – an eye-tracking study on reader’s engagement with perspective during text comprehension. Q. J. Exp. Psychol. 174702182090556. doi: 10.1177/1747021820905561

Christianson, S. -Å, Loftus, E. F., Hoffman, H., and Loftus, G. R. (1991). Eye fixations and memory for emotional events. J. Exp. Psychol. 17, 693–701. doi: 10.1037/0278-7393.17.4.693

Citron, F. M. M. (2012). Neural correlates of written emotion word processing: a review of recent electrophysiological and hemodynamic neuroimaging studies. Brain Lang. 122, 211–226. doi: 10.1016/j.bandl.2011.12.007

Clifton, C., and Staub, A. (2011). “Syntactic influences on eye movements during reading,” in The Oxford Handbook of Eye Movements, eds S. P. Liversedge, G. Gilchrist, and S. Everling (Oxford: Oxford University Press), 895–909.

Clifton, C., Staub, A., and Rayner, K. (2007). “Eye movements in reading words and sentences,” in Eye Movements: A Window on Mind and Brain, eds R. P. G. van Gompel, M. H. Fischer, W. S. Murray, and R. L. Hill (Amsterdam: Elsevier), 341–371. doi: 10.1016/B978-008044980-7/50017-50013

Conroy, M. A., and Polich, J. (2007). Affective valence and P300 when stimulus arousal level is controlled. Cogn. Emot. 21, 891–901. doi: 10.1080/02699930600926752

Cupchik, G. C., and Laszlo, J. (1994). The landscape of time in literary reception: character experience and narrative action. Cogn. Emot. 8, 297–312. doi: 10.1080/02699939408408943

Cupchik, G. C., Oatley, K., and Vorderer, P. (1998). Emotional effects of reading excerpts from short stories by James Joyce. Poetics 25, 363–377. doi: 10.1016/S0304-422X(98)90007-9

Dan-Glauser, E. S., and Scherer, K. R. (2011). The Geneva affective picture database (GAPED): a new 730-picture database focusing on valence and normative significance. Behav. Res. Methods 43:468. doi: 10.3758/s13428-011-0064-1

De Houwer, J., and Hermans, D. (1994). Differences in the affective processing of words and pictures. Cogn. Emot. 8, 1–20. doi: 10.1080/02699939408408925

de Wied, M., Goudena, P. P., and Matthys, W. (2005). Empathy in boys with disruptive behavior disorders. J. Child Psychol. Psychiatry 46, 867–880. doi: 10.1111/j.1469-7610.2004.00389.x

Delplanque, S., Lavoie, M. E., Hot, P., Silvert, L., and Sequeira, H. (2004). Modulation of cognitive processing by emotional valence studied through event-related potentials in humans. Neurosci. Lett. 356, 1–4. doi: 10.1016/j.neulet.2003.10.014

Ding, J., Wang, L., and Yang, Y. (2015). The dynamic influence of emotional words on sentence processing. Cogn. Affect. Behav. Neurosci. 15, 55–68. doi: 10.3758/s13415-014-0315-6

Dufey, M., Fernández, A. M., and Mayol, R. (2011). Adding support to cross-cultural emotional assessment: validation of the International Affective Picture System in a Chilean sample. Univers. Psychol. 10, 521–533.

Eilola, T. M., and Havelka, J. (2010). Affective norms for 210 British English and Finnish nouns. Behav. Res. Methods 42, 134–140. doi: 10.3758/BRM.42.1.134

Estes, Z., and Verges, M. (2008). Freeze or flee? Negative stimuli elicit selective responding. Cognition 108, 557–565. doi: 10.1016/j.cognition.2008.03.003

Fiedler, K. (2011). Social Communication. New York, NY: Psychology Press.

Filik, R., and Leuthold, H. (2013). The role of character-based knowledge in online narrative comprehension: evidence from eye movements and ERPs. Brain Res. 1506, 94–104. doi: 10.1016/j.brainres.2013.02.017

Finch, J. (1987). The vignette technique in survey research. Sociology 21, 105–114. doi: 10.1177/0038038587021001008

Findlay, J. M., and Gilchrist, I. D. (2003). Active Vision: The Psychology of Looking and Seeing. Oxford: Oxford University Press.

Fox, E., Lester, V., Russo, R., Bowles, R. J., Pichler, A., and Dutton, K. (2000). Facial expressions of emotion: are angry faces detected more efficiently? Cogn. Emot. 14, 61–92. doi: 10.1080/026999300378996

Fox, J., and Weisberg, S. (2019). An R Companion to Applied Regression, 3rd Edn, Thousand Oaks: SAGE Publications Inc.

Frazier, L., and Rayner, K. (1982). Making and correcting errors during sentence comprehension: eye movements in the analysis of structurally ambiguous sentences. Cogn. Psychol. 14, 178–210. doi: 10.1016/0010-0285(82)90008-1

Gavrilidou, M., DeMesquita, P. B., and Mason, E. J. (1993). Greek teachers’ judgments about the nature and severity of classroom problems. Sch. Psychol. Intern. 14, 169–180. doi: 10.1177/0143034393142006

Gernsbacher, M. A., Goldsmith, H. H., and Robertson, R. R. (1992). Do readers mentally represent characters’ emotional states? Cogn. Emot. 6, 89–111. doi: 10.1080/02699939208411061

Gerrards-Hesse, A., Spies, K., and Hesse, F. W. (1994). Experimental inductions of emotional states and their effectiveness: a review. Br. J. Psychol. 85, 55–78. doi: 10.1111/j.2044-8295.1994.tb02508.x

Gillioz, C., Gygax, P., and Tapiero, I. (2012). Individual differences and emotional inferences during reading comprehension. Can. J. Exp. Psychol. 66, 239–250. doi: 10.1037/a0028625

Gillioz, C., and Gygax, P. M. (2017). Specificity of emotion inferences as a function of emotional contextual support. Discourse Process. 54, 1–18. doi: 10.1080/0163853X.2015.1095597

Glaser, W. R. (1992). Picture naming. Cognition 42, 61–105. doi: 10.1016/0010-0277(92)90040-O

Goodman, S. N., Fanelli, D., and Ioannidis, J. P. A. (2016). What does research reproducibility mean? Sci. Transl. Med. 8:341s12. doi: 10.1126/scitranslmed.aaf5027

Graesser, A. C., Singer, M., and Trabasso, T. (1994). Constructing inferences during narrative text comprehension. Psychol. Rev. 101:371. doi: 10.1037/0033-295X.101.3.371

Gygax, P., Garnham, A., and Oakhill, J. (2004). Inferring characters’ emotional states: can readers infer specific emotions? Lang. Cogn. Process. 19, 613–639. doi: 10.1080/01690960444000016

Gygax, P., Oakhill, J., and Garnham, A. (2003). The representation of characters’ emotional responses: do readers infer specific emotions? Cogn. Emot. 17, 413–428. doi: 10.1080/02699930244000048

Gygax, P., Tapiero, I., and Carruzzo, E. (2007). Emotion inferences during reading comprehension: what evidence can the self-pace reading paradigm provide? Discourse Process. 44, 33–50. doi: 10.1080/01638530701285564

Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends Cogn. Sci. 7, 498–504. doi: 10.1016/j.tics.2003.09.006

Henderson, R. R., Bradley, M. M., and Lang, P. J. (2014). Modulation of the initial light reflex during affective picture viewing. Psychophysiology 51, 815–818. doi: 10.1111/psyp.12236

Herbert, C., Junghofer, M., and Kissler, J. (2008). Event related potentials to emotional adjectives during reading. Psychophysiology 45, 487–498. doi: 10.1111/j.1469-8986.2007.00638.x

Herbert, C., Kissler, J., Junghöfer, M., Peyk, P., and Rockstroh, B. (2006). Processing of emotional adjectives: evidence from startle EMG and ERPs. Psychophysiology 43, 197–206. doi: 10.1111/j.1469-8986.2006.00385.x

Hinojosa, J. A., Carretié, L., Valcárcel, M. A., Méndez-Bértolo, C., and Pozo, M. A. (2009). Electrophysiological differences in the processing of affective information in words and pictures. Cogn. Affect. Behav. Neurosci. 9, 173–189. doi: 10.3758/CABN.9.2.173

Hinojosa, J. A., Méndez-Bértolo, C., and Pozo, M. A. (2010). Looking at emotional words is not the same as reading emotional words: behavioral and neural correlates. Psychophysiology 47, 748–757. doi: 10.1111/j.1469-8986.2010.00982.x

Hofmann, M. J., Kuchinke, L., Tamm, S., Võ, M. L., and Jacobs, A. M. (2009). Affective processing within 1/10th of a second: high arousal is necessary for early facilitative processing of negative but not positive words. Cogn. Affect. Behav. Neurosci. 9, 389–397. doi: 10.3758/CABN.9.4.389

Hsu, C.-T., Conrad, M., and Jacobs, A. M. (2014). Fiction feelings in Harry Potter: haemodynamic response in the mid-cingulate cortex correlates with immersive reading experience. Neuroreport 25, 1356–1361. doi: 10.1097/WNR.0000000000000272

Hsu, C. T., Jacobs, A. M., Altmann, U., and Conrad, M. (2015a). The magical activation of left amygdala when reading Harry Potter: an fMRI study on how descriptions of supra-natural events entertain and enchant. PLoS One 10:e0118179. doi: 10.1371/journal.pone.0118179

Hsu, C. T., Jacobs, A. M., Citron, F. M., and Conrad, M. (2015b). The emotion potential of words and passages in reading Harry Potter–An fMRI study. Brain Lang. 142, 96–114. doi: 10.1016/j.bandl.2015.01.011

Hsu, C. T., Jacobs, A. M., and Conrad, M. (2015c). Can Harry Potter still put a spell on us in a second language? An fMRI study on reading emotion-laden literature in late bilinguals. Cortex 63, 282–295. doi: 10.1016/j.cortex.2014.09.002

Huebner, E. S. (1991). Bias in special education decisions: the contribution of analogue research. Sch. Psychol. Q. 6, 50–65. doi: 10.1037/h0088240

Hyönä, J., Lorch, R. F. Jr., and Rinck, M. (2003). “Eye movement measures to study global text processing,” in The Mind’s Eye: Cognitive and Applied Aspects Of Eye Movement Research, eds J. Hyönä, R. Radach, and H. Deubel (Amsterdam: Elsevier Science), 313–334. doi: 10.1016/B978-044451020-4/50018-9

Imbir, K. K. (2016a). Affective norms for 4900 Polish words reload (ANPW_R): assessments for valence, arousal, dominance, origin, significance, concreteness, imageability and age of acquisition. Front. Psychol. 7:1081. doi: 10.3389/fpsyg.2016.01081

Imbir, K. K. (2016b). Affective norms for 718 Polish short texts (ANPST): dataset with affective ratings for valence, arousal, dominance, origin, subjective significance and source dimensions. Front. Psychol. 7:1030. doi: 10.3389/fpsyg.2016.01030

Ishii, R., Gojmerac, C., Stuss, D. T., Gallup, J. G., Alexander, M. P., Chau, W., et al. (2004). MEG analysis of “theory of mind” in emotional vignettes comprehension. Neurol. Clin. Neurophysiol. 28, 1–5.

Jacobs, A. M., Hofmann, M. J., and Kinder, A. (2016a). On elementary affective decisions: to like or not to like, that is the question. Front. Psychol. 7:1836. doi: 10.3389/fpsyg.2016.01836

Jacobs, A. M., Lüdtke, J., Aryani, A., Meyer-Sickendieck, B., and Conrad, M. (2016b). Mood-empathic and aesthetic responses in poetry reception. Sci. Study Literat. 6, 87–130. doi: 10.1075/ssol.6.1.06jac

Jacobs, A. M. (2011). “Neurokognitive Poetik: Elemente eines Modells des literarischen Lesens (Neurocognitive poetics: elements of a model of literary reading),” in Gehirn und Gedicht: Wie wir unsere Wirklichkeiten konstruieren, eds R. Schrott and A. M. Jacobs (München: Hanser), 492–520.

Jacobs, A. M. (2015a). Neurocognitive poetics: methods and models for investigating the neuronal and cognitive-affective bases of literature reception. Front. Hum. Neurosci. 9:186. doi: 10.3389/fnhum.2015.00186

Jacobs, A. M. (2015b). “Towards a neurocognitive poetics model of literary reading,” in Cognitive Neuroscience of Natural Language Use, ed. R. M. Willems (Cambridge: Cambridge University Press), 135–159.

Jacobs, A. M. (2017). Quantifying the beauty of words: a neurocognitive poetics perspective. Front. Hum. Neurosci. 11:622. doi: 10.3389/fnhum.2017.00622

Jacobs, A. M. (2018a). The gutenberg english poetry corpus: exemplary quantitative narrative analyses. Front. Digital Hum. 5:5. doi: 10.3389/fdigh.2018.00005

Jacobs, A. M. (2018b). (Neuro-)cognitive poetics and computational stylistics. Sci. Study Literat. 8, 165–208. doi: 10.1075/ssol.18002.jac

Jacobs, A. M. (2019). Sentiment analysis for words and fiction characters from the perspective of computational (Neuro-) poetics. Front. Robot. AI 6:53. doi: 10.3389/frobt.2019.00053

Jacobs, A. M., and Kinder, A. (2017). “The brain is the prisoner of thought”: a machine-learning assisted quantitative narrative analysis of literary metaphors for use in neurocognitive poetics. Metaphor Symb. 32, 139–160. doi: 10.1080/10926488.2017.1338015

Jacobs, A. M., and Kinder, A. (2018). What makes a metaphor literary? Answers from two computational studies. Metaphor Symb. 33, 85–100. doi: 10.1080/10926488.2018.1434943

Jacobs, A. M., and Lüdtke, J. (2017). “Immersion into narrative and poetic worlds,” in Narrative Absorption, eds F. Hakemulder, M. M. Kuijpers, E. S. H. Tan, K. Balint, and M. M. Doicaru (Amsterdam: John Benjamins), 69–97.

Jacobs, A. M., Schuster, S., Xue, S., and Lüdtke, J. (2017). What’s in the brain that ink may character. Sci. Study Literat. 7, 4–51. doi: 10.1075/ssol.7.1.02jac

Jacobs, A. M., Võ, M. L.-H., Briesemeister, B. B., Conrad, M., Hofmann, M. J., Kuchinke, L., et al. (2015). 10 years of BAWLing into affective and aesthetic processes in reading: what are the echoes? Front. Psychol. 6:714. doi: 10.3389/fpsyg.2015.00714

Jiang, Z., Li, W., Liu, Y., Luo, Y., Luu, P., and Tucker, D. M. (2014). When affective word valence meets linguistic polarity: behavioral and ERP evidence. J. Neurolinguist. 28, 19–30. doi: 10.1016/j.jneuroling.2013.11.001

Keib, K., Espina, C., Lee, Y.-I., Wojdynski, B., Choi, D., and Bang, H. (2016). Picture perfect: how photographs influence emotion, attention and selection in social media news posts. Paper Presented at the 2016 Annual Conference of the Association for Education in Journalism and Mass Communication, Minneapolis, MN.

Kennedy, A., and Pynte, J. (2005). Parafoveal-on-foveal effects in normal reading. Vis. Res. 45, 153–168. doi: 10.1016/j.visres.2004.07.037

Kensinger, E. A., and Schacter, D. L. (2006). Processing emotional pictures and words: effects of valence and arousal. Cogn. Affect. Behav. Neurosci. 6, 110–126. doi: 10.3758/CABN.6.2.110

Kiefer, M., Schuch, S., Schenck, W., and Fiedler, K. (2006). Mood states modulate activity in semantic brain areas during emotional word encoding. Cereb. Cortex 17, 1516–1530. doi: 10.1093/cercor/bhl062

Kissler, J., Assadollahi, R., and Herbert, C. (2006). Emotional and semantic networks in visual word processing: insights from ERP studies. Prog. Brain Res. 156, 147–183. doi: 10.1016/S0079-6123(06)56008-X

Kissler, J., and Herbert, C. (2013). Emotion, etmnooi, or emitoon?–Faster lexical access to emotional than to neutral words during reading. Biol. Psychol. 92, 464–479. doi: 10.1016/j.biopsycho.2012.09.004

Kissler, J., Herbert, C., Peyk, P., and Junghofer, M. (2007). Buzzwords: early cortical responses to emotional words during reading. Psychol. Sci. 18, 475–480. doi: 10.1111/j.1467-9280.2007.01924.x

Kissler, J., Herbert, C., Winkler, I., and Junghofer, M. (2009). Emotion and attention in visual word processing—An ERP study. Biol. Psychol. 80, 75–83. doi: 10.1016/j.biopsycho.2008.03.004

Knickerbocker, H., Johnson, R. L., and Altarriba, J. (2015). Emotion effects during reading: Influence of an emotion target word on eye movements and processing. Cogn. Emot. 29, 784–806. doi: 10.1080/02699931.2014.938023

Koelsch, S., Jacobs, A. M., Menninghaus, W., Liebal, K., Klann-Delius, G., von Scheve, C., et al. (2015). The quartet theory of human emotions: an integrative and neurofunctional model. Phys. Life Rev. 13, 1–27. doi: 10.1016/j.plrev.2015.03.001

Kousta, S.-T., Vinson, D. P., and Vigliocco, G. (2009). Emotion words, regardless of polarity, have a processing advantage over neutral words. Cognition 112, 473–481. doi: 10.1016/j.cognition.2009.06.007

Kuchinke, L., Jacobs, A. M., Grubich, C., Võ, M. L.-H., Conrad, M., and Herrmann, M. (2005). Incidental effects of emotional valence in single word processing: an fMRI study. Neuroimage 28, 1022–1032. doi: 10.1016/j.neuroimage.2005.06.050

Kuijpers, M. M., Hakemulder, F., Tan, E. S., and Doicaru, M. M. (2014). Exploring absorbing reading experiences: developing and validating a self-report scale to measure story world absorption. Sci. Study Literat. 4, 89–122. doi: 10.1075/ssol.4.1.05kui

Kuperberg, G. R., Kreher, D. A., Swain, A., Goff, D. C., and Holt, D. J. (2011). Selective emotional processing deficits to social vignettes in schizophrenia: an ERP study. Schizophrenia Bull. 37, 148–163. doi: 10.1093/schbul/sbp018

Kuperman, V., Dambacher, M., Nuthmann, A., and Kliegl, R. (2010). The effect of word position on eye-movements in sentence and paragraph reading. Q. J. Exp. Psychol. 63, 1838–1857. doi: 10.1080/17470211003602412

Kuperman, V., and Van Dyke, J. A. (2011). Effects of individual differences in verbal skills on eye-movement patterns during sentence reading. J. Mem. Lang. 65, 42–73. doi: 10.1016/j.jml.2011.03.002

Lanatà, A., Valenza, G., and Scilingo, E. P. (2013). Eye gaze patterns in emotional pictures. J. Amb. Intellig. Hum. Comput. 4, 705–715. doi: 10.1007/s12652-012-0147-6

Lang, P. J. (1980). “Behavioral treatment and bio-behavioral assessment: computer applications,” in Technology in Mental Health Care Delivery Systems, eds J. B. Sidowski, J. H. Johnson, and T. A. Williams (Norwood, NJ: Ablex), 119–137.

Lang, P. J., Bradley, M. M., and Cuthbert, B. N. (2008). International Affective Picture System (IAPS): Affective Ratings of Pictures and Instruction Manual (Technical Report A-8). Gainesville: University of Florida.

Larsen, J. T., Norris, C. J., and Cacioppo, J. T. (2003). Effects of positive and negative affect on electromyographic activity over zygomaticus major and corrugator supercilii. Psychophysiology 40, 776–785. doi: 10.1111/1469-8986.00078

Larsen, R. J., Mercer, K. A., Balota, D. A., and Strube, M. J. (2008). Not all negative words slow down lexical decision and naming speed: importance of word arousal. Emotion 8, 445–452. doi: 10.1037/1528-3542.8.4.445

Lebrecht, S., Bar, M., Barrett, L. F., and Tarr, M. J. (2012). Micro-valences: perceiving affective valence in everyday objects. Front. Psychol. 3:107. doi: 10.3389/fpsyg.2012.00107

Lehne, M., Engel, P., Rohrmeier, M., Menninghaus, W., Jacobs, A. M., and Koelsch, S. (2015). Reading a suspenseful literary text activates brain areas related to social cognition and predictive inference. PLoS One 10:e0124550. doi: 10.1371/journal.pone.0124550

Leiner, D. J. (2019). SoSci Survey (Version 3.1.06) [Computer Software]. München: SoSci Survey GmbH.

León, J. A., Dávalos, M. T., Escudero, I., Olmos, R., Morera, Y., and Froufé Torres, M. (2015). Effects of valence and causal direction in the emotion inferences processing during reading: evidence from a lexical decision task. Anales Psicol. 31, 677–686. doi: 10.6018/analesps.31.2.167391

Levine, D., Marziali, E., and Hood, J. (1997). Emotion processing in borderline personality disorders. J. Nerv. Ment. Dis. 185, 240–246. doi: 10.1080/01612840490486692

Liversedge, S., Gilchrist, I., and Everling, S. (2011). The Oxford Handbook of Eye Movements. Oxford: Oxford University Press.

Lüdtke, J., Froehlich, E., Jacobs, A. M., and Hutzler, F. (2019). The SLS-Berlin: validation of a German computer-based screening test to measure reading proficiency in early and late adulthood. Front. Psychol. 10:1682. doi: 10.3389/fpsyg.2019.01682

Lüdtke, J., and Jacobs, A. M. (2015). The emotion potential of simple sentences: additive or interactive effects of nouns and adjectives? Front. Psychol. 6:1137. doi: 10.3389/fpsyg.2015.01137

Lüdtke, J., Meyer-Sickendieck, B., and Jacobs, A. M. (2014). Immersing in the stillness of an early morning: testing the mood empathy hypothesis of poetry reception. Psychol. Aesthet. Creat. Arts 8, 363–377. doi: 10.1037/a0036826

Manning, C. D. (2011). “Part-of-speech tagging from 97% to 100%: is it time for some linguistics?,” in Computational Linguistics and Intelligent Text Processing, ed. A. F. Gelbukh (Berlin: Springer), 171–189. doi: 10.1007/978-3-642-19400-9_14

Mar, R. A., Oatley, K., Djikic, M., and Mullin, J. (2011). Emotion and narrative fiction: interactive influences before, during, and after reading. Cogn. Emot. 25, 818–833. doi: 10.1080/02699931.2010.515151

Marchewka, A., Żurawski, L., Jednoróg, K., and Grabowska, A. (2014). The Nencki Affective Picture System (NAPS): introduction to a novel, standardized, wide-range, high-quality, realistic picture database. Behav. Res. Methods 46, 596–610. doi: 10.3758/s13428-013-0379-1

Megalakaki, O., Ballenghein, U., and Baccino, T. (2019). Effects of valence and emotional intensity on the comprehension and memorization of texts. Front. Psychol. 10:179. doi: 10.3389/fpsyg.2019.00179

Miall, D. S. (1989). Beyond the schema given: affective comprehension of literary narratives. Cogn. Emot. 3, 55–78. doi: 10.1080/02699938908415236

Miall, D. S., and Kuiken, D. (2001). “Shifting perspectives: readers’ feelings and literary response,” in New Perspectives on Narrative Perspective, ed. W. van Peer (Albany: State University of New York Press), 289–301.

Montefinese, M., Ambrosini, E., Fairfield, B., and Mammarella, N. (2014). The adaptation of the affective norms for English words (ANEW) for Italian. Behav. Res. Methods 46, 887–903. doi: 10.3758/s13428-013-0405-3

Moors, A., De Houwer, J., Hermans, D., Wanmaker, S., Van Schie, K., Van Harmelen, A.-L., et al. (2013). Norms of valence, arousal, dominance, and age of acquisition for 4,300 Dutch words. Behav. Res. Methods 45, 169–177. doi: 10.3758/s13428-012-0243-8

Mouw, J. M., Leijenhorst, L. V., Saab, N., Danel, M. S., and van den Broek, P. (2019). Contributions of emotion understanding to narrative comprehension in children and adults. Eur. J. Dev. Psychol. 16, 66–81. doi: 10.1080/17405629.2017.1334548

Nakic, M., Smith, B. W., Busis, S., Vythilingam, M., and Blair, R. J. R. (2006). The impact of affect and frequency on lexical decision: the role of the amygdala and inferior frontal cortex. Neuroimage 31, 1752–1761. doi: 10.1016/j.neuroimage.2006.02.022

Niu, Y., Todd, R., and Anderson, A. K. (2012). Affective salience can reverse the effects of stimulus-driven salience on eye movements in complex scenes. Front. Psychol. 3:336. doi: 10.3389/fpsyg.2012.00336

Nummenmaa, L., Hyönä, J., and Calvo, M. G. (2006). Eye movement assessment of selective attentional capture by emotional pictures. Emotion 6, 257–268. doi: 10.1037/1528-3542.6.2.257

Olofsson, J. K., Nordin, S., Sequeira, H., and Polich, J. (2008). Affective picture processing: an integrative review of ERP findings. Biol. Psychol. 77, 247–265. doi: 10.1016/j.biopsycho.2007.11.006

Pegna, A. J., Khateb, A., Michel, C. M., and Landis, T. (2004). Visual recognition of faces, objects, and words using degraded stimuli: where and when it occurs. Hum. Brain Mapp. 22, 300–311. doi: 10.1002/hbm.20039

Pilarczyk, J., and Kuniecki, M. (2014). Emotional content of an image attracts attention more than visually salient features in various signal-to-noise ratio conditions. J. Vis. 14, 1–19. doi: 10.1167/14.12.4

Pinheiro, A. P., Dias, M., Pedrosa, J., and Soares, A. P. (2017). Minho Affective Sentences (MAS): probing the roles of sex, mood, and empathy in affective ratings of verbal stimuli. Behav. Res. Methods 49, 698–716. doi: 10.3758/s13428-016-0726-0

Poulou, M. (2001). The role of vignettes in the research of emotional and behavioural difficulties. Emot. Behav. Diffic. 6, 50–62. doi: 10.1080/13632750100507655

R Core Team (2019). R: A Language And Environment For Statistical Computing. Vienna: R Foundation for Statistical Computing.

Radach, R., Huestegge, L., and Reilly, R. (2008). The role of global top-down factors in local eye-movement control in reading. Psychol. Res. 72, 675–688. doi: 10.1007/s00426-008-0173-3

Radach, R., and Kennedy, A. (2013). Eye movements in reading: some theoretical context. Q. J. Exp. Psychol. 66, 429–452. doi: 10.1080/17470218.2012.750676

Raney, G. E., Therriault, D. J., and Minkoff, S. R. B. (2000). Repetition effects from paraphrased text: evidence for an integrated representation model of text representation. Discourse Process. 29, 61–81. doi: 10.1207/S15326950dp2901_4

Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychol. Bull. 124, 372–422. doi: 10.1037/0033-2909.124.3.372

Rayner, K. (2009). Eye movements and attention in reading, scene perception, and visual search. Q. J. Exp. Psychol. 62, 1457–1506. doi: 10.1080/17470210902816461

Rayner, K., Chace, K. H., Slattery, T. J., and Ashby, J. (2006). Eye movements as reflections of comprehension processes in reading. Sci. Stud. Read. 10, 241–255. doi: 10.1207/s1532799xssr1003_3

Rayner, K., Slattery, T. J., and Bélanger, N. N. (2010). Eye movements, the perceptual span, and reading speed. Psychon. Bull. Rev. 17, 834–839. doi: 10.3758/PBR.17.6.834

Recio, G., Conrad, M., Hansen, L. B., and Jacobs, A. M. (2014). On pleasure and thrill: the interplay between arousal and valence during visual word recognition. Brain Lang. 134, 34–43. doi: 10.1016/j.bandl.2014.03.009

Redondo, J., Fraga, I., Padrón, I., and Comesaña, M. (2007). The Spanish adaptation of ANEW (affective norms for English words). Behav. Res. Methods 39, 600–605. doi: 10.3758/BF03193031

Reichenbach, L., and Masters, J. C. (1983). Children’s use of expressive and contextual cues in judgments of emotion. Child Dev. 54, 993–1004. doi: 10.2307/1129903

Rellecke, J., Palazova, M., Sommer, W., and Schacht, A. (2011). On the automaticity of emotion processing in words and faces: event-related brain potentials evidence from a superficial task. Brain Cogn. 77, 23–32. doi: 10.1016/j.bandc.2011.07.001

Ribordy, S. C., Camras, L. A., Stefani, R., and Spaccarelli, S. (1988). Vignettes for emotion recognition research and affective therapy with children. J. Clin. Child Psychol. 17, 322–325. doi: 10.1207/s15374424jccp1704_4

Riegel, M., Wierzba, M., Wypych, M., Żurawski, Ł., Jednoróg, K., Grabowska, A., et al. (2015). Nencki affective word list (NAWL): the cultural adaptation of the Berlin affective word list–reloaded (BAWL-R) for Polish. Behav. Res. Methods 47, 1222–1236. doi: 10.3758/s13428-014-0552-1

Robinson, M. D., and Clore, G. L. (2001). Simulation, scenarios, and emotional appraisal: testing the convergence of real and imagined reactions to emotional stimuli. Pers. Soc. Psychol. Bull. 27, 1520–1532. doi: 10.1177/01461672012711012

Rubo, M., and Gamer, M. (2018). Social content and emotional valence modulate gaze fixations in dynamic scenes. Sci. Rep. 8, 1–11. doi: 10.1038/s41598-018-22127-w

Schacht, A., and Sommer, W. (2009a). Emotions in word and face processing: early and late cortical responses. Brain Cogn. 69, 538–550. doi: 10.1016/j.bandc.2008.11.005

Schacht, A., and Sommer, W. (2009b). Time course and task dependence of emotion effects in word processing. Cogn. Affect. Behav. Neurosci. 9, 28–43. doi: 10.3758/CABN.9.1.28

Schlochtermeier, L. H., Kuchinke, L., Pehrs, C., Urton, K., Kappelhoff, H., and Jacobs, A. M. (2013). Emotional picture and word processing: an fMRI study on effects of stimulus complexity. PLoS One 8:e55619. doi: 10.1371/journal.pone.0055619

Schmidtke, D. S., Schröder, T., Jacobs, A. M., and Conrad, M. (2014). ANGST: affective norms for German sentiment terms, derived from the affective norms for English words. Behav. Res. Methods 46, 1108–1118. doi: 10.3758/s13428-013-0426-y

Schupp, H. T., Cuthbert, B. N., Bradley, M. M., Birbaumer, N., and Lang, P. J. (1997). Probe P3 and blinks: two measures of affective startle modulation. Psychophysiology 34, 1–6. doi: 10.1111/j.1469-8986.1997.tb02409.x

Scott, G. G., O’Donnell, P. J., and Sereno, S. C. (2012). Emotion words affect eye fixations during reading. J. Exp. Psychol. 38, 783–792. doi: 10.1037/a0027209

Scrimin, S., and Mason, L. (2015). Does mood influence text processing and comprehension? Evidence from an eye-movement study. Br. J. Educ. Psychol. 85, 387–406. doi: 10.1111/bjep.12080

Segalowitz, S. J., and Lane, K. C. (2000). Lexical access of function versus content words. Brain Lang. 75, 376–389. doi: 10.1006/brln.2000.2361

Seifert, L. S. (1997). Activating representations in permanent memory: different benefits for pictures and words. J. Exp. Psychol. 23, 1106–1121. doi: 10.1037/0278-7393.23.5.1106

Siedlecka, E., and Denson, T. F. (2019). Experimental methods for inducing basic emotions: a qualitative review. Emot. Rev. 11, 87–97. doi: 10.1177/1754073917749016

Simola, J., Torniainen, J., Moisala, M., Kivikangas, M., and Krause, C. M. (2013). Eye movement related brain responses to emotional scenes during free viewing. Front. Syst. Neurosci. 7:41. doi: 10.3389/fnsys.2013.00041

Soares, A. P., Comesaña, M., Pinheiro, A. P., Simões, A., and Frade, C. S. (2012). The adaptation of the affective norms for English words (ANEW) for European Portuguese. Behav. Res. Methods 44, 256–269. doi: 10.3758/s13428-011-0131-7

Soares, A. P., Pinheiro, A. P., Costa, A., Frade, C. S., Comesaña, M., and Pureza, R. (2015). Adaptation of the International Affective Picture System (IAPS) for European Portuguese. Behav. Res. Methods 47, 1159–1177. doi: 10.3758/s13428-014-0535-2

Söderholm, C., Häyry, E., Laine, M., and Karrasch, M. (2013). Valence and arousal ratings for 420 Finnish nouns by age and gender. PLoS One 8:e72859. doi: 10.1371/journal.pone.0072859

Søvik, N., Arntzen, O., and Samuelstuen, M. (2000). Eye-movement parameters and reading speed. Read. Writ. 13, 237–255. doi: 10.1023/A:1026495716953

Speer, N. K., Reynolds, J. R., Swallow, K. M., and Zacks, J. M. (2009). Reading stories activates neural representations of visual and motor experiences. Psychol. Sci. 20, 989–999. doi: 10.1111/j.1467-9280.2009.02397.x

Suk, H.-J. (2006). Color and Emotion – a Study on the Affective Judgment Across Media and in Relation to Visual Stimuli. Ph.D. thesis, University of Mannheim, Mannheim.

Tempel, K., Kuchinke, L., Urton, K., Schlochtermeier, L. H., Kappelhoff, H., and Jacobs, A. M. (2013). Effects of positive pictograms and words: an emotional word superiority effect? J. Neurolinguist. 26, 637–648. doi: 10.1016/j.jneuroling.2013.05.002

Venables, W. N., and Ripley, B. D. (2002). Modern Applied Statistics with S, 4th Edn, New York, NY: Springer.

Verschuere, B., Crombez, G., and Koster, E. (2001). The international affective picture system. Psychol. Belgica 41, 205–217.

Vijayakumar, R., and Cheung, M. W.-L. (2018). Replicability of machine learning models in the social sciences: a case study in variable selection. Z. Psychol. 226, 259–273. doi: 10.1027/2151-2604/a000344

Võ, M. L., Conrad, M., Kuchinke, L., Urton, K., Hofmann, M. J., and Jacobs, A. M. (2009). The Berlin affective word list reloaded (BAWL-R). Behav. Res. Methods 41, 534–538. doi: 10.3758/BRM.41.2.534

Võ, M. L., Jacobs, A. M., and Conrad, M. (2006). Cross-validating the Berlin affective word list. Behav. Res. Methods 38, 606–609. doi: 10.3758/BF03193892

Wadlinger, H. A., and Isaacowitz, D. M. (2006). Positive mood broadens visual attention to positive stimuli. Motiv. Emot. 30, 87–99. doi: 10.1007/s11031-006-9021-1

Wallentin, M., Nielsen, A. H., Vuust, P., Dohn, A., Roepstorff, A., and Lund, T. E. (2011). Amygdala and heart rate variability responses from listening to emotionally intense parts of a story. Neuroimage 58, 963–973. doi: 10.1016/j.neuroimage.2011.06.077

Wallot, S., Hollis, G., and van Rooij, M. (2013). Connected text reading and differences in text reading fluency in adult readers. PLoS One 8:e71914. doi: 10.1371/journal.pone.0071914

Warnes, G. R., Bolker, B., and Lumley, T. (2018). gtools: Various R Programming Tools. R Package Version 3.8.1. Available online at: https://CRAN.R-project.org/package=gtools (accessed April 6, 2020).

Warriner, A. B., Kuperman, V., and Brysbaert, M. (2013). Norms of valence, arousal, and dominance for 13,915 English lemmas. Behav. Res. Methods 45, 1191–1207. doi: 10.3758/s13428-012-0314-x

Westermann, R., Spies, K., Stahl, G., and Hesse, F. W. (1996). Relative effectiveness and validity of mood induction procedures: a meta-analysis. Eur. J. Soc. Psychol. 26, 557–580. doi: 10.1002/(SICI)1099-0992(199607)26:4<557::AID-EJSP769>3.0.CO;2-4

Whissell, C. (2003). Readers’ opinions of romantic poetry are consistent with emotional measures based on the Dictionary of Affect in Language. Percept. Mot. Skills 96, 990–992. doi: 10.2466/pms.2003.96.3.990

Whissell, C. M., and Dewson, M. R. (1986). A dictionary of affect in language: III. Analysis of two biblical and two secular passages. Percept. Mot. Skills 62, 127–132. doi: 10.2466/pms.1986.62.1.127

Wilson-Mendenhall, C. D., Barrett, L. F., and Barsalou, L. W. (2013). Neural evidence that human emotions share core affective properties. Psychol. Sci. 24, 947–956. doi: 10.1177/0956797612464242

Xue, S., Lüdtke, J., Sylvester, T., and Jacobs, A. M. (2019). Reading Shakespeare sonnets: combining quantitative narrative analysis and predictive modeling – an eye tracking study. J. Eye Mov. Res. 12:2. doi: 10.16910/jemr.12.5.2

Yang, J., Wang, A., Yan, M., Zhu, Z., Chen, C., and Wang, Y. (2012). Distinct processing for pictures of animals and objects: evidence from eye movements. Emotion 12, 540–551. doi: 10.1037/a0026848

Yarkoni, T., and Westfall, J. (2017). Choosing prediction over explanation in psychology: lessons from machine learning. Perspect. Psychol. Sci. 12, 1100–1122. doi: 10.1177/1745691617693393

Yiend, J., and Mathews, A. (2001). Anxiety and attention to threatening pictures. Q. J. Exp. Psychol. Sect. A 54, 665–681. doi: 10.1080/713755991

Keywords: reading, vignettes, pictures, emotion induction, ratings, valence, eye movements

Citation: Usée F, Jacobs AM and Lüdtke J (2020) From Abstract Symbols to Emotional (In-)Sights: An Eye Tracking Study on the Effects of Emotional Vignettes and Pictures. Front. Psychol. 11:905. doi: 10.3389/fpsyg.2020.00905

Received: 09 October 2019; Accepted: 14 April 2020;
Published: 26 May 2020.

Edited by:

Sidarta Ribeiro, Federal University of Rio Grande do Norte, Brazil

Reviewed by:

Pilar Ferré Romeu, Rovira i Virgili University, Spain
Jazmín Cevasco, University of Buenos Aires, Argentina

Copyright © 2020 Usée, Jacobs and Lüdtke. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Franziska Usée, franziska.usee@fu-berlin.de

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.