ORIGINAL RESEARCH article
Sec. Emotion Science
Volume 9 - 2018 | https://doi.org/10.3389/fpsyg.2018.02168
Effect of Long-Term Music Training on Emotion Perception From Drumming Improvisation
- 1Department of General Psychology, University of Padua, Padua, Italy
- 2Department of Psychology, University of Bath, Bath, United Kingdom
Long-term music training has been shown to affect different cognitive and perceptual abilities. However, it is less well known whether it can also affect the perception of emotion from music, especially purely rhythmic music. Hence, we asked a group of 16 non-musicians, 16 musicians with no drumming experience, and 16 drummers to judge the level of expressiveness, the valence (positive or negative), and the category of emotion perceived from 96 drumming improvisation clips (audio-only, video-only, and audio-video) that varied in several musical features (e.g., musical genre, tempo, complexity, drummer’s expressiveness, and drummer’s style). Our results show that the level and type of music training influence the perceived expressiveness, valence, and emotion from solo drumming improvisation. Overall, non-musicians, non-drummer musicians, and drummers were affected differently by changes in some characteristics of the music performance; for example, musicians (with and without drumming experience) gave greater weight to the visual performance than non-musicians when making their emotional judgements. These findings suggest that, besides influencing several cognitive and perceptual abilities, music training also affects how we perceive emotion from music.
Music can be found in different forms in all human cultures, and it is recognised as an important means of emotion communication. Research has established that musicians can successfully communicate emotions to an audience (Gabrielsson and Lindström, 1995; Gabrielsson and Juslin, 1996; Balkwill and Thompson, 1999; Juslin and Madison, 1999), however, the emotions perceived and/or felt by the listener depend on various individual factors (e.g., personality; Juslin et al., 2008).
Despite the clear effect of music practice on the listener’s/observer’s cognitive processes (e.g., Lee and Noppeney, 2011; Petrini et al., 2011) and on the ability to recognise emotions from speech prosody (e.g., Thompson et al., 2004; Strait et al., 2009; Lima and Castro, 2011b; Pinheiro et al., 2015), it is still unclear how musical practice affects perceived emotions and expressiveness from music. The present study contributes to the emerging literature on the relation between musical expertise and expressiveness/emotion perception from music (e.g., Bhatara et al., 2011; Lima and Castro, 2011a; Castro and Lima, 2014). This knowledge will increase understanding of the effects of music practice on emotional, cognitive, and perceptual processes, and help assess whether music can be used as an efficient and cost-effective treatment for individuals with socio-emotional disorders (e.g., autistic and schizophrenic individuals).
To this end we examined whether individual differences in musical ability influence how emotions from music are perceived.
Expression, Perception, and Induction of Emotions Through Music
Musicians are able to communicate specific emotions via their expressive performances, and listeners generally perceive the intended emotions (Laukka and Gabrielsson, 2000). Despite the consistent level of agreement found among listeners when categorising the expressed or perceived emotion (Campbell, 1942; Juslin, 1997a), listeners’ agreement seems greater for some emotions (e.g., happiness, sadness) than for others (e.g., jealousy; Juslin and Laukka, 2004). The debate about which types of emotion are induced and/or perceived through music is still ongoing. Some researchers argue that music can induce only broad positive and negative states (Clark, 1983), whereas others argue that music can induce a range of both basic and complex emotions (Gabrielsson, 2001; Juslin and Västfjäll, 2008). Similarly, some researchers argue that music can induce basic emotions (e.g., happiness, sadness, anger, fear, disgust, and surprise; Krumhansl, 1997), whereas others hold that music can communicate only a limited set of states specific to music (e.g., amazement and peacefulness) but not common everyday emotions (e.g., shame and jealousy; Scherer, 2003; Zentner et al., 2008). A meta-analysis of 41 studies has shown that professional musicians can efficiently communicate five basic emotions (happiness, anger, sadness, fear, and tenderness) to listeners (Juslin and Laukka, 2003). In contrast, they are not able to communicate complex emotions such as contentment, curiosity, and jealousy (Juslin and Lindström, 2003).
The ability to communicate emotions through music mainly depends on its similarity to other forms of non-verbal communication and on the type of emotions that can be expressed through those channels (Clynes, 1977; Juslin, 1997b); for instance, the basic emotions that can be accurately perceived through music seem to mirror those that can be perceived through speech (Juslin and Laukka, 2003), as music and speech use similar neurological mechanisms to convey emotions (Escoffier et al., 2013; Peretz et al., 2015; Paquette et al., 2018). However, some negative emotions (e.g., guilt, shame, jealousy, disgust, contempt, embarrassment, anger, and fear) are not commonly experienced through music (Zentner et al., 2008), because even sad or nostalgic music usually induces positive emotions (Juslin and Laukka, 2003; Laukka, 2007). Indeed, sadness communicated through music is usually perceived (i.e., sad music is recognised as sad) but does not usually induce sadness in listeners (Zentner et al., 2000); on the contrary, listening to sad music often evokes positive emotions (e.g., listening to a sad romantic song; Juslin and Sloboda, 2010).
Consequently, we asked whether music practice would affect the perception of basic emotions as well as the perceived valence and level of expressiveness, so as to reflect the complexity of the emotion communicated by the musician. Moreover, because our aim was to examine whether music practice can affect how we perceive emotions communicated by others through music, we examined the perception, rather than the feeling, of emotions from music performance.
The Role of Individual Factors
To reach a good understanding of how listeners perceive emotions from music, it is also necessary to investigate contingent (Sloboda and Juslin, 2001) and individual factors (Abeles and Chung, 1996). Nevertheless, only a few studies have focused on the role of individual differences in the perception of emotions from music. For instance, being amusic or using cochlear implants affects emotion judgements from music (Ambert-Dahan et al., 2015; Gosselin et al., 2015). Moreover, Lima and Castro (2011a) and Castro and Lima (2014) found that the number of years of musical training is associated with more accurate recognition of musical emotion. Yet little is known about whether individual differences in musical training influence the perception of emotions from music and, if so, how. It is recognised that musical training and expertise affect the multisensory perception of music (e.g., Petrini et al., 2009a,b, 2010a, 2011; Lee and Noppeney, 2014). For example, trained musicians (e.g., drummers and pianists) and non-musicians differ in their sensitivity to desynchronisation between the movement of a musician and the resulting sound (e.g., Lee and Noppeney, 2011; Petrini et al., 2011). However, despite consistent findings showing a strong effect of long-term music and dance training on several cognitive and perceptual abilities (e.g., Scherer, 2003; Petrini et al., 2009a,b, 2010a; Lee and Noppeney, 2014) and their underlying neural mechanisms (e.g., Calvo-Merino et al., 2005; Petrini et al., 2011; Lee and Noppeney, 2014; Lu et al., 2014), it is still unclear whether the effect of musical training extends to the perception of emotions from music.
Here we investigate how different types of musical training and the absence of such training affect how emotions from musical performance are perceived.
How Musicians Communicate Emotions From Music
Sound plays a dominant role in the communication of emotions through music (Vines et al., 2006). The various expressive intentions of musicians are reflected in changes in acoustic cues (e.g., dynamics, timing, tempo, mode, pitch, harmony, loudness; Laukka and Gabrielsson, 2000; Juslin, 2001; Gabrielsson and Juslin, 2003; Gagnon and Peretz, 2003; Juslin and Laukka, 2003; Juslin and Lindström, 2003). For instance, performers communicate happiness by using the major mode and sadness by using the minor mode (Hevner, 1935; Gerardi and Gerken, 1995; Peretz et al., 1998; Gagnon and Peretz, 2003); note that, in the theory of Western music, mode refers to a sequence of tones and semitones arranged according to a specific order. Among all acoustic features, tempo seems to be one of the most important variables used to determine emotions and expression in music (Hevner, 1937; Rigg, 1964; Gagnon and Peretz, 2003) from early childhood (Dalla Bella et al., 2001). For example, slow melodies communicate sadness whereas fast melodies communicate happiness, fear, and anger (Juslin, 2001; Gabrielsson and Juslin, 2003; Gagnon and Peretz, 2003; Juslin and Laukka, 2003; Juslin and Lindström, 2003; Juslin, 2009). Moreover, musicians might also communicate their emotional intentions through musical genre; for instance, heavy metal music helps its fans to regulate sadness, to reduce anger, and to enhance positive emotions (Sharman and Dingle, 2015).
In addition to sound, when a musical performance is also transmitted visually (such as when listeners watch a live music performance), music becomes a richer emotional experience. Besides the acoustic aspects of the music performance, the facial expressions and gestures of musicians influence the emotions perceived by listeners (Davidson, 1993; Thompson et al., 2008). For example, it has been shown that marimbists, saxophonists, and bassoonists were able to communicate specific emotions through body movements alone (Dahl and Friberg, 2007). More specifically, musicians may communicate different emotions depending on the movements they use when playing a musical instrument. For instance, the authors (Dahl and Friberg, 2007) showed that large and fast movements communicated happiness whereas small and slow movements communicated negative emotions.
Similarly, it has been shown that when the sound and video of a clarinettist’s performance were presented together to a group of musicians, the visual information supported, modified, or confirmed the emotional content perceived through the sound (Vines et al., 2006). However, in a different study, this effect of visual information over sound did not extend to non-musicians, who relied more on the sound when perceiving the emotional content from a drummer’s and a saxophonist’s performance (Petrini et al., 2010b). These separate studies suggest that the weight given by musicians to the emotional visual information may be greater than that given by non-musicians. To examine this possibility, here we investigate how musicians and non-musicians perceive emotion from music when solely listening, solely observing, or both listening and observing such performances. Moreover, because it is still unclear whether long-term musical training affects how sound and body features are used to perceive emotions, we tested how changes in tempo, musical genre, musician’s expressiveness, musician’s style, and level of complexity of musical pieces affected the perceived emotions.
Melodic vs. Rhythmic Music
It is understood that melody modulates the perception of emotions from music (e.g., Gagnon and Peretz, 2003). Conversely, it is less well understood how rhythm, the other component of music, impacts emotional perception, despite findings showing that some emotions expressed through drums can be recognised with high accuracy by non-musicians (Petrini et al., 2010b). Drums and percussion allow more evident and less restricted upper-body movements than those permitted by other instruments (Petrini et al., 2010b) and are easily learned and reproduced from an early age, with tangible consequences on perceptual and cognitive abilities (Gerson et al., 2015). For example, it has been shown that preverbal infants engage in rhythmic behaviour more often for music than for speech, and that this engagement with music is associated with the level of positive behaviour (smiles) by the infant (Zentner and Eerola, 2010). Similarly, it has been shown that practising with the drums for only 5 min increases six-month-old infants’ ability to detect auditory and visual desynchronisation (Gerson et al., 2015). Hence, playing a rhythmic instrument could be an effective means of therapy from an early age if purely rhythmic instruments can communicate emotional states. For these reasons, we chose to focus on drumming because it is a purely rhythmic instrument, and the associated rhythmic and timing skills have been shown to have positive effects on the language skills of children with dyslexia (Overy, 2003). Similarly, interventions using drums have been shown to facilitate speech in non-verbal children with autism (Wan et al., 2011).
The Present Study
The main aim of the present study was to examine whether musical training affects the way emotions from drumming are perceived. As this is an exploratory study, specific hypotheses were not formulated; instead, we asked: (1) whether musicians would perceive emotion from rhythmic music differently from non-musicians; (2) whether the level of familiarity with the instrument used in the music performance would affect the way emotions are perceived; and (3) whether musicians would be affected more than non-musicians by changes in certain characteristics of the music performance, such as musical genre, tempo, complexity, drummer’s expressiveness, drummer’s style, and sensory modality (i.e., whether musicians would weigh the visual information from the music performance more than non-musicians).
To this end, we asked participants with different levels and types of musical expertise (drummers, other musicians with no drumming experience, and non-musicians) and familiarity with the musical stimulus (as the performance was drumming improvisations) to rate the level of expressiveness of music clips under different sensory conditions (audio-only, video-only, audio with video), judge the positivity (or negativity) of the perceived emotion, and categorise the perceived emotion among a group of basic emotions.
Materials and Methods
The study involved 48 adult volunteers recruited through social media. The number of participants is similar to or higher than in previous studies investigating the effect of long-term music training on cognitive and perceptual abilities (e.g., Petrini et al., 2010a, 2011; Bhatara et al., 2011; Lee and Noppeney, 2014; Lu et al., 2014). These studies generally reported medium to very large effect sizes on behavioural data (e.g., accuracy or response times in the detection of audiovisual asynchronies). Although in most cases these are not reported explicitly, they can be derived from figures or descriptive data, and seem to indicate differences between musicians and non-musicians, or equivalent effects of music training on sensitivity to emotions in music, of about Cohen’s d ≈ 0.50–1.50 (e.g., Bhatara et al., 2011; Lima and Castro, 2011a; Castro and Lima, 2014; Lee and Noppeney, 2014; Lu et al., 2014) or about 1.50–2.00 (Petrini et al., 2010a; see their Figure 5, and refer to the binomial distribution). Assuming a Cohen’s d of 1.00 in a between-group comparison, power of 0.80, and a significance level of 0.05, the estimated sample size is 16–17 participants per group (as calculated, for example, using the pwr package for R).
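The a priori sample-size reasoning above can be reproduced outside R. The following sketch (an illustration, not the authors' script, which used R's pwr) uses the standard normal approximation to the two-sample t-test power formula; the exact t-based solution gives the 16–17 per group reported above.

```python
# Normal-approximation a priori power analysis for a two-sample comparison:
# expected effect size d = 1.00, two-tailed alpha = .05, power = .80.
from scipy.stats import norm

d, alpha, power = 1.0, 0.05, 0.80
z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_power = norm.ppf(power)           # ~0.84
n_per_group = 2 * (z_alpha + z_power) ** 2 / d ** 2
print(round(n_per_group, 1))  # ~15.7; the exact t-based solution is ~16.7
```

The normal approximation slightly underestimates the required n; exact solvers such as R's `pwr.t.test` correct for the t-distribution and return roughly 16.7, hence 16–17 participants per group.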
Sixteen participants were non-musicians, aged 21–43 years, seven of whom were female (M = 29.37, SD = 6.66); sixteen were musicians with no drumming experience, aged 21–43 years, eight of whom were female (M = 27.81, SD = 5.20); and sixteen were drummers, aged 20–44 years, eight of whom were female (M = 29.50, SD = 7.72). The non-musicians had never played a musical instrument, except for the basic music classes of the Italian middle-school curriculum. Non-drummer musicians had played a musical instrument other than drums for at least 4 years (years of music training: M = 14.06, SD = 7.14). Drummers had been playing only drums for at least 4 years (years of music training: M = 15.81, SD = 9.33).
All participants reported normal hearing, and normal (or corrected to normal) vision. All participants gave their written informed consent before testing began. The study received ethical approval from the Ethics Committee of the Department of Psychology, University of Padua (n° 2425).
Stimuli and Apparatus
The stimuli were audio-visual clips of the performances of a professional drummer (years of music training: 25; years of teaching: 16 – see Figure 1). We asked the drummer to improvise each recorded performance. Improvisations were chosen instead of known musical pieces to avoid any effect of familiarity and episodic memory on participants’ perceived emotions (Juslin and Västfjäll, 2008; Petrini et al., 2010b). In addition, no instructions concerning specific emotional intentions were given to the drummer; instead, the features of the performances were manipulated. This allowed us to compare our results with those of studies that used the drums (Petrini et al., 2010b) and asked the drummers to communicate specific basic emotions (which reduces the complexity and the realism of the music performance). Hence, performances differed in musical genre (jazz or heavy metal), complexity (complex or simple rhythms), tempo (60 or 120 beats per minute), drummer’s expressiveness (minimal or maximum expressive interpretation of the music), and drummer’s style (playing with open or crossed arms). Crossing the levels of these factors produced 32 different stimuli. Initially, we asked the drummer to repeat each performance five times to obtain an initial measure of the drummer’s consistency across the repeated performances. Since the drummer was a technically accurate professional, the different clips of the same stimulus resulted in a consistent visual output (e.g., similar grooves, timing, and dynamics). However, we selected only one clip out of the five. First, we eliminated any clips with minor technical errors (e.g., the video camera not perfectly facing the performer), and then we chose the most accurate clip in terms of drumming technique (minor details, e.g., drumstick grip, fills). Therefore, despite recording a total of 160 audio-visual performances, only 32 of these were selected for the experiment.
Each audio-visual recording was then also converted into an audio-only stimulus and a video-only stimulus, reaching a total of 96 clips. Each clip lasted about 40 s, with the audio-only stimuli replacing the video of the drummer with a black screen and the video-only stimuli muting the audio track. We chose not to use different clips for the audio-only, video-only, and audio-visual conditions to avoid a possible confound, as the various clips could differ in how well they expressed a certain emotion. This ensured that any significant effect of modality could be tied to differences in sensory information rather than to differences in the quality of the music performance.
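The stimulus counts above follow directly from the factorial design: five two-level performance factors crossed with three sensory modalities. A short sketch (factor labels are paraphrased from the text) enumerates the design:

```python
# Enumerating the stimulus design: 5 two-level performance factors give
# 2**5 = 32 performances; crossing with 3 sensory modalities gives 96 clips.
from itertools import product

factors = {
    "genre": ["jazz", "heavy metal"],
    "complexity": ["simple", "complex"],
    "tempo_bpm": [60, 120],
    "expressiveness": ["minimal", "maximum"],
    "style": ["open arms", "crossed arms"],
}
performances = list(product(*factors.values()))
clips = [(modality, *perf)
         for modality in ("audio-only", "video-only", "audio-video")
         for perf in performances]
print(len(performances), len(clips))  # 32 96
```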
FIGURE 1. Example of clips used in the present study. On the left, frame and waveform sample from closed arms and neutral performance; on the right, frame and waveform sample from open arms and expressive performance.
The original audio-visual clips (see example of Videos in Supplementary Material) were recorded in a professional music studio using Sonor SQ2 drums and Ufip cymbals. The video recordings were made with an iPhone 6, facing the performer. For the audio recording we used three microphones XXL (two overhead and one for drum bass), mixer/sound card (M-Audio Fast Track Ultra 8R), and Pro Tools 11 programme. The video-track and the audio-track were combined together by using Adobe Premier Pro 2.0. The audio-visual files (Mpeg 1920 × 1080) were converted into avi 1366 × 768 files using Freemake Video Converter. The stimuli were presented to participants via E-Prime 2.0 (Psychology Software Tools, Inc.) on a Sony Vaio laptop computer. In audio-visual and audio-only stimuli, the audio was delivered through Sennheiser HD 280 Pro (64 ohms) headphones.
Participants were tested individually in different rooms, with testing always conducted in a quiet room of similar size. All participants were informed about the procedure before testing. Subsequently, the 96 clips were presented to participants, who were asked, after each clip, (i) to rate the level of perceived expressiveness using a 7-point Likert scale (1: little expressive – 7: very expressive), (ii) to judge the positivity (or negativity) of the perceived emotion with a dichotomous response, and (iii) to choose the perceived emotion from seven possible emotion categories: six basic emotions (happiness, sadness, anger, fear, disgust, and surprise) plus a neutral category for participants who could not readily categorise the perceived emotion among the basic emotions or who did not perceive any emotion. To answer these questions, participants used the number pad of the laptop. Questions and stimuli were presented in random order. This randomised method, which has been used in similar musical and non-musical studies (e.g., Collignon et al., 2008; Piwek et al., 2015) to assess the benefit of multisensory perception for emotion processing, was used to reduce the effects of learning and fatigue, and thus reduce the possibility that any benefit found for the audio-visual clips was due to increased familiarity with the audio-only and video-only clips. Moreover, participants could take a break after every 32 stimuli. Upon completion of the testing procedure, we gathered information about each participant via a questionnaire. For the purpose of the present research, we used the answers to three questions: whether they played any instruments, which instruments they played, and for how long they had practised with these instruments.
Because the response variables were repeated measurements (by participant and by clip), the ratings of perceived expressiveness, perceived positivity, and perceived emotion category were analysed using mixed-effects linear models. In particular, because perceived positivity and perceived emotion category consisted of dichotomous data (1: “yes/perceived,” 0: “no/not perceived”), these variables were analysed using logistic regression models (i.e., with a logit link function). The package “lme4” (Bates et al., 2015) for the R software was used to compute the models. The graphics were obtained using the package “effects” (Fox, 2003).
Our main aim was to examine how musical practice contributed to the perception of emotion from music. The fixed effects entered in the models were: group as a between-subjects factor (3 levels: non-musicians, non-drummer musicians, or drummers), and musical genre (2 levels: jazz or heavy metal), tempo (2 levels: 60 or 120 beats per minute), drummer’s expressiveness (2 levels: minimal or maximum expressive interpretation of the music), sensory modality (3 levels: audio-only, video-only, or audio-visual clips), complexity (2 levels: complex or simple rhythms), and drummer’s style (2 levels: open or crossed arms) as within-subjects factors. Participants and clips were treated as random effects, with random intercepts, in all models.
The main effect of group and all two-way interactions between group and the other factors were examined for each response variable (to examine the higher-order interactions or other main effects, the data can be found online, doi: 10.6084/m9.figshare.7262180.v1). The significance of each effect was assessed using a likelihood ratio test for nested models based on the chi-square distribution (Pinheiro and Bates, 2000). This test compares the likelihoods of two models that are identical except that one excludes and the other includes a given effect. The interactions were tested by adding them, one at a time, to the model inclusive of all main effects. A summary of the main effect of group and its two-way interactions with all the other factors is reported for each response variable, along with the models’ parameters, in Table 1. These effects are described below, separately by response variable. The interpretation of the significant effects was based on visual inspection of the figures (reporting estimated mean values or probabilities and 95% confidence intervals) and on consideration of the model parameters (Table 1) in cases of ambiguity.
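The nested-model likelihood ratio test works the same way regardless of the fitting software: twice the difference in log-likelihoods between the fuller and the reduced model is compared against a chi-square distribution with degrees of freedom equal to the number of parameters added. A minimal sketch (the log-likelihood values and df are made up for illustration; the authors computed these tests with lme4 in R):

```python
# Likelihood-ratio test for nested models:
# LR = 2 * (logLik_full - logLik_reduced), compared against chi-square
# with df = number of extra parameters in the fuller model.
from scipy.stats import chi2

loglik_reduced = -1250.0   # model without the interaction (hypothetical)
loglik_full = -1244.0      # model including the interaction (hypothetical)
df_diff = 2                # extra parameters in the full model (hypothetical)

lr_stat = 2 * (loglik_full - loglik_reduced)   # = 12.0
p_value = chi2.sf(lr_stat, df_diff)            # survival function = 1 - CDF
print(lr_stat, round(p_value, 4))
```

The test is valid only when the reduced model is a special case of the full model and both are fitted to the same data by maximum likelihood.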
TABLE 1. Summary of mixed-effects models on perceived expressiveness, emotional valence, and emotion categories perceived above chance.
A preliminary analysis was conducted to ascertain the level of consistency among participants’ responses for this variable on the 7-point Likert scale. Cronbach’s α = 0.88 indicated good consistency among participants in reporting perceived expressiveness.
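For readers unfamiliar with the statistic, Cronbach's alpha here treats participants as "items" and clips as observations. A short sketch of the standard formula, α = k/(k−1) × (1 − Σ item variances / variance of summed scores), on synthetic data (the demo ratings below are invented, not the study's data):

```python
# Cronbach's alpha for inter-rater consistency: raters as "items",
# clips as observations.
import numpy as np

def cronbach_alpha(ratings):
    """ratings: 2-D array-like, rows = clips, columns = raters."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                         # number of raters
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each rater
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Three hypothetical raters scoring four clips on a 1-7 scale:
demo = [[6, 5, 6],
        [2, 3, 2],
        [5, 5, 6],
        [1, 2, 1]]
print(round(cronbach_alpha(demo), 2))  # high agreement -> alpha near 1
```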
A significant main effect of group was found. As can be seen in Figure 2, non-drummer musicians gave higher expressiveness judgments than non-musicians or drummers. A significant interaction between group and musical genre was also found, with non-musicians and drummers showing a higher level of perceived expressiveness for heavy metal than jazz clips, whereas non-drummer musicians showed a similarly high level of expressiveness for both types of clips (Figure 2, panel A). A significant interaction was also found between group and tempo: the perceived expressiveness was higher for “120 beats per minute” than for “60 beats per minute” performances, but this effect was stronger in drummers (Figure 2, panel B). A significant interaction between group and modality was also found: the three groups perceived the video-only modality as less expressive than the audio-only and audio-video conditions, but this difference was more prominent in non-musicians than in drummers or non-drummer musicians (Figure 2, panel D). Finally, a significant interaction between group and complexity was found, with complex performances perceived as more expressive than simple performances only by drummers. No other significant interactions were found. Model parameters are reported in Table 1.
FIGURE 2. Interaction between Group and (A) Music genre, (B) Tempo, (C) Expressiveness, (D) Modality, (E) Level of difficulty, (F) Drummer style, on the perceived expressiveness of the clips. Error bars represent 95% confidence intervals of the estimated means.
As all responses were of a binomial type (1: “positive”; 0: “negative”), logistic regressions (i.e., generalised linear models with a logit link function) were computed. We did not find a significant main effect of group. However, a significant interaction between group and musical genre was observed, with all three groups perceiving a more positive emotion in heavy metal than jazz clips, although this difference was less prominent in the non-drummer musicians (Figure 3, panel A). A significant interaction between group and tempo was also found: although a more positive emotion was reported for performances at “120 beats per minute” than those at “60 beats per minute,” this difference appeared stronger in drummers than in the other participants (Figure 3, panel B). Finally, a significant interaction between group and modality was found: although the video-only modality was perceived less positively than the audio-only and audio-video conditions by all groups, this was especially evident for non-musicians (Figure 3, panel D). No other significant interactions were found. Model parameters are reported in Table 1.
FIGURE 3. Interaction between Group and (A) Music genre, (B) Tempo, (C) Expressiveness, (D) Modality, (E) Level of difficulty, (F) Drummer style, on the probability to perceive a more positive (as opposed to more negative) emotion. Error bars represent 95% confidence intervals of the estimated probabilities.
We asked participants to choose the perceived emotion by selecting one of seven emotion categories (neutral, happiness, sadness, anger, fear, disgust, and surprise). The most frequently chosen emotion was neutral (37.48%), followed by happiness (20.40%), sadness (12.46%), surprise (10.63%), anger (9.51%), disgust (5.51%), and fear (4.01%). Since there were seven emotions in total, each had a 14.28% (100/7) chance of being chosen at random. As a result, we only discuss the categories that exceeded the level expected from a uniform distribution of responses, namely neutral (37.48%) and happiness (20.40%). We analysed and discussed each emotion separately, treating each as binomial data (1 if that emotion was chosen, 0 if it was not). Therefore, to analyse the data we again used logistic regressions.
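The above-chance criterion can also be checked formally with an exact binomial test, which the text does not report. A sketch under strong simplifying assumptions: the count is reconstructed from the reported percentage (20.40% "happiness" of 48 participants × 96 clips = 4608 responses) and, unlike the mixed models, the test ignores the dependence among a participant's repeated responses, so it is illustrative only.

```python
# One-sided exact binomial check of whether a category exceeds the 1/7
# (~14.28%) chance level. Count reconstructed from reported percentages;
# responses treated as independent, which they are not (illustration only).
from scipy.stats import binom

n_trials = 48 * 96                  # participants x clips = 4608
k_happy = round(0.2040 * n_trials)  # ~940 "happiness" choices
p_chance = 1 / 7

# P(X >= k_happy) under the chance model:
p_value = binom.sf(k_happy - 1, n_trials, p_chance)
print(k_happy, p_value < 0.001)
```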
For the neutral emotion we did not find a significant main effect of the group. However, we observed a significant interaction between group and musical genre: heavy metal was less likely to convey neutral emotion than jazz, but this effect was stronger in drummers (Figure 4, panel A). No other interactions reached statistical significance. Model parameters are reported in Table 1.
FIGURE 4. Interaction between Group and (A) Music genre, (B) Tempo, (C) Expressiveness, (D) Modality, (E) Level of difficulty, (F) Drummer style, on the probability to experience “neutral” emotion. Error bars represent 95% confidence intervals of the estimated probabilities.
For happiness we observed a significant main effect of group. As can be seen from the parameters in Table 1, drummers had a higher probability of perceiving happy emotions than non-musicians, whereas non-drummer musicians perceived less happiness than non-musicians. However, neither of the two parameters was significant, suggesting that the significant difference between groups can only be explained by the greater level of happiness perceived by drummers compared to non-drummer musicians. A significant interaction between group and musical genre was observed: although happiness was more likely to be perceived for heavy metal than jazz, this difference was more prominent for non-musicians than for the other groups (Figure 5, panel A; parameters in Table 1). A significant interaction was also found between group and modality: although the video-only modality was less likely to convey happiness than the audio-only and audio-video conditions, this effect appeared stronger in non-musicians than in the other groups (Figure 5, panel D). Finally, a significant interaction between group and drummer’s style was observed, with crossed arms more likely to convey happiness for drummers but not for the other two groups (Figure 5, panel F). No other significant interactions were found. Model parameters are reported in Table 1.
FIGURE 5. Interaction between Group and (A) Music genre, (B) Tempo, (C) Expressiveness, (D) Modality, (E) Level of difficulty, (F) Drummer style, on the probability to experience “happiness” emotion. Error bars represent 95% confidence intervals of the estimated probabilities.
An additional analysis was conducted on the association between years of music training and emotional perception from music. Given the limited size of the groups, we pooled drummers and non-drummer musicians together. Note that the between-group difference in terms of years of music training was negligible, ΔM = 1.75 years, t(30) = 0.59, p = 0.56. The analysis was conducted using the same (generalised) mixed-effects linear models described above, but adding years of music training as another predictor. Due to the exploratory nature of this analysis, we set a critical α = 0.005 for significance. Effect size was reported as the odds ratio for a 5-year increase in music training (OR5y) or the model parameter B (again for a 5-year increase in music training) depending on the response variable.
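The OR5y effect size defined above is a simple rescaling of the logistic model's per-year coefficient: OR5y = exp(5 × b_per_year). A small sketch of the conversion (the worked value uses the OR5y = 0.80 reported below; the helper function is ours, for illustration):

```python
# Converting between a per-year logistic regression coefficient and the
# odds ratio for a 5-year increase in music training: OR_5y = exp(5 * b).
import math

def odds_ratio(b_per_year, years=5):
    """Odds ratio implied by a logit coefficient over a span of years."""
    return math.exp(years * b_per_year)

# A reported OR5y = 0.80 implies this per-year coefficient:
b = math.log(0.80) / 5
print(round(b, 4), round(odds_ratio(b), 2))  # -0.0446 0.8
```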
Of the four response variables (perceived expressiveness, emotional valence, “neutral” emotion, and “happiness” emotion), only happiness showed a main effect of training: years of music training was negatively associated with the choice of the happy emotion, χ2(1) = 9.00, p = 0.003, OR5y = 0.80. This main effect was explained by an interaction with musical genre, χ2(1) = 10.88, p < 0.001, such that the negative association between years of music training and the choice of happiness was greater for jazz than for metal clips, OR5y = 0.78. For perceived expressiveness, we found a significant interaction between years of music training and musical genre, χ2(1) = 56.90, p < 0.001 (i.e., years of music training was negatively associated with perceived expressiveness for jazz but not for metal, B5y = -0.21), and a significant interaction between years of music training and modality, χ2(1) = 21.35, p < 0.001 (i.e., years of music training was negatively associated with perceived expressiveness for the video-only modality, B5y = -0.18, but not for the other two modalities). No other significant main effects or interactions were found.
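To make the OR5y effect-size convention concrete, the following is a minimal sketch (in Python, for illustration only; the original analyses used R's lme4) of converting a per-year log-odds coefficient into an odds ratio for a 5-year increase. The per-year coefficient below is back-derived from the reported OR5y = 0.80, not taken from the fitted model.

```python
import math

def odds_ratio_per_increment(beta_per_unit, increment=5.0):
    """Convert a per-unit log-odds coefficient into an odds ratio
    for an `increment`-unit increase in the predictor."""
    return math.exp(increment * beta_per_unit)

# Hypothetical per-year log-odds slope, chosen so that the 5-year
# odds ratio reproduces the reported OR5y = 0.80 for happiness.
beta_per_year = math.log(0.80) / 5.0

print(round(odds_ratio_per_increment(beta_per_year), 2))  # 0.8
```

An OR5y below 1 thus corresponds to a negative log-odds slope: each additional 5 years of training multiplies the odds of choosing "happiness" by 0.80 under this model.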
In the current study, we asked non-musicians, drummers, and non-drummer musicians to judge a series of solo drumming improvisation clips for their level of expressiveness, their positivity (or negativity), and their portrayed emotion among seven categories (happiness, sadness, anger, fear, disgust, surprise, and neutral). The clips differed in musical genre, tempo, complexity, drummer’s expressiveness, drummer’s style, and sensory modality (audio-only, video-only, and audio-video). Overall, our results showed that, with few exceptions, individual differences in musical practice and a high level of familiarity with the instrument influence how expressiveness and emotions from rhythmic music are perceived. Regardless of the type and level of musical practice, all groups were able to perceive emotion from music, in line with other studies (e.g., Fredrickson, 2000; Vines et al., 2006). For example, all participants could recognise the level of expressiveness communicated by the drummer. That is, all participants, regardless of the level and type of music training, gave higher ratings of expressiveness and perceived happiness more often when the performer played with maximum expressive interpretation of the music. In other words, musicianship rarely affected overall judgements of expressiveness, valence, and emotion, but it often did affect these judgements when specific features of the music were manipulated (e.g., musical genre and modality). This lack of a main effect of musical practice is not surprising if one considers that, regardless of the type and level of musical expertise, everyone can perceive emotions from music. However, when specific musical features are manipulated, individual competences can drive the way emotions from music are perceived. That is, non-drummer musicians, drummers, and non-musicians were affected differently by changes in musical genre, tempo, and sensory modality when perceiving expressiveness and specific emotions from drumming improvisation.
For example, musicians gave greater weight than non-musicians to the visual information when perceiving the drummer’s expressed emotions. Finally, the effect of musicianship changed with the amount of musical experience: perceived expressiveness and happiness decreased with increasing years of music training, specifically for jazz and for video-only clips.
Non-drummer musicians assigned higher ratings of expressiveness to solo drumming improvisations than non-musicians. Listening to and/or watching a solo drumming performance may be an uncommon experience, especially for non-musicians. By contrast, such performances are likely more familiar to non-drummer musicians, such as guitarists or bassists, who are used to playing alongside a drummer. Playing drums rather than any other musical instrument did not make a difference in the judgement of expressiveness, as no difference was found between non-drummer musicians and drummers. Neither did drummers differ from non-musicians, possibly because drummers, given their background, focused mostly on the technical performance rather than on its level of expressiveness.
The drummer’s performances had a similar emotional valence for non-musicians, non-drummer musicians, and drummers: all groups gave more positive than negative responses. Drummers perceived happiness more often than non-drummer musicians, which likely depends on their level of familiarity with the played instrument. Non-drummer musicians and non-musicians perceived the same level of happiness from the performances, supporting previous findings that non-musicians can readily recognise happiness from drumming improvisation (Petrini et al., 2010b), even when the drummer is not instructed to play with a specific emotion in mind. These results suggest that when we listen to and/or watch musical performances drawn from our own motor repertoire (e.g., drummers listening to and/or observing drum performances), more positive emotions are perceived. Indeed, according to the Shared Affective Motion Experience (SAME) model, drummers watching and/or listening to a drumming performance are able to access specific information at all levels of the motor hierarchy (the intention, goal, kinematic, and muscle levels) and thus to infer emotional intention (Molnar-Szakacs and Overy, 2006; Overy and Molnar-Szakacs, 2009; Molnar-Szakacs et al., 2012). Moreover, studies of dancers who had equivalent visual experience with certain dance actions but differed in motor experience (Calvo-Merino et al., 2005, 2006) have shown that action-representation areas of the mirror neuron system respond to purely motor experience. Our results are consistent with this differentiation between visual and motor experience with the portrayed actions and suggest that purely motor experience with the instrument played can enhance the level of emotion perceived and likely elicit a specific response of the limbic system (Koelsch et al., 2006; Koelsch, 2009; Petrini et al., 2011).
Despite many studies (e.g., Laukka and Gabrielsson, 2000; Juslin and Laukka, 2003; Juslin et al., 2010) suggesting that music can communicate a wide range of emotions, we found that all participants, regardless of the level and type of music training, reported perceiving mostly neutral and happy emotions from solo drumming improvisation. This finding replicates previous results (Petrini et al., 2010b) on solo drumming improvisation, in which neutral and happy emotions were amongst the most frequently perceived. However, we did not find the same results for anger, another emotion previously shown to be recognised with high accuracy (Petrini et al., 2010b), because of the different music styles used in the present study: anger was perceived extremely rarely in jazz clips. In fact, anger was perceived in 15.80% of cases (above chance) in the heavy metal clips, as compared to only 3.21% in the jazz clips, thus corroborating the results of Petrini et al. (2010b), which were obtained with a heavy metal drumming style. Given that the musician in our study was not instructed to express specific emotions, participants may have mainly perceived happiness simply because the performer chose to express that particular emotion rather than any of the others (e.g., sadness, anger, fear, disgust, or surprise). The reports of perceiving a neutral emotion could signify that rhythmic music does not convey any emotion in the majority of cases. However, given the relatively low percentage with which the neutral emotion was chosen (∼37%) and the high level of expressiveness and positivity perceived from the drumming clips, this seems improbable. It is more likely that participants could not readily categorise the perceived emotions among the emotion categories given, with the exception of happiness.
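The "above chance" comparison rests on simple arithmetic: with a forced choice among seven categories, chance selection is 1/7 ≈ 14.3%. A minimal check against the anger selection rates reported above (this illustrates the comparison only, not the study's actual inferential test):

```python
# Chance level for a forced choice among 7 emotion categories.
n_categories = 7
chance = 1 / n_categories  # ≈ 0.1429

# Anger selection rates reported in the text.
anger_metal = 0.1580
anger_jazz = 0.0321

print(anger_metal > chance)  # True: above the nominal 1/7 rate
print(anger_jazz > chance)   # False: well below chance
```

The same comparison applies to the neutral (~37%) and happy categories, both of which clearly exceed the 14.3% baseline.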
The possibility that participants could not categorise the perceived emotions among the choices given is supported by the fact that neutral responses were mostly given when the drummer played with minimal expressive interpretation of the music, suggesting that participants were able to differentiate between high- and low-emotion performances and to perceive changes in expressive intention (Davidson, 1993).
Drummers, non-drummer musicians, and non-musicians gave different ratings of expressiveness to performances depending on musical genre. For non-drummer musicians, heavy metal and jazz performances had a similar level of expressiveness, while for drummers and non-musicians heavy metal performances were more expressive than jazz ones. Moreover, drummers perceived more positive emotions in response to heavy metal than to jazz performances. Non-musicians also perceived happiness from heavy metal clips more often than from jazz performances, even though happiness is usually difficult to differentiate from anger (Dahl and Friberg, 2007), as the two share similar acoustic features (e.g., fast tempo, high pitch; Juslin, 2009). The finding that non-musicians perceived positive emotions from heavy metal is intriguing and unexpected. Indeed, previous research (Took and Weiss, 1994; Selfhout et al., 2008) often showed that heavy metal music conveys negative thoughts and feelings (e.g., aggression and hostility), and relates to low school performance, antisocial behaviour, drug use, delinquency, and suicidal acts, although this often depends on song lyrics (Anderson et al., 2003), which were not part of this study. Our results are consistent with a recent study (Sharman and Dingle, 2015) showing that listening to heavy metal music helps fans of this genre feel positive emotions and enhances their happiness and general well-being. Therefore, our findings show that heavy metal music can indeed convey positive emotions to listeners.
Although all participants, in line with previous studies using melodic music (e.g., Juslin, 2009), agreed that faster performances were more expressive and conveyed more positive emotions than slower ones, drummers gave higher ratings of expressiveness and perceived positive emotions more often than non-drummer musicians and non-musicians. Our results support those of Wapnick et al. (2004), who found that piano majors were more affected by changes in tempo than non-majors when judging the tone quality and note accuracy of performances from an international piano competition. Our findings suggest that listeners who share the performer’s type of music training might be more receptive to changes in emotional and sensory information than listeners with a different type of musical experience.
Unsurprisingly, drummers judged complex rhythms as more expressive than non-musicians did. Given their long-term training with the portrayed instrument, drummers are better able to recognise technical aspects of the performance (e.g., odd time signatures, use of the double pedal). In contrast, for non-musicians, simpler musical extracts (e.g., in 4/4 time) could result in greater pleasure, as suggested by the inverted-U hypothesis, according to which non-musician listeners prefer music that is neither overly simple nor overly complex (De Meijer, 1989, 1991; Orr and Ohlsson, 2001).
Non-musicians perceived happiness more often for open-arm than for crossed-arm styles, when compared with drummers. It is indeed well known that arms held close to the body communicate negative emotion, whereas raised arms communicate joy (De Meijer, 1989, 1991; Dahl and Friberg, 2007). In contrast, drummers perceived more happiness from crossed-arm than from open-arm performances. As discussed above, sharing an instrument-specific motor repertoire with the represented performance may change how listeners perceive emotion from specific actions (Broughton and Davidson, 2014).
We found that non-musicians judged video-only performances as less expressive, and as conveying less positive emotion, than both groups of musicians did. Our results show that all participants weighted the sound of the musical performances more heavily than the musician’s body and facial movements when judging the level of expressiveness and positive emotion (Petrini et al., 2010b). However, our results also show that the weight given to the visual information (the musician’s body and facial movements) increases with music training (Vines et al., 2006). Both non-drummer musicians and drummers gave greater weight to the visual information than non-musicians, likely as a consequence of their similar level of visual experience with drumming performances (Calvo-Merino et al., 2005, 2006). This suggests that both specific motor and visual experience with the instrument enhance the ability to use sensory information when perceiving emotions from music (e.g., Petrini et al., 2009a,b, 2011; Lee and Noppeney, 2011).
The greater the experience of the listeners/viewers (e.g., drummers and non-drummer musicians with many years of music training), the less expressiveness they perceived from jazz and video-only clips, and the less happiness from jazz music, most likely because they focused on the technical features of these drumming performances. However, previous studies using melodic music (e.g., Bhatara et al., 2011; Lima and Castro, 2011a; Castro and Lima, 2014) found that years of music training were associated with enhanced recognition of musical expressiveness and emotion from audio-only excerpts. These contrasting results might depend on our use of rhythmic, rather than melodic, music and on the manipulation of additional features such as musical genre and sensory modality, which were not previously investigated (e.g., Bhatara et al., 2011; Lima and Castro, 2011a; Castro and Lima, 2014). Indeed, our results align with this earlier research for the heavy metal style and for audio-only and audio-video performances, with neither showing a significant decrease in perceived expressiveness and emotion with increasing years of musical experience.
Our results can also contribute to the theoretical framework named BRECVEMA (Juslin, 2013). This model unifies eight mechanisms of emotion induction (e.g., brain stem reflex, rhythmic entrainment, evaluative conditioning) that mediate between musical features and the emotions induced in listeners/viewers. These mechanisms can explain both why a given event arouses an emotion and why the aroused emotion is of a certain kind. For example, the rhythmic entrainment mechanism holds that a piece of music evokes an emotion because a powerful musical rhythm influences an internal bodily rhythm in the listener (e.g., heart rate). Each of these mechanisms can induce emotion from music, and in many instances more than one mechanism is involved (Juslin et al., 2013). Since we examined emotion perception, we cannot definitively conclude that long-term music training or expertise would also affect felt emotions; however, our results do suggest that long-term practice or expertise may act as an additional emotion-induction mechanism, since musicians perceived higher levels of emotion and expressiveness than non-musicians in many instances. Future studies could examine this possibility by using both melodic and rhythmic music and testing emotion induction in individuals with different levels and types of musical training. For example, reports of felt emotions as well as physiological responses could be used to examine whether the perceived expressiveness and/or emotion is also felt (Juslin and Laukka, 2004).
Finally, our findings have potential implications for music therapy and clinical practice. They suggest that drumming could be used to enhance the perception of happiness in young children and infants who find it difficult to judge emotion from other non-verbal cues (e.g., facial expression, voice prosody, and body movements); drums are an approachable instrument and have been shown to affect cognitive abilities from an early age (Gerson et al., 2015). Also, since practising a musical instrument has been shown to produce more “joy,” “emotional synchronicity,” and “initiation of engagement” in autistic children (Kim et al., 2009), a period of training with drums could have positive effects on these individuals’ emotional state and understanding. Finally, since our results show that long-term music training enhances the ability to perceive emotions from facial and body information during a musician’s performance, training with a musical instrument (not necessarily the drums) could be beneficial to individuals who have difficulties using this type of information when judging emotions in others.
Our study is quasi-experimental, because random assignment of participants to the three groups (non-musicians, drummers, and non-drummer musicians) was not possible: each group consisted of people with specific individual differences. In future research, however, non-musicians could be randomly assigned to groups before a period of music training, allowing our results to be tested with a fully experimental method. Moreover, we did not match the groups on other relevant variables (e.g., education, working memory capacity, and auditory abilities). Since it is well established that cognitive (e.g., working memory) and auditory (e.g., pitch and rhythm perception) abilities can be affected by musical training (e.g., Moreno et al., 2011; Slevc et al., 2016), future studies should better match participants on possible confounding variables.
In addition, other limitations of this study should be acknowledged. Firstly, we examined only a rhythmic instrument; we chose the drums based on evidence that the upper-body movements permitted by drumming are more evident and less expressively restricted than those permitted by other instruments (Petrini et al., 2010b). Additionally, playing drums can be a very effective means of therapy (Overy, 2003; Zentner and Eerola, 2010; Wan et al., 2011; Gerson et al., 2015). However, it is unknown whether our results transfer to other percussion and melodic instruments. An instrument such as the piano (Lee and Noppeney, 2011), which allows a good range and variety of upper-body movement while maintaining the melodic component of the music, would be essential in future studies to understand the effects of long-term music training on perceived emotion from melodic as well as rhythmic music. At the same time, this limitation is also one of the novelties of the present study: knowing that training with a purely rhythmic instrument can affect how we perceive emotion from music is important and encouraging for clinical applications using rhythmic instruments.
We chose a single drummer as performer because we had many factors to examine, and using one musician kept the duration of the experiment manageable, which is especially important when examining perceived emotion. However, this is a main limitation of our study, as other performers may have chosen to improvise in different ways, leading to different expressive and emotional content. Moreover, as mentioned above, no instructions concerning specific emotional intentions were given to the drummer. We cannot rule out that these choices influenced the results. Indeed, only two emotion categories (neutral and happy) were selected above chance level, and it is difficult to discern whether the neutral and happy response rates reflect a bias toward using these categories more often than others (Isaacowitz et al., 2007). Future studies could focus on the musical features where differences were found between musicians and non-musicians (e.g., musical genre, tempo, sensory modality) and include a higher number of performers.
Finally, the forced-choice emotion perception task (i.e., asking participants to select only one of seven basic emotion categories) is a limitation because it can affect the statistical analyses, for example through increased collinearity (if one response is chosen, the others are not), artificially inflating recognition rates (Cornwell and Dunlap, 1994; Frank and Stennett, 2001). Future studies could include more complex emotions such as pride, jealousy, and contentment, other scales such as the Self-Assessment Manikin (SAM, a non-verbal pictorial assessment technique that does not rely on emotional labels), and/or allow participants to choose more than one emotion category.
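The collinearity point can be illustrated directly: because exactly one category is chosen per trial, one-hot indicators of the response categories are negatively correlated by construction (two categories can never be 1 on the same trial). A small simulation on hypothetical choice data, not the study's responses:

```python
import random

random.seed(0)
categories = ["happy", "neutral", "anger"]
# Simulate 1000 forced-choice trials.
choices = [random.choice(categories) for _ in range(1000)]

# One-hot indicators: exactly one category is True per trial.
happy = [c == "happy" for c in choices]
neutral = [c == "neutral" for c in choices]

def corr(x, y):
    """Pearson correlation for two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

print(corr(happy, neutral) < 0)  # True: negative by construction
```

Since the indicator product is always zero, the covariance equals minus the product of the two selection rates, so the negative dependence holds for any pair of categories that are each chosen at least once.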
The present research contributes to the understanding of the effect of long-term musical training on emotion perception by showing that individual differences in musical training (e.g., motor and/or visual experience with a specific instrument) influence the way expressiveness and emotions from solo drumming improvisations are perceived. Non-musicians, non-drummer musicians, and drummers were affected differently by changes in some characteristics of the performance, such as musical genre, tempo, and modality, when perceiving expressiveness, valence, and emotion from drumming improvisation. Therefore, long-term musical training shapes not only several cognitive and perceptual abilities (e.g., Petrini et al., 2009a,b, 2010a; Lee and Noppeney, 2011, 2014) but also emotional processing of purely rhythmic music. This has potential implications for music therapy, clinical practice, and theories of emotion induction from music.
MDM, MG, and KP designed the experiments, wrote and reviewed the manuscript. ET performed the statistical analysis.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
We would like to thank Scott Ramsay for his help with the English revision of this manuscript.
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2018.02168/full#supplementary-material
VIDEO 1 | Example of audiovisual clip used in the present study: heavy metal, simple rhythm, slow tempo, maximum expressive interpretation, crossed arms.
VIDEO 2 | Example of audiovisual clip used in the present study: jazz, complex rhythm, fast tempo, minimal expressive interpretation, open arms.
Abeles, H. F., and Chung, J. W. (1996). “Responses to music,” in Handbook of Music Psychology, 2nd Edn, ed. D. A. Hodges (San Antonio, TX: IMR Press), 285–342.
Ambert-Dahan, E., Giraud, A. L., Sterkers, O., and Samson, S. (2015). Judgment of musical emotions after cochlear implantation in adults with progressive deafness. Front. Psychol. 6:181. doi: 10.3389/fpsyg.2015.00181
Anderson, C. A., Carnagey, N. L., and Eubanks, J. (2003). Exposure to violent media: the effect of songs with violent lyrics on aggressive thoughts and feelings. J. Pers. Soc. Psychol. 84, 960–967. doi: 10.1037/0022-3514.84.5.960
Balkwill, L.-L., and Thompson, W. F. (1999). A cross-cultural investigation of the perception of emotion in music: psychophysical and cultural cues. Music Percept. 17, 43–64. doi: 10.2307/40285811
Bates, D., Maechler, M., Bolker, B., and Walker, S. (2015). Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67, 1–48. doi: 10.18637/jss.v067.i01
Bhatara, A., Tirovolas, A. K., Duan, L. M., Levy, B., and Levitin, D. J. (2011). Perception of emotional expression in musical performance. J. Exp. Psychol. Hum. Percept. Perform. 37, 921–934. doi: 10.1037/a0021922
Broughton, M., and Davidson, J. W. (2014). Action and familiarity effects on self and other expert musicians’ Laban effort-shape analyses of expressive bodily behaviors in instrumental music performance: a case study approach. Front. Psychol. 5:1201. doi: 10.3389/fpsyg.2014.01201
Calvo-Merino, B., Glaser, D. E., Grèzes, J., Passingham, R. E., and Haggard, P. (2005). Action observation and acquired motor skills: an FMRI study with expert dancers. Cereb. Cortex 15, 1243–1249. doi: 10.1093/cercor/bhi007
Calvo-Merino, B., Grèzes, J., Glaser, D. E., Passingham, R. E., and Haggard, P. (2006). Seeing or doing? Influence of visual and motor familiarity in action observation. Curr. Biol. 16, 1905–1910. doi: 10.1016/j.cub.2006.07.065
Campbell, I. G. (1942). Basal emotional patterns expressible in music. Am. J. Psychol. 55, 1–17. doi: 10.2307/1417020
Castro, S. L., and Lima, C. F. (2014). Age and musical expertise influence emotion recognition in music. Music Percept. 32, 125–142. doi: 10.1525/mp.2014.32.2.125
Clark, D. M. (1983). On the induction of depressed mood in the laboratory: evaluation and comparison of the Velten and musical procedures. Adv. Behav. Res. Ther. 5, 27–49. doi: 10.1016/0146-6402(83)90014-0
Clynes, M. (1977). Sentics: The Touch of Emotions. New York, NY: Doubleday.
Collignon, O., Girard, S., Gosselin, F., Roy, S., Saint-Amour, D., and Lassonde, M. (2008). Audio-visual integration of emotion expression. Brain Res. 1242, 126–135. doi: 10.1016/j.brainres.2008.04.023
Cornwell, J. M., and Dunlap, W. P. (1994). On the questionable soundness of factoring ipsative data: a response to Saville and Wilson (1991). J. Occupat. Organ. Psychol. 67, 89–100. doi: 10.1111/j.2044-8325.1994.tb00553.x
Dahl, S., and Friberg, A. (2007). Visual perception of expressiveness in musicians’ body movements. Music Percept. 24, 433–454. doi: 10.1525/mp.2007.24.5.433
Dalla Bella, S., Peretz, I., Rousseau, L., and Gosselin, N. (2001). A developmental study of the affective value of tempo and mode in music. Cognition. 80, B1–B10. doi: 10.1016/S0010-0277(00)00136-0
Davidson, J. W. (1993). Visual perception of performance manner in the movements of solo musicians. Psychol. Music 21, 103–113. doi: 10.1177/030573569302100201
De Meijer, M. (1989). The contribution of general features of body movement to the attribution of emotion. J. Nonverb. Behav. 13, 247–268. doi: 10.1007/BF00990296
De Meijer, M. (1991). The attribution of aggression and grief to body movements: the effects of sex-stereotypes. Eur. J. Soc. Psychol. 21, 249–259. doi: 10.1002/ejsp.2420210307
Escoffier, N., Zhong, J., Schirmer, A., and Qiu, A. (2013). Emotional expressions in voice and music: same code, same effect? Hum. Brain Mapp. 34, 1796–1810. doi: 10.1002/hbm.22029
Fox, J. (2003). Effect displays in R for generalised linear models. J. Stat. Softw. 8, 1–27. doi: 10.18637/jss.v008.i15
Frank, M. G., and Stennett, J. (2001). The forced-choice paradigm and the perception of facial expressions of emotion. J. Pers. Soc. Psychol. 80, 75–85. doi: 10.1037/0022-3514.80.1.75
Fredrickson, W. E. (2000). Perception of tension in music: musicians versus nonmusicians. J. Music Ther. 37, 40–50. doi: 10.1093/jmt/37.1.40
Gabrielsson, A. (2001). “Emotions in strong experiences with music,” in Music and Emotion: Theory and Research, eds P. N. Juslin and J. A. Sloboda (New York, NY: Oxford University Press), 431–449.
Gabrielsson, A., and Juslin, P. N. (1996). Emotional expression in music performance: between the performer’s intention and the listener’s experience. Psychol. Music 24, 68–91. doi: 10.1177/0305735696241007
Gabrielsson, A., and Juslin, P. N. (2003). “Emotional expression in music,” in Handbook of Affective Sciences, eds R. J. Davidson, K. R. Scherer, and H. H. Goldsmith (New York, NY: Oxford University Press), 503–534.
Gabrielsson, A., and Lindström, E. (1995). Emotional expression in synthesizer and sentograph performance. Psychomusicology. 14, 94–116. doi: 10.1037/h0094089
Gagnon, L., and Peretz, I. (2003). Mode and tempo relative contributions to ‘happy–sad’ judgments in equitone melodies. Cognit. Emot. 17, 25–40. doi: 10.1080/02699930302279
Gerardi, G. M., and Gerken, L. (1995). The development of affective response to modality and melodic contour. Music Percept. 12, 279–290. doi: 10.2307/40286184
Gerson, S. A., Bekkering, H., and Hunnius, S. (2015). Short-term motor training, but not observational training, alters neurocognitive mechanisms of action processing in infancy. J. Cogn. Neurosci. 27, 1207–1214. doi: 10.1162/jocn_a_00774
Gosselin, N., Paquette, S., and Peretz, I. (2015). Sensitivity to musical emotions in congenital amusia. Cortex 71, 171–182. doi: 10.1016/j.cortex.2015.06.022
Hevner, K. (1935). The affective character of the major and minor modes in music. Am. J. Psychol. 47, 103–118. doi: 10.2307/1416710
Hevner, K. (1937). The affective value of pitch and tempo in music. Am. J. Psychol. 49, 621–630. doi: 10.2307/1416385
Isaacowitz, D. M., Löckenhoff, C. E., Lane, R. D., Wright, R., Sechrest, L., Riedel, R., et al. (2007). Age differences in recognition of emotion in lexical stimuli and facial expressions. Psychol. Aging 22, 147–159. doi: 10.1037/0882-7974.22.1.147
Juslin, P. N. (1997a). Can results from studies of perceived expression in musical performances be generalized across response formats? Psychomusicology 16, 77–101. doi: 10.1037/h0094065
Juslin, P. N. (1997b). Emotional communication in music performance: a functionalist perspective and some data. Music Percept. 14, 383–418. doi: 10.2307/40285731
Juslin, P. N. (2001). “Communicating emotion in music performance: a review and a theoretical framework,” in Music and Emotion: Theory and Research, eds P. N. Juslin and J. A. Sloboda (New York, NY: Oxford University Press), 309–337.
Juslin, P. N. (2009). “Music (emotional effects),” in Oxford Companion to Emotion and the Affective Sciences, eds D. Sander and K. R. Scherer (New York, NY: Oxford University Press), 269–271.
Juslin, P. N. (2013). From everyday emotions to aesthetic emotions: towards a unified theory of musical emotions. Phys. Life Rev. 10, 235–266. doi: 10.1016/j.plrev.2013.05.008
Juslin, P. N., Harmat, L., and Eerola, T. (2013). What makes music emotionally significant? exploring the underlying mechanisms. Psychol. Music 42, 599–623. doi: 10.1177/0305735613484548
Juslin, P. N., and Laukka, P. (2003). Communication of emotions in vocal expression and music performance: different channels, same code? Psychol. Bull. 129, 770–814. doi: 10.1037/0033-2909.129.5.770
Juslin, P. N., and Laukka, P. (2004). Expression, perception, and induction of musical emotions: a review and a questionnaire study of everyday listening. J. New Music Res. 33, 217–238. doi: 10.1080/0929821042000317813
Juslin, P. N., Liljeström, S., Västfjäll, D., Barradas, G., and Silva, A. (2008). An experience sampling study of emotional reactions to music: listener, music, and situation. Emotion 8, 668–683. doi: 10.1037/a0013505
Juslin, P. N., Liljeström, S., Västfjäll, D., and Lundqvist, L.-O. (2010). “How does music evoke emotions? Exploring the underlying mechanisms,” in Handbook of Music and Emotion: Theory, Research, Applications, eds P. N. Juslin and J. A. Sloboda (Oxford: Oxford University Press), 605–642.
Juslin, P. N., and Lindström, E. (2003). Musical expression of emotions: modeling composed and performed features. Paper presented at the 5th Conference of the European Society for the Cognitive Science of Music, Hannover: ESCOM.
Juslin, P. N., and Madison, G. (1999). The role of timing patterns in recognition of emotional expression from musical performance. Music Percept. 17, 197–221. doi: 10.2307/40285891
Juslin, P. N., and Sloboda, J. A. (eds.). (2010). Handbook of Music and Emotion: Theory, Research, Applications. Oxford: Oxford University Press.
Juslin, P. N., and Västfjäll, D. (2008). Emotional responses to music: the need to consider underlying mechanisms. Behav. Brain Sci. 31, 559–575. doi: 10.1017/S0140525X08005293
Kim, J., Wigram, T., and Gold, C. (2009). Emotional, motivational and interpersonal responsiveness of children with autism in improvisational music therapy. Autism 13, 389–409. doi: 10.1177/1362361309105660
Koelsch, S. (2009). A neuroscientific perspective on music therapy. Ann. N. Y. Acad. Sci. 1169, 374–384. doi: 10.1111/j.1749-6632.2009.04592.x
Koelsch, S., Fritz, T., von Cramon, D. Y., Müller, K., and Friederici, A. D. (2006). Investigating emotion with music: an fMRI study. Hum. Brain Mapp. 27, 239–250. doi: 10.1002/hbm.20180
Krumhansl, C. L. (1997). An exploratory study of musical emotions and psychophysiology. Can. J. Exp. Psychol. 51, 336–353. doi: 10.1037/1196-1961.51.4.336
Laukka, P. (2007). Uses of music and psychological well-being among the elderly. J. Happiness Stud. 8, 215–241. doi: 10.1007/s10902-006-9024-3
Laukka, P., and Gabrielsson, A. (2000). Emotional expression in drumming performance. Psychol. Music 28, 181–189. doi: 10.1177/0305735600282007
Lee, H., and Noppeney, U. (2011). Long-term music training tunes how the brain temporally binds signals from multiple senses. Proc. Natl. Acad. Sci. U.S.A. 108, 1441–1450. doi: 10.1073/pnas.1115267108
Lee, H., and Noppeney, U. (2014). Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music. Front. Psychol. 5:868. doi: 10.3389/fpsyg.2014.00868
Lima, C. F., and Castro, S. L. (2011a). Emotion recognition in music changes across the adult life span. Cognit. Emot. 25, 585–598. doi: 10.1080/02699931.2010.502449
Lima, C. F., and Castro, S. L. (2011b). Speaking to the trained ear: musical expertise enhances the recognition of emotions in speech prosody. Emotion 11, 1021–1031. doi: 10.1037/a0024521
Lu, Y., Paraskevopoulos, E., Kuchenbuch, A., Herholz, S. C., and Pantev, C. (2014). Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG). PLoS One 9:e90686. doi: 10.1371/journal.pone.0090686
Molnar-Szakacs, I., Green Assuied, V., and Overy, K. (2012). “Shared affective motion experience (SAME) and creative, interactive music therapy,” in Musical Imaginations: Multidisciplinary Perspectives on Creativity, Performance and Perception, eds D. Hargreaves, D. Miell, and R. MacDonald (Oxford: Oxford University Press), 313–331. doi: 10.1093/acprof:oso/9780199568086.003.0020
Molnar-Szakacs, I., and Overy, K. (2006). Music and mirror neurons: from motion to ‘e’motion. Soc. Cognit. Affect. Neurosci. 1, 235–241. doi: 10.1093/scan/nsl029
Moreno, S., Bialystok, E., Barac, R., Schellenberg, E. G., Cepeda, N. J., and Chau, T. (2011). Short-term music training enhances verbal intelligence and executive function. Psychol. Sci. 22, 1425–1433. doi: 10.1177/0956797611416999
Orr, M. G., and Ohlsson, S. (2001). The relationship between musical complexity and liking in jazz and bluegrass. Psychol. Music 29, 108–127. doi: 10.1177/0305735601292002
Overy, K. (2003). Dyslexia and music: from timing deficits to musical intervention. Ann. N. Y. Acad. Sci. 999, 497–505. doi: 10.1196/annals.1284.060
Overy, K., and Molnar-Szakacs, I. (2009). Being together in time: musical experience and the mirror neuron system. Music Percept. 26, 489–504. doi: 10.1525/mp.2009.26.5.489
Paquette, S., Takerkart, S., Saget, S., Peretz, I., and Belin, P. (2018). Cross-classification of musical and vocal emotions in the auditory cortex. Ann. N. Y. Acad. Sci. 1423, 329–337. doi: 10.1111/nyas.13666
Peretz, I., Gagnon, L., and Bouchard, B. (1998). Music and emotion: perceptual determinants, immediacy, and isolation after brain damage. Cognition 68, 111–141. doi: 10.1016/S0010-0277(98)00043-2
Peretz, I., Vuvan, D., Lagrois, M.-E., and Armony, J. L. (2015). Neural overlap in processing music and speech. Philos. Trans. R. Soc. Lond. B Biol. Sci. 370:20140090. doi: 10.1098/rstb.2014.0090
Petrini, K., Dahl, S., Rocchesso, D., Waadeland, C. H., Avanzini, F., Puce, A., et al. (2009a). Multisensory integration of drumming actions: musical expertise affects perceived audiovisual asynchrony. Exp. Brain Res. 198, 339–352. doi: 10.1007/s00221-009-1817-2
Petrini, K., Russell, M., and Pollick, F. (2009b). When knowing can replace seeing in audiovisual integration of actions. Cognition 110, 432–439. doi: 10.1016/j.cognition.2008.11.015
Petrini, K., Holt, S. P., and Pollick, F. (2010a). Expertise with multisensory events eliminates the effect of biological motion rotation on audiovisual synchrony perception. J. Vis. 10, 1–14. doi: 10.1167/10.5.2
Petrini, K., McAleer, P., and Pollick, F. (2010b). Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence. Brain Res. 1323, 139–148. doi: 10.1016/j.brainres.2010.02.012
Petrini, K., Pollick, F. E., Dahl, S., McAleer, P., McKay, L., Rocchesso, D., et al. (2011). Action expertise reduces brain activity for audiovisual matching actions: an fMRI study with expert drummers. Neuroimage 56, 1480–1492. doi: 10.1016/j.neuroimage.2011.03.009
Pinheiro, A. P., Vasconcelos, M., Dias, M., Arrais, N., and Gonçalves, Ó. F. (2015). The music of language: an ERP investigation of the effects of musical training on emotional prosody processing. Brain Lang. 140, 24–34. doi: 10.1016/j.bandl.2014.10.009
Pinheiro, J. C., and Bates, D. M. (2000). Mixed-Effects Models in S and S-PLUS. New York, NY: Springer. doi: 10.1007/978-1-4419-0318-1
Piwek, L., Pollick, F., and Petrini, K. (2015). Audiovisual integration of emotional signals from others’ social interactions. Front. Psychol. 6:611. doi: 10.3389/fpsyg.2015.00611
Rigg, M. (1964). The mood effects of music: a comparison of data from four investigators. J. Psychol. 58, 427–438. doi: 10.1080/00223980.1964.9916765
Scherer, K. R. (2003). “Why music does not produce basic emotions: a plea for a new approach to measuring emotional effects of music,” in Proceedings of the Stockholm Music Acoustics Conference, ed. R. Bresin (Stockholm: Royal Institute of Technology), 25–28.
Selfhout, M. H., Delsing, M. J., ter Bogt, T. F., and Meeus, W. H. (2008). Heavy metal and hip-hop style preferences and externalizing problem behavior: a two-wave longitudinal study. Youth Soc. 39, 435–452. doi: 10.1177/0044118X07308069
Sharman, L., and Dingle, G. A. (2015). Extreme metal music and anger processing. Front. Hum. Neurosci. 9:272. doi: 10.3389/fnhum.2015.00272
Slevc, L. R., Davey, N. S., Buschkuehl, M., and Jaeggi, S. M. (2016). Tuning the mind: exploring the connections between musical ability and executive functions. Cognition 152, 199–211. doi: 10.1016/j.cognition.2016.03.017
Sloboda, J. A., and Juslin, P. N. (2001). “Psychological perspectives on music and emotion,” in Music and Emotion: Theory and Research, eds P. N. Juslin and J. A. Sloboda (New York, NY: Oxford University Press), 71–104.
Strait, D. L., Kraus, N., Skoe, E., and Ashley, R. (2009). Musical experience and neural efficiency: effects of training on subcortical processing of vocal expressions of emotion. Eur. J. Neurosci. 29, 661–668. doi: 10.1111/j.1460-9568.2009.06617.x
Thompson, W. F., Russo, F. A., and Quinto, L. (2008). Audio-visual integration of emotional cues in song. Cognit. Emot. 22, 1457–1470. doi: 10.1080/02699930701813974
Thompson, W. F., Schellenberg, E. G., and Husain, G. (2004). Decoding speech prosody: do music lessons help? Emotion 4, 46–64. doi: 10.1037/1528-3542.4.1.46
Took, K. J., and Weiss, D. S. (1994). The relationship between heavy metal and rap music and adolescent turmoil: real or abstract? Adolescence 29, 613–623.
Vines, B. W., Krumhansl, C. L., Wanderley, M. M., and Levitin, D. J. (2006). Cross-modal interactions in the perception of musical performance. Cognition 101, 80–113. doi: 10.1016/j.cognition.2005.09.003
Wan, C. Y., Bazen, L., Baars, R., Libenson, A., Zipse, L., Zuk, J., et al. (2011). Auditory-motor mapping training as an intervention to facilitate speech output in non-verbal children with autism: a proof of concept study. PLoS One 6:e25505. doi: 10.1371/journal.pone.0025505
Wapnick, J., Ryan, C., Lacaille, N., and Darrow, A. A. (2004). Effects of selected variables on musicians’ ratings of high-level piano performances. Int. J. Music Educ. 22, 7–20. doi: 10.1177/0255761404042371
Zentner, M., and Eerola, T. (2010). Rhythmic engagement with music in infancy. Proc. Natl. Acad. Sci. U.S.A. 107, 5768–5773. doi: 10.1073/pnas.1000121107
Zentner, M., Grandjean, D., and Scherer, K. R. (2008). Emotions evoked by the sound of music: characterization, classification, and measurement. Emotion 8, 494–521. doi: 10.1037/1528-3542.8.4.494
Zentner, M., Meylan, S., and Scherer, K. (2000). Exploring musical emotions across five genres of music. Paper presented at the 6th International Conference on Music Perception and Cognition (ICMPC), Keele, United Kingdom.
Keywords: music training, expressiveness, emotion perception, valence, drumming
Citation: Di Mauro M, Toffalini E, Grassi M and Petrini K (2018) Effect of Long-Term Music Training on Emotion Perception From Drumming Improvisation. Front. Psychol. 9:2168. doi: 10.3389/fpsyg.2018.02168
Received: 02 June 2018; Accepted: 22 October 2018;
Published: 09 November 2018.
Edited by: Petri Laukka, Stockholm University, Sweden
Reviewed by: Cesar F. Lima, Instituto Universitário de Lisboa (ISCTE), Portugal
Sébastien Paquette, Harvard Medical School, United States
Copyright © 2018 Di Mauro, Toffalini, Grassi and Petrini. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Martina Di Mauro, email@example.com Massimo Grassi, firstname.lastname@example.org