Original Research Article
A computational model of immanent accent salience in tonal music
- 1Institut Pasteur, France
- 2Zentrum für Systematische Musikwissenschaft, Universität Graz, Austria
- 3Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, Sweden
Accents are local musical events that attract the attention of the listener; they can be either immanent (evident from the score) or performed (added by the performer). Immanent accents involve temporal grouping (phrasing), meter, melody, and harmony; performed accents involve changes in timing, dynamics, articulation, and timbre. Grouping, metrical, and melodic accents have previously been investigated in the context of expressive music performance. We present a novel computational model of immanent accent salience in tonal music that automatically predicts the positions and saliences of metrical, melodic, and harmonic accents. The model extends previous research by improving on preliminary formulations of metrical and melodic accents and by introducing a new model for harmonic accents that combines harmonic dissonance and harmonic surprise. In an analysis-by-synthesis approach, model predictions were compared with data from two experiments involving 249 and 638 sonorities, rated by 16 musicians and 5 experts in music theory, respectively. Average pairwise correlations between raters were lower for metrical (0.27) and melodic accents (0.37) than for harmonic accents (0.49). In both experiments, when all raters were combined into a single measure expressing their consensus, correlations between ratings and model predictions ranged from 0.43 to 0.62. When the different accent categories were combined, correlations were higher than for the separate categories (r = 0.66). This suggests that raters may use strategies different from the individual metrical, melodic, or harmonic accent models when marking musical events.
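The evaluation described above rests on two standard computations: inter-rater agreement as the average pairwise Pearson correlation, and the correlation between the raters' consensus (mean rating per event) and the model's predicted accent saliences. A minimal illustrative sketch of both in Python with NumPy follows; the function names and the toy data are hypothetical, not the authors' code or data:

```python
import numpy as np

def avg_pairwise_correlation(ratings):
    """Average Pearson correlation over all rater pairs.
    ratings: array of shape (n_raters, n_events). Hypothetical helper."""
    r = np.corrcoef(ratings)            # n_raters x n_raters correlation matrix
    iu = np.triu_indices_from(r, k=1)   # upper triangle excludes self-correlations
    return r[iu].mean()

def consensus_correlation(ratings, model):
    """Correlate the raters' consensus (mean rating per event)
    with the model's predicted accent saliences. Hypothetical helper."""
    consensus = ratings.mean(axis=0)
    return np.corrcoef(consensus, model)[0, 1]

# Toy example (invented data): 3 raters, 6 sonorities
rng = np.random.default_rng(0)
ratings = rng.random((3, 6))
model_predictions = rng.random(6)
print(avg_pairwise_correlation(ratings))
print(consensus_correlation(ratings, model_predictions))
```

In the paper's terms, the first quantity corresponds to the reported inter-rater figures (0.27–0.49) and the second to the rating–prediction correlations (0.43–0.62), though the exact aggregation procedure used by the authors may differ from this sketch.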
Keywords: immanent accents, salience, music expression, music analysis, computational modeling
Received: 16 Jul 2018;
Accepted: 01 Feb 2019.
Edited by: Aaron Williamon, Royal College of Music, United Kingdom
Reviewed by: Sergio I. Giraldo, Universitat Pompeu Fabra, Spain
Miguel Molina-Solana, Imperial College London, United Kingdom
Copyright: © 2019 Bisesi, Friberg and Parncutt. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Dr. Erica Bisesi, Institut Pasteur, Paris, 75015, Île-de-France, France, firstname.lastname@example.org