Expressed music mood classification compared with valence and arousal ratings
14 pages
English



Information

Published on 01 January 2012
Language: English

Extract

den Brinker et al. EURASIP Journal on Audio, Speech, and Music Processing 2012, 2012:24. http://asmp.eurasipjournals.com/content/2012/1/24
RESEARCH (Open Access)

Expressed music mood classification compared with valence and arousal ratings

Bert den Brinker 1*, Ralph van Dinther 1 and Janto Skowronek 2

Abstract

Mood is an important aspect of music and knowledge of mood can be used as a basic feature in music recommender and retrieval systems. A listening experiment was carried out establishing ratings for various moods and a number of attributes, e.g., valence and arousal. The analysis of these data covers the issues of the number of basic dimensions in music mood, their relation to valence and arousal, the distribution of moods in the valence–arousal plane, distinctiveness of the labels, and appropriate (number of) labels for full coverage of the plane. It is also shown that subject-averaged valence and arousal ratings can be predicted from music features by a linear model.

Keywords: Music, Mood, Valence, Arousal
Introduction

Music recommendation and retrieval is of interest due to the increasing amount of audio data available to the average consumer. Experimental data on similarity in mood of different songs can be instrumental in defining musical distance measures [1,2] and would enable the definition of prototypical songs (or song features) for various moods. These latter can then be used as the so-called mood presets in music recommendation systems. With this in mind, we defined an experiment to collect the relevant data. In view of the mentioned applications, we are interested in the perceived song mood (not the induced mood), annotation per song (not per part of a song), and annotation by average users (as opposed to expert annotators). Furthermore, the test should be executed with a sufficient number of participants as well as a good cross-section of music with clear moods covering the full range and, obviously, a proper set of mood labels (easy-to-use and discriminative). The data collected in earlier studies on music mood [3-12] only partially meet these requirements.

Part of the knowledge (mood labels, song selection, interface) used to define the experiment stems from earlier experience gained in this area [13-15]. Valence and arousal ratings were included since mood is assumed to be mainly governed by these two dimensions [1,2,16,17].

This article describes the experiment and the analysis of the collected data. The analysis comprises the fundamental mood dimensions [6], comparison of these dimensions to valence and arousal, coverage of the valence and arousal plane, comparison of mood labels in the valence–arousal plane and ratings for affect words [16,17], and the predictability of the valence and arousal ratings from a set of music features. The latter is of interest since predictability would imply the possibility of an automatic valence and arousal rating which presumably could be used as a basis for mood annotation. To study the predictability, we use music features determined from the audio signal. These include spectro-temporal features derived from Mel-frequency cepstral coefficients (MFCCs) as well as features based on the statistics of tonality, rhythm, and percussiveness [15].

Before describing the experiment and the analysis, we would like to comment on our terminology. In music research, it is common to categorize music according to mood. In our experiment, we also used the term mood. In emotion research, there is a clear tendency to distinguish between emotion and mood, where the former is associated with a shorter timescale than the latter. Such a distinction is virtually absent in music research [2]. In view of the fact that we are looking for full song annotation and a full song has a somewhat larger time stretch, the term mood is probably the better option. We will therefore use the term mood for the music categorization throughout this article. Only in Section "Comparison with affect word scaling",

*Correspondence: bert.den.brinker@philips.com
1 Philips Research, High Tech Campus 36, NL-5656 AE Eindhoven, The Netherlands
Full list of author information is available at the end of the article

© 2012 den Brinker et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
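To make the linear-prediction idea concrete, the following is a minimal sketch of fitting a linear model that maps per-song audio features to subject-averaged valence and arousal ratings via ordinary least squares. The data here are synthetic placeholders, and the feature names are assumptions for illustration only; they are not the authors' actual feature set or fitting procedure.

```python
import numpy as np

# Synthetic stand-in data: n songs, d audio features per song
# (think MFCC statistics plus tonality/rhythm/percussiveness
# summaries -- placeholders, not the paper's actual features).
rng = np.random.default_rng(42)
n_songs, n_features = 120, 8
X = rng.normal(size=(n_songs, n_features))

# Synthetic "subject-averaged" targets: column 0 = valence,
# column 1 = arousal, generated from a hidden linear relation.
W_true = rng.normal(size=(n_features, 2))
Y = X @ W_true + 0.05 * rng.normal(size=(n_songs, 2))

# Linear model with an intercept term, fitted by least squares.
X_design = np.hstack([X, np.ones((n_songs, 1))])
W_hat, _, _, _ = np.linalg.lstsq(X_design, Y, rcond=None)

# Predicted valence/arousal per song and the fit error.
Y_pred = X_design @ W_hat
rmse = np.sqrt(np.mean((Y - Y_pred) ** 2))
print(f"RMSE per rating: {rmse:.3f}")
```

In practice such a model would be evaluated on held-out songs (e.g., by cross-validation) rather than on the fitting data, since in-sample error understates the true prediction error.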