Emotion identification system for musical tunes based on characteristics of acoustic signal data

Tatiana Endrjukaite, Yasushi Kiyoki

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We design and implement a music-tune analysis system to realize automatic emotion identification and prediction based on acoustic signal data. To compute the physical elements of music pieces, we define three significant tune parameters: repeated parts (repetitions) inside a tune, the thumbnail of a music piece, and the homogeneity pattern of a tune. These parameters are significant because they relate to how people perceive music pieces. By means of these three parameters we can express the essential emotional features of each piece. Our system consists of a music-tune features database and a computational mechanism for comparing different tunes. Based on Hevner's groups of emotion adjectives, we created a new way of presenting emotion on a plane with two axes: activity and happiness. This makes it possible to determine the emotions perceived when listening to a tune and to calculate adjacent emotions on the plane. Finally, we performed a set of experiments on Western classical and popular music pieces, which showed that our proposed approach reached a 72% precision ratio and exhibited a positive trend in the system's efficiency as the database size increases.
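The two-axis emotion plane described in the abstract can be sketched as follows. The group coordinates, names, and the `nearest_emotions` helper below are illustrative assumptions for placing Hevner's eight adjective groups on a (happiness, activity) plane, not the authors' published values or implementation.

```python
import math

# Hypothetical coordinates for Hevner's eight adjective groups on a
# two-axis emotion plane (happiness, activity), each in [-1, 1].
# These positions are illustrative assumptions only.
HEVNER_GROUPS = {
    "dignified/solemn":  (-0.3, -0.8),
    "sad/doleful":       (-0.8, -0.4),
    "dreamy/tender":     ( 0.1, -0.6),
    "serene/tranquil":   ( 0.5, -0.5),
    "graceful/playful":  ( 0.7,  0.2),
    "happy/bright":      ( 0.9,  0.5),
    "exciting/agitated": ( 0.2,  0.9),
    "vigorous/majestic": (-0.1,  0.7),
}

def nearest_emotions(happiness, activity, k=3):
    """Return the k emotion groups closest to a point on the plane,
    ordered by Euclidean distance (nearest first)."""
    def dist(item):
        (h, a) = item[1]
        return math.hypot(h - happiness, a - activity)
    return [name for name, _ in sorted(HEVNER_GROUPS.items(), key=dist)][:k]

# A tune mapped to (0.8, 0.4) lands nearest "happy/bright";
# the next entries in the ranking are its adjacent emotions.
print(nearest_emotions(0.8, 0.4))
```

With these assumed coordinates, a tune's perceived emotion is the nearest group, and "adjacent emotions on the plane" fall out of the same distance ranking.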

Original language: English
Title of host publication: Frontiers in Artificial Intelligence and Applications
Publisher: IOS Press
Pages: 88-107
Number of pages: 20
Volume: 272
ISBN (Print): 9781614994718
DOI: 10.3233/978-1-61499-472-5-88
Publication status: Published - 2014
Event: 24th International Conference on Information Modelling and Knowledge Bases, EJC 2014 - Kiel, Germany
Duration: 2014 Jun 3 - 2014 Jun 6

Publication series

Name: Frontiers in Artificial Intelligence and Applications
Volume: 272
ISSN (Print): 0922-6389

Other

Other: 24th International Conference on Information Modelling and Knowledge Bases, EJC 2014
Country: Germany
City: Kiel
Period: 14/6/3 - 14/6/6

Keywords

  • emotions
  • music
  • repetitions
  • tune's internal homogeneity
  • tune's thumbnail

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Endrjukaite, T., & Kiyoki, Y. (2014). Emotion identification system for musical tunes based on characteristics of acoustic signal data. In Frontiers in Artificial Intelligence and Applications (Vol. 272, pp. 88-107). (Frontiers in Artificial Intelligence and Applications; Vol. 272). IOS Press. https://doi.org/10.3233/978-1-61499-472-5-88

@inproceedings{55bb7acf76a442389b4487d7a124c1e2,
title = "Emotion identification system for musical tunes based on characteristics of acoustic signal data",
abstract = "We design and implement a music-tune analysis system to realize automatic emotion identification and prediction based on acoustic signal data. To compute the physical elements of music pieces, we define three significant tune parameters: repeated parts (repetitions) inside a tune, the thumbnail of a music piece, and the homogeneity pattern of a tune. These parameters are significant because they relate to how people perceive music pieces. By means of these three parameters we can express the essential emotional features of each piece. Our system consists of a music-tune features database and a computational mechanism for comparing different tunes. Based on Hevner's groups of emotion adjectives, we created a new way of presenting emotion on a plane with two axes: activity and happiness. This makes it possible to determine the emotions perceived when listening to a tune and to calculate adjacent emotions on the plane. Finally, we performed a set of experiments on Western classical and popular music pieces, which showed that our proposed approach reached a 72{\%} precision ratio and exhibited a positive trend in the system's efficiency as the database size increases.",
keywords = "emotions, music, repetitions, tune's internal homogeneity, tune's thumbnail",
author = "Tatiana Endrjukaite and Yasushi Kiyoki",
year = "2014",
doi = "10.3233/978-1-61499-472-5-88",
language = "English",
isbn = "9781614994718",
volume = "272",
series = "Frontiers in Artificial Intelligence and Applications",
publisher = "IOS Press",
pages = "88--107",
booktitle = "Frontiers in Artificial Intelligence and Applications",
}

TY - GEN

T1 - Emotion identification system for musical tunes based on characteristics of acoustic signal data

AU - Endrjukaite, Tatiana

AU - Kiyoki, Yasushi

PY - 2014

Y1 - 2014

N2 - We design and implement a music-tune analysis system to realize automatic emotion identification and prediction based on acoustic signal data. To compute the physical elements of music pieces, we define three significant tune parameters: repeated parts (repetitions) inside a tune, the thumbnail of a music piece, and the homogeneity pattern of a tune. These parameters are significant because they relate to how people perceive music pieces. By means of these three parameters we can express the essential emotional features of each piece. Our system consists of a music-tune features database and a computational mechanism for comparing different tunes. Based on Hevner's groups of emotion adjectives, we created a new way of presenting emotion on a plane with two axes: activity and happiness. This makes it possible to determine the emotions perceived when listening to a tune and to calculate adjacent emotions on the plane. Finally, we performed a set of experiments on Western classical and popular music pieces, which showed that our proposed approach reached a 72% precision ratio and exhibited a positive trend in the system's efficiency as the database size increases.

AB - We design and implement a music-tune analysis system to realize automatic emotion identification and prediction based on acoustic signal data. To compute the physical elements of music pieces, we define three significant tune parameters: repeated parts (repetitions) inside a tune, the thumbnail of a music piece, and the homogeneity pattern of a tune. These parameters are significant because they relate to how people perceive music pieces. By means of these three parameters we can express the essential emotional features of each piece. Our system consists of a music-tune features database and a computational mechanism for comparing different tunes. Based on Hevner's groups of emotion adjectives, we created a new way of presenting emotion on a plane with two axes: activity and happiness. This makes it possible to determine the emotions perceived when listening to a tune and to calculate adjacent emotions on the plane. Finally, we performed a set of experiments on Western classical and popular music pieces, which showed that our proposed approach reached a 72% precision ratio and exhibited a positive trend in the system's efficiency as the database size increases.

KW - emotions

KW - music

KW - repetitions

KW - tune's internal homogeneity

KW - tune's thumbnail

UR - http://www.scopus.com/inward/record.url?scp=84922572014&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84922572014&partnerID=8YFLogxK

U2 - 10.3233/978-1-61499-472-5-88

DO - 10.3233/978-1-61499-472-5-88

M3 - Conference contribution

AN - SCOPUS:84922572014

SN - 9781614994718

VL - 272

T3 - Frontiers in Artificial Intelligence and Applications

SP - 88

EP - 107

BT - Frontiers in Artificial Intelligence and Applications

PB - IOS Press

ER -