Emotions recognition system for acoustic music data based on human perception features

Tatiana Endrjukaite, Yasushi Kiyoki

Research output: Conference contribution

1 citation (Scopus)

Abstract

Music plays an important role in human life. It is not merely a set of sounds: music evokes emotions that listeners perceive subjectively. The growing amount of audio data creates a need for content-based searching. Traditionally, tune information has been retrieved from reference metadata such as the title of a tune, the name of the artist, or the genre. When users want to find music pieces in a specific mood, such standard reference information is not sufficiently effective. New methods and approaches are needed to realize emotion-based search and tune-content analysis. This paper proposes a new music-tune analysis approach that realizes automatic emotion recognition by means of essential musical features. The novelty of this research is that it analyzes tunes with new musical features based on human perception of music. The most important distinction of the proposed approach is that it covers a broader range of tune genres, which is significant for a music emotion recognition system. Describing emotions on a continuous plane instead of in discrete categories also supports a richer set of adjectives for emotion description, which is a further advantage.
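The continuous-plane idea mentioned above can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a two-dimensional valence-arousal plane, and the adjective coordinates below are invented for demonstration only.

```python
import math

# Hypothetical adjective positions on a valence-arousal plane.
# Coordinates are illustrative assumptions, not values from the paper.
ADJECTIVES = {
    "happy": (0.8, 0.6),
    "calm":  (0.6, -0.6),
    "sad":   (-0.7, -0.5),
    "angry": (-0.6, 0.8),
}

def nearest_adjectives(valence, arousal, k=2):
    """Return the k adjectives closest to a point on the continuous plane.

    Unlike a categorical classifier, any point (valence, arousal) is valid,
    and it can be described by several nearby adjectives at once.
    """
    ranked = sorted(
        ADJECTIVES,
        key=lambda name: math.hypot(ADJECTIVES[name][0] - valence,
                                    ADJECTIVES[name][1] - arousal),
    )
    return ranked[:k]
```

A point between two adjectives, e.g. `nearest_adjectives(0.7, 0.0)`, is simply described by both, which is why the continuous representation supports more descriptive adjectives than fixed categories.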

Original language: English
Title of host publication: Information Modelling and Knowledge Bases XXVIII
Publisher: IOS Press
Pages: 283-302
Number of pages: 20
Volume: 292
ISBN (Electronic): 9781614997191
DOI
Publication status: Published - 2017

Publication series

Name: Frontiers in Artificial Intelligence and Applications
Volume: 292
ISSN (Print): 0922-6389

ASJC Scopus subject areas

  • Artificial Intelligence

