Music compositional intelligence with an affective flavor

Roberto Legaspi, Yuya Hashimoto, Koichi Moriyama, Satoshi Kurihara, Masayuki Numao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

26 Citations (Scopus)

Abstract

The consideration of human feelings in automated music generation by intelligent music systems, albeit a compelling theme, has received very little attention. This work aims to computationally specify a system's music compositional intelligence that is tightly coupled with the listener's affective perceptions. First, the system induces a model that describes the relationship between feelings and musical structures. The model is learned by applying the inductive logic programming paradigm of FOIL, coupled with the Diverse Density weighting metric, over a dataset constructed from musical score fragments that were hand-labeled by the listener according to a semantic differential scale that uses bipolar affective descriptor pairs. A genetic algorithm, whose fitness function is based on the acquired model and follows basic music theory, is then used to generate variants of the original musical structures. Lastly, the system creates chordal and non-chordal tones out of the GA-obtained variants. Empirical results show that the system is, on average, 80.6% accurate in classifying the affective labels of the musical structures and that it is able to automatically generate musical pieces that stimulate four kinds of impressions, namely, favorable-unfavorable, bright-dark, happy-sad, and heartrending-not heartrending.
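The pipeline the abstract describes — learn weighted affect-to-structure rules, then run a GA whose fitness rewards conformance to those rules and to basic music theory — can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the authors' implementation: the chromosome encoding (a list of MIDI pitches), the hand-written rules, and their weights (standing in for FOIL-induced rules scored by Diverse Density) are all assumptions made for the example.

# Hypothetical sketch of the GA variation step described in the abstract.
# Chromosomes are short pitch sequences; fitness combines (a) weighted rule
# satisfaction, with weights standing in for Diverse-Density scores on
# FOIL-style induced rules for a target affect, and (b) a crude music-theory
# penalty on large melodic leaps. None of these rules come from the paper.
import random

# (predicate over a pitch sequence, weight) pairs, e.g. for a "bright" target.
RULES = [
    (lambda m: max(m) - min(m) >= 7, 0.9),           # wide melodic range
    (lambda m: sum(b > a for a, b in zip(m, m[1:]))  # mostly ascending motion
               > len(m) / 2, 0.7),
    (lambda m: m[-1] % 12 in (0, 4, 7), 0.5),        # ends on a C-major chord tone
]

def fitness(melody):
    """Weighted rule satisfaction minus a penalty for leaps over an octave."""
    rule_score = sum(w for rule, w in RULES if rule(melody))
    leap_penalty = sum(0.1 for a, b in zip(melody, melody[1:]) if abs(b - a) > 12)
    return rule_score - leap_penalty

def mutate(melody, rate=0.2):
    """Nudge random pitches by up to a whole tone."""
    return [p + random.choice((-2, -1, 1, 2)) if random.random() < rate else p
            for p in melody]

def crossover(a, b):
    """One-point crossover of two equal-length melodies."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(seed, generations=200, pop_size=30):
    """Generate a variant of a seed melody, as in the paper's variation step."""
    pop = [mutate(seed, rate=0.5) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

if __name__ == "__main__":
    seed = [60, 62, 64, 65, 67, 65, 64, 62]  # MIDI pitches, a C-major fragment
    print(evolve(seed))

In the system itself, the rules and weights would come from the FOIL + Diverse Density learning step over the listener-labeled score fragments; the sketch only shows how such a learned model can be folded into a GA fitness function alongside simple music-theory constraints.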

Original language: English
Title of host publication: IUI 2007
Subtitle of host publication: 2007 International Conference on Intelligent User Interfaces
Pages: 216-224
Number of pages: 9
ISBNs: 1595934812, 9781595934819
DOIs: https://doi.org/10.1145/1216295.1216335
Publication status: Published - 2007 Sep 28
Externally published: Yes
Event: 12th International Conference on Intelligent User Interfaces, IUI 2007 - Honolulu, HI, United States
Duration: 2007 Jan 28 - 2007 Jan 31

Other

Other: 12th International Conference on Intelligent User Interfaces, IUI 2007
Country: United States
City: Honolulu, HI
Period: 07/1/28 - 07/1/31

Fingerprint

  • Flavors
  • Inductive logic programming (ILP)
  • Labels
  • Genetic algorithms
  • Semantics

Keywords

  • Adaptive user interface
  • Affective computing
  • Automated reasoning
  • User modeling

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction

Cite this

Legaspi, R., Hashimoto, Y., Moriyama, K., Kurihara, S., & Numao, M. (2007). Music compositional intelligence with an affective flavor. In IUI 2007: 2007 International Conference on Intelligent User Interfaces (pp. 216-224). https://doi.org/10.1145/1216295.1216335

@inproceedings{bcd5f8b780dc4648a6d8fdbebf671eb7,
  title     = "Music compositional intelligence with an affective flavor",
  abstract  = "The consideration of human feelings in automated music generation by intelligent music systems, albeit a compelling theme, has received very little attention. This work aims to computationally specify a system's music compositional intelligence that is tightly coupled with the listener's affective perceptions. First, the system induces a model that describes the relationship between feelings and musical structures. The model is learned by applying the inductive logic programming paradigm of FOIL, coupled with the Diverse Density weighting metric, over a dataset constructed from musical score fragments that were hand-labeled by the listener according to a semantic differential scale that uses bipolar affective descriptor pairs. A genetic algorithm, whose fitness function is based on the acquired model and follows basic music theory, is then used to generate variants of the original musical structures. Lastly, the system creates chordal and non-chordal tones out of the GA-obtained variants. Empirical results show that the system is, on average, 80.6\% accurate in classifying the affective labels of the musical structures and that it is able to automatically generate musical pieces that stimulate four kinds of impressions, namely, favorable-unfavorable, bright-dark, happy-sad, and heartrending-not heartrending.",
  keywords  = "Adaptive user interface, Affective computing, Automated reasoning, User modeling",
  author    = "Roberto Legaspi and Yuya Hashimoto and Koichi Moriyama and Satoshi Kurihara and Masayuki Numao",
  year      = "2007",
  month     = "9",
  day       = "28",
  doi       = "10.1145/1216295.1216335",
  language  = "English",
  isbn      = "1595934812",
  pages     = "216--224",
  booktitle = "IUI 2007",
}
