Prioritized contig combining to segregate voices in polyphonic music

Asako Ishigaki, Masaki Matsubara, Hiroaki Saito

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Polyphonic music consists of independent voices sounding synchronously. The task of voice segregation is to assign the notes of a symbolic musical score to monophonic voices. Since the human auditory sense can distinguish these voices, many previous works utilize perceptual principles. Voice segregation can be applied to music information retrieval and to automatic transcription of polyphonic music. In this paper, we propose a modification of the contig mapping approach to voice segregation by Chew and Wu, which consists of three steps: segmentation, separation, and combining. We modify the "combining" step on the assumption that the accuracy of voice segregation depends on correctly identifying which voice is resting. Our algorithm prioritizes voice combining at segmentation boundaries where the voice count increases. We tested our algorithm on 78 pieces of polyphonic music by J. S. Bach; it attained an average voice consistency of 92.21%.
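The combining priority described in the abstract can be sketched in code. This is an illustrative sketch only, not the authors' implementation: the note representation, the detection of contig boundaries as voice-count changes, and the priority rule are all assumptions made for the example.

```python
# Sketch (not the authors' code): find contig boundaries -- times where
# the number of simultaneously sounding voices changes -- then order them
# so that boundaries where the voice count increases (a voice re-enters
# after resting) are processed first, as in the modified "combining" step.
from dataclasses import dataclass


@dataclass
class Note:
    onset: float
    offset: float
    pitch: int  # MIDI pitch number


def voice_count_profile(notes, times):
    """Number of simultaneously sounding notes at each query time."""
    return [sum(1 for n in notes if n.onset <= t < n.offset) for t in times]


def contig_boundaries(notes):
    """Times where the sounding-voice count changes, as
    (time, count_before, count_after) tuples."""
    events = sorted({n.onset for n in notes} | {n.offset for n in notes})
    counts = voice_count_profile(notes, events)
    return [
        (events[i], counts[i - 1], counts[i])
        for i in range(1, len(events))
        if counts[i] != counts[i - 1]
    ]


def prioritized_boundaries(boundaries):
    """Order boundaries so voice-count increases come first; within each
    group, the stable sort preserves temporal order."""
    return sorted(boundaries, key=lambda b: 0 if b[2] > b[1] else 1)
```

For example, a piece where a second voice rests and then re-enters yields one increasing boundary, which `prioritized_boundaries` moves to the front of the combining queue.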

Original language: English
Title of host publication: Proceedings of the 8th Sound and Music Computing Conference, SMC 2011
Publisher: Sound and Music Computing Network
Publication status: Published - 2011
Event: 8th Sound and Music Computing Conference, SMC 2011 - Padova, Italy
Duration: 2011 Jul 6 – 2011 Jul 9


ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Ishigaki, A., Matsubara, M., & Saito, H. (2011). Prioritized contig combining to segregate voices in polyphonic music. In Proceedings of the 8th Sound and Music Computing Conference, SMC 2011. Sound and Music Computing Network.

