Polyphonic music comprises independent voices sounding synchronously. The task of voice segregation is to assign the notes of a symbolic representation of a musical score to monophonic voices. Since the human auditory system can distinguish these voices, many previous works draw on perceptual principles. Voice segregation can be applied to music information retrieval and to automatic transcription of polyphonic music. In this paper, we propose a modification of the contig mapping voice segregation algorithm by Chew and Wu. This approach consists of three steps: segmentation, separation, and combining. We modify the combining step on the assumption that voice segregation accuracy depends on correctly identifying which voice is resting. Our algorithm prioritizes voice combining at segmentation boundaries where the voice count increases. We tested our voice segregation algorithm on 78 polyphonic pieces by J. S. Bach. The results show that our algorithm attained an average voice consistency of 92.21%.
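The evaluation figure above is an average voice consistency (AVC). As a minimal sketch, assuming the common definition of the metric (for each predicted voice, the fraction of its notes whose ground-truth label matches that voice's majority ground-truth label, averaged over voices; the paper's exact evaluation procedure may differ), it can be computed as:

```python
from collections import Counter

def average_voice_consistency(pred_voices, true_voices):
    """Sketch of an average-voice-consistency (AVC) score in percent.

    pred_voices: predicted voice label for each note, in score order.
    true_voices: ground-truth voice label for each note, same order.
    """
    # Group the ground-truth labels of the notes assigned to each
    # predicted voice.
    notes_by_voice = {}
    for pred, true in zip(pred_voices, true_voices):
        notes_by_voice.setdefault(pred, []).append(true)

    # A voice's consistency is the share of its notes that agree with
    # its majority ground-truth voice; AVC averages over voices.
    consistencies = []
    for labels in notes_by_voice.values():
        majority_count = Counter(labels).most_common(1)[0][1]
        consistencies.append(majority_count / len(labels))
    return 100.0 * sum(consistencies) / len(consistencies)
```

For example, an assignment in which every predicted voice contains notes from exactly one ground-truth voice scores 100%, regardless of how the voice labels themselves are numbered.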
Publication status: Published - 1 Jan 2011
Event: 8th Sound and Music Computing Conference, SMC 2011 - Padova, Italy
Duration: 6 Jul 2011 → 9 Jul 2011
ASJC Scopus subject areas
- Computer Science (all)