In the design of multimedia database systems, one of the most important issues is how to deal with "Kansei" and "impression" in human beings. The concepts of "Kansei" and "impression" encompass several aspects of sensory recognition, such as human senses, feelings, sensitivity, and psychological reactions. In this paper, we propose an automatic metadata-generation method for extracting impressions of music, such as "agitated," "joyous," "lyrical," "melancholy," and "sentimental," to enable semantic retrieval of music data according to human impressions. We also present an impression-metadata-generation mechanism that reflects impression transitions occurring over time, that is, the temporal transition of a story in music (a music-story). This mechanism computes the impression-strength reflecting such transitions, that is, the "impression-stream" as the temporal transition of a music-story. Our automatic metadata generation for a music-story consists of the following processes: (1) dividing a music-story into sections; (2) extracting impression-metadata for each section; (3) computing the impression-strength of the impression-metadata; (4) weighting the impression-metadata according to impression-strength; and (5) combining the impression-metadata to fit a query structure. Music data with a story consists of several sections, each of which gives an individual impression; the combination of sections gives the global impression of the music data. Our metadata-generation method computes correlations between music data and impression words by reflecting the degree of change of impressions among consecutive sections. This paper presents several experimental results of metadata generation to clarify the feasibility and effectiveness of our method.
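The five processes above could be sketched as follows. This is a minimal illustrative outline only: the section length, the placeholder scoring of impression words, and the choice of the dominant score as impression-strength are all assumptions, not the paper's actual definitions.

```python
# Hypothetical sketch of the five-step impression-metadata pipeline.
# All function names, the fixed section length, and the scoring scheme
# are illustrative assumptions, not the method defined in the paper.

IMPRESSION_WORDS = ["agitated", "joyous", "lyrical", "melancholy", "sentimental"]

def divide_into_sections(music_story, section_length=8):
    """(1) Divide a music-story (here: a list of feature values) into sections."""
    return [music_story[i:i + section_length]
            for i in range(0, len(music_story), section_length)]

def extract_impression_metadata(section):
    """(2) Map a section to a score per impression word (placeholder scoring)."""
    mean = sum(section) / len(section)
    return {word: abs(mean - i) for i, word in enumerate(IMPRESSION_WORDS)}

def impression_strength(scores):
    """(3) Strength of a section's metadata; the dominant score is one simple choice."""
    return max(scores.values())

def weight_metadata(scores, strength):
    """(4) Weight each impression score by the section's impression-strength."""
    return {word: score * strength for word, score in scores.items()}

def combine_metadata(weighted_per_section):
    """(5) Combine per-section metadata into one vector for query matching."""
    combined = {word: 0.0 for word in IMPRESSION_WORDS}
    for weighted in weighted_per_section:
        for word, value in weighted.items():
            combined[word] += value
    n = len(weighted_per_section)
    return {word: value / n for word, value in combined.items()}

def generate_impression_metadata(music_story):
    """Run steps (1)-(5) on a music-story represented as a feature sequence."""
    sections = divide_into_sections(music_story)
    weighted = []
    for section in sections:
        scores = extract_impression_metadata(section)
        weighted.append(weight_metadata(scores, impression_strength(scores)))
    return combine_metadata(weighted)
```

Because each section is scored separately before combination, the pipeline can in principle capture the impression transitions between consecutive sections that the abstract describes, rather than assigning a single static impression to the whole piece.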