We propose a method for generating sentences that describe images. The method does not use image processing techniques; instead, human-annotated image tags serve as the image information from which sentences are generated. We believe that relying on human-annotated tags makes the resulting descriptions more relevant to the image and more specific to the user. Our method generates candidate sentences using Kyoto University's case frame data and Google N-grams, and then extends these candidates so that they describe the image more fully: specifically, it adds segments that fill missing semantic roles and adds modifier segments. To select a single output sentence, we grade the naturalness of each candidate with fuzzy rules, and we score the image relevance of a sentence by computing a word similarity for each of its words. The performance of the proposed system was evaluated through subjective experiments and yielded satisfactory results.
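To make the final selection step concrete, the following is a minimal sketch of how a candidate sentence could be chosen by combining a fuzzy naturalness grade with a tag-based relevance score. The specific membership functions, the character-bigram word similarity, and the equal weighting are illustrative assumptions for this sketch; they are not the paper's actual fuzzy rules or similarity resources.

```python
def word_similarity(word: str, tag: str) -> float:
    """Placeholder similarity: Jaccard overlap of character bigrams
    (an assumption; the paper uses its own word-similarity scoring)."""
    bigrams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)} or {s}
    a, b = bigrams(word), bigrams(tag)
    return len(a & b) / len(a | b)

def relevance(sentence: str, tags: list[str]) -> float:
    """Image relevance: average of each word's best similarity to any tag."""
    words = sentence.split()
    return sum(max(word_similarity(w, t) for t in tags) for w in words) / len(words)

def naturalness(sentence: str) -> float:
    """Toy fuzzy grading: memberships for 'good length' and
    'low repetition', combined with min (fuzzy AND)."""
    words = sentence.split()
    n = len(words)
    # Triangular membership peaking at 7 words (assumed shape).
    good_length = max(0.0, min((n - 2) / 5, (15 - n) / 8))
    low_repetition = len(set(words)) / n
    return min(good_length, low_repetition)

def select_best(candidates: list[str], tags: list[str]) -> str:
    """Pick the candidate maximizing an equally weighted sum of both grades."""
    return max(candidates, key=lambda s: 0.5 * naturalness(s) + 0.5 * relevance(s, tags))

tags = ["dog", "beach", "running"]
candidates = [
    "a dog is running on the beach",
    "dog beach dog beach dog",
]
print(select_best(candidates, tags))  # favors the natural, tag-relevant sentence
```

In this toy setup the repetitive candidate scores well on tag relevance but is penalized by the fuzzy naturalness grade, illustrating why both scores are needed to pick a sentence that is readable as well as image-relevant.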