A generative model of calligraphy based on image and human motion

Ryotaro Kobayashi, Seiichiro Katsura

Research output: Contribution to journal › Article › peer-review

Abstract

The use of robots is being widely considered as a way to preserve calligraphy, a cultural art particular to China and Japan. Many researchers have attempted to generate motion command values that allow robots to inherit calligraphers' skills. In this paper, we constructed a model that generates motion command values, comprising time-series positions and forces, from human motion data and character images. Because physical constraints make it difficult to acquire large amounts of motion data, we compared the model's accuracy for different numbers of training samples. The model's performance was evaluated on motion data generated from the evaluation images. We also showed that calligraphy is a delicate task and that incorporating force data improves the results.
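The abstract describes a model that maps a character image (plus human motion data) to time-series position and force commands. The paper's actual architecture is not given here, so the following is only a minimal sketch under assumed choices: a flattened linear image encoder, a simple tanh recurrence, and a 3-dimensional output per timestep (pen position x, y and pen force). All dimensions and weights are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a 32x32 character image,
# a 16-dim latent state, and 3 outputs per step: pen x, pen y, pen force.
IMG, LATENT, OUT, STEPS = 32 * 32, 16, 3, 50

# Randomly initialised weights stand in for trained parameters.
W_enc = rng.normal(0, 0.01, (LATENT, IMG))           # image encoder
W_rec = rng.normal(0, 0.01, (LATENT, LATENT + OUT))  # recurrent update
W_out = rng.normal(0, 0.01, (OUT, LATENT))           # command head

def generate_commands(image: np.ndarray) -> np.ndarray:
    """Map a character image to a (STEPS, 3) command sequence of
    time-series pen positions (x, y) and pen forces."""
    h = np.tanh(W_enc @ image.ravel())  # encode the character image
    cmd = np.zeros(OUT)                 # previous command, starts at rest
    seq = []
    for _ in range(STEPS):
        # Feed back the last command so the sequence is autoregressive.
        h = np.tanh(W_rec @ np.concatenate([h, cmd]))
        cmd = W_out @ h                 # next position/force command
        seq.append(cmd)
    return np.array(seq)

commands = generate_commands(rng.random((32, 32)))
print(commands.shape)  # (50, 3): one (x, y, force) triple per timestep
```

Including force in the output vector reflects the paper's finding that force data improves results for such a delicate task; a position-only variant would simply drop the third output channel.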

Original language: English
Pages (from-to): 340-348
Number of pages: 9
Journal: Precision Engineering
Volume: 77
DOIs
Publication status: Published - 2022 Sep

Keywords

  • Deep learning
  • Force control
  • Motion control

ASJC Scopus subject areas

  • Engineering(all)
