Extended Reproduction of Demonstration Motion Using Variational Autoencoder

Daisuke Takahashi, Seiichiro Katsura

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Learning from demonstration (LfD) is an effective method for robot motion learning because it does not require a hand-coded cost function. However, the number of demonstrations that can be performed is limited, and performing a demonstration under every environmental condition is difficult. Therefore, an algorithm that generates motion data not obtained from demonstrations is required. To address this problem, this research constructs a motion latent space by abstracting the demonstration data. The motion latent space is a lower-dimensional space that expresses the demonstration motion, and the demonstration data can be extended by decoding points in the latent space. This is realized by applying a variational autoencoder (VAE), originally used in the field of image generation, to time-series data. Demonstrations of a reaching task are conducted, and the paper shows that the manipulator can reach the object even when it is located at a position different from those in the demonstrations.
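The abstract's core idea can be sketched in code: encode a demonstrated trajectory into a low-dimensional latent space, then decode nearby latent points to obtain motions not present in the demonstrations. The sketch below is a minimal, untrained NumPy illustration of that encode/reparameterize/decode pipeline, not the authors' implementation; all layer sizes, weight names, and the example trajectory are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D = 50, 2   # time steps and joint dimensions of one demonstration (assumed)
LATENT = 2     # dimensionality of the motion latent space (assumed)
H = 32         # hidden units (assumed; the paper does not specify sizes here)

# Random weights stand in for trained VAE parameters.
W_enc = rng.normal(0, 0.1, (T * D, H))
W_mu = rng.normal(0, 0.1, (H, LATENT))
W_logvar = rng.normal(0, 0.1, (H, LATENT))
W_dec1 = rng.normal(0, 0.1, (LATENT, H))
W_dec2 = rng.normal(0, 0.1, (H, T * D))

def encode(traj):
    """Map a (T, D) trajectory to the latent mean and log-variance."""
    h = np.tanh(traj.reshape(-1) @ W_enc)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z):
    """Map a latent point back to a (T, D) trajectory."""
    return (np.tanh(z @ W_dec1) @ W_dec2).reshape(T, D)

# Encode one demonstration, then decode a shifted latent point: this is the
# "extension" idea -- new motions come from latent points not seen in training.
demo = np.stack([np.linspace(0, 1, T), np.sin(np.linspace(0, np.pi, T))], axis=1)
mu, logvar = encode(demo)
new_motion = decode(reparameterize(mu, logvar) + 0.1)
```

In a trained model the weights would be fitted by maximizing the evidence lower bound (reconstruction term plus a KL penalty on the latent distribution); here the point is only the data flow from demonstration to latent point to generated motion.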

Original language: English
Title of host publication: Proceedings - 2018 IEEE 27th International Symposium on Industrial Electronics, ISIE 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1057-1062
Number of pages: 6
Volume: 2018-June
ISBN (Print): 9781538637050
DOIs: 10.1109/ISIE.2018.8433683
Publication status: Published - 2018 Aug 10
Event: 27th IEEE International Symposium on Industrial Electronics, ISIE 2018 - Cairns, Australia
Duration: 2018 Jun 13 - 2018 Jun 15

Other

Other: 27th IEEE International Symposium on Industrial Electronics, ISIE 2018
Country: Australia
City: Cairns
Period: 18/6/13 - 18/6/15

Fingerprint

Demonstrations
End effectors
Cost functions
Manipulators
Decoding
Time series
Robots

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Control and Systems Engineering

Cite this

Takahashi, D., & Katsura, S. (2018). Extended Reproduction of Demonstration Motion Using Variational Autoencoder. In Proceedings - 2018 IEEE 27th International Symposium on Industrial Electronics, ISIE 2018 (Vol. 2018-June, pp. 1057-1062). [8433683] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ISIE.2018.8433683

@inproceedings{b6a3ad45ca0444c480f79c2e8a4a4635,
title = "Extended Reproduction of Demonstration Motion Using Variational Autoencoder",
abstract = "Learning from demonstration (LfD) is an effective method for robot motion learning because it does not require a hand-coded cost function. However, the number of demonstrations that can be performed is limited, and performing a demonstration under every environmental condition is difficult. Therefore, an algorithm that generates motion data not obtained from demonstrations is required. To address this problem, this research constructs a motion latent space by abstracting the demonstration data. The motion latent space is a lower-dimensional space that expresses the demonstration motion, and the demonstration data can be extended by decoding points in the latent space. This is realized by applying a variational autoencoder (VAE), originally used in the field of image generation, to time-series data. Demonstrations of a reaching task are conducted, and the paper shows that the manipulator can reach the object even when it is located at a position different from those in the demonstrations.",
author = "Daisuke Takahashi and Seiichiro Katsura",
year = "2018",
month = "8",
day = "10",
doi = "10.1109/ISIE.2018.8433683",
language = "English",
isbn = "9781538637050",
volume = "2018-June",
pages = "1057--1062",
booktitle = "Proceedings - 2018 IEEE 27th International Symposium on Industrial Electronics, ISIE 2018",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}
