TY - GEN
T1 - Acquisition of viewpoint representation in imitative learning from own sensory-motor experiences
AU - Nakajo, Ryoichi
AU - Murata, Shingo
AU - Arie, Hiroaki
AU - Ogata, Tetsuya
PY - 2015/12/2
Y1 - 2015/12/2
N2 - This paper introduces an imitative model that enables a robot to acquire viewpoints of the self and others from its own sensory-motor experiences. This ability is important for recognizing and imitating actions observed from various directions. Existing methods require coordinate transformations specified by human designers or complex learning modules to acquire a viewpoint. In the proposed model, several neurons dedicated to the generated actions and the viewpoints of the self and others are added to a dynamic neural network model referred to as a continuous-time recurrent neural network (CTRNN). The training data are labeled with types of actions and viewpoints, and these labels are linked to each internal state. We implemented this model in a robot and trained the model to perform object-manipulation actions. Representations of behavior and viewpoint were formed in the internal states of the CTRNN. In addition, we analyzed the initial values of the internal states that represent the viewpoint information, and confirmed that the distinction between observational perspectives of others' actions self-organized in the space of the initial values. By combining the initial values of the internal states that describe the behavior and the viewpoint, the system can generate unlearned data.
AB - This paper introduces an imitative model that enables a robot to acquire viewpoints of the self and others from its own sensory-motor experiences. This ability is important for recognizing and imitating actions observed from various directions. Existing methods require coordinate transformations specified by human designers or complex learning modules to acquire a viewpoint. In the proposed model, several neurons dedicated to the generated actions and the viewpoints of the self and others are added to a dynamic neural network model referred to as a continuous-time recurrent neural network (CTRNN). The training data are labeled with types of actions and viewpoints, and these labels are linked to each internal state. We implemented this model in a robot and trained the model to perform object-manipulation actions. Representations of behavior and viewpoint were formed in the internal states of the CTRNN. In addition, we analyzed the initial values of the internal states that represent the viewpoint information, and confirmed that the distinction between observational perspectives of others' actions self-organized in the space of the initial values. By combining the initial values of the internal states that describe the behavior and the viewpoint, the system can generate unlearned data.
UR - http://www.scopus.com/inward/record.url?scp=84962148493&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84962148493&partnerID=8YFLogxK
U2 - 10.1109/DEVLRN.2015.7346166
DO - 10.1109/DEVLRN.2015.7346166
M3 - Conference contribution
AN - SCOPUS:84962148493
T3 - 5th Joint International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2015
SP - 326
EP - 331
BT - 5th Joint International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2015
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 5th Joint International Conference on Development and Learning and Epigenetic Robotics, ICDL-EpiRob 2015
Y2 - 13 August 2015 through 16 August 2015
ER -