Autonomous self-explanation of behavior for interactive reinforcement learning agents

Yosuke Fukuchi, Masahiko Osawa, Hiroshi Yamakawa, Michita Imai

Research output: Contribution to journal › Article › peer-review

Abstract

In cooperation, workers must be able to anticipate how their co-workers will behave. However, an agent's policy, embedded in a statistical machine-learning model, is hard to interpret and requires considerable time and expertise to understand. It is therefore difficult for people to predict the behavior of machine-learning robots, which makes human-robot cooperation challenging. In this paper, we propose Instruction-based Behavior Explanation (IBE), a method for explaining an autonomous agent's future behavior. With IBE, an agent can autonomously acquire the expressions needed to explain its own behavior by reusing the instructions that a human expert gave to accelerate the learning of the agent's policy. IBE also enables a developmental agent, whose policy may change during cooperation, to explain its own behavior with sufficient time granularity.
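The following is a minimal sketch of the core idea as described in the abstract: the agent logs each human instruction together with the behavior it exhibited while following that instruction, then reuses the instruction whose logged behavior best matches its own predicted future behavior. All names, the feature representation, and the nearest-centroid matching rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class IBEExplainer:
    """Hypothetical sketch of Instruction-based Behavior Explanation.

    During interactive RL, record (instruction, behavior-features) pairs;
    at explanation time, reuse the instruction whose recorded behavior is
    closest to the agent's predicted near-future behavior.
    """

    def __init__(self):
        # instruction phrase -> list of behavior-feature vectors observed
        # while the human expert gave that instruction
        self.memory = {}

    def record(self, instruction, behavior_features):
        """Store a pair collected while training under human instruction."""
        self.memory.setdefault(instruction, []).append(
            np.asarray(behavior_features, dtype=float))

    def explain(self, predicted_features):
        """Return the instruction whose mean recorded behavior is nearest
        to the agent's predicted future behavior (Euclidean distance)."""
        predicted = np.asarray(predicted_features, dtype=float)
        best, best_dist = None, float("inf")
        for instruction, samples in self.memory.items():
            centroid = np.mean(samples, axis=0)
            dist = np.linalg.norm(predicted - centroid)
            if dist < best_dist:
                best, best_dist = instruction, dist
        return best

# Usage: suppose behavior features are (mean horizontal, mean vertical)
# displacement over the next few steps (an assumed encoding).
explainer = IBEExplainer()
explainer.record("move right", [1.0, 0.0])
explainer.record("jump", [0.0, 1.0])
print(explainer.explain([0.9, 0.1]))  # -> "move right"
```

Because explanations are generated by matching against the agent's current behavior predictions, this scheme naturally tracks a policy that keeps changing during cooperation, which is the property the abstract highlights for developmental agents.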

Original language: English
Journal: Unknown Journal
Publication status: Published - 2018 Oct 20

Keywords

  • Human Robot Cooperation
  • Instruction-based Behavior Explanation
  • Interactive Reinforcement Learning

ASJC Scopus subject areas

  • General
