R-learning with multiple state-action value tables

Koichiro Ishikawa, Akito Sakurai, Tsutomu Fujinami, Susumu Kunifuji

Research output: Contribution to journal › Article › peer-review

Abstract

We propose a method to improve the performance of R-learning, a reinforcement learning algorithm, by using multiple state-action value tables. Unlike Q-learning or Sarsa, R-learning learns a policy that maximizes undiscounted rewards. The multiple state-action value tables induce substantial exploration when it is needed, which allows R-learning to work well. The efficiency of the proposed method is verified through experiments in a simulated environment.
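For readers unfamiliar with the base algorithm, the sketch below shows standard single-table R-learning (Schwartz, 1993), which the paper builds on; it is not the authors' multiple-table method, whose details are not given in this abstract. The TwoStateChain environment and its step/actions interface are hypothetical, included only to make the example runnable.

```python
import random


class TwoStateChain:
    """Tiny hypothetical continuing task, included only so the sketch runs."""

    def __init__(self):
        self.s = 0  # current state: 0 or 1

    def actions(self, s):
        return ["stay", "move"]

    def step(self, a):
        if a == "move":
            self.s = 1 - self.s
        # reward only for staying in state 1, so the optimal policy is
        # "move once, then stay", with an average reward approaching 1
        r = 1.0 if (self.s == 1 and a == "stay") else 0.0
        return self.s, r


def r_learning(env, steps=50_000, alpha=0.1, beta=0.01, epsilon=0.1):
    """Tabular R-learning (Schwartz, 1993): a single state-action value table
    is learned relative to a running estimate rho of the average reward."""
    R = {}          # state-action value table R(s, a)
    rho = 0.0       # estimate of the average (undiscounted) reward per step

    def q(s, a):
        return R.get((s, a), 0.0)

    s = env.s
    for _ in range(steps):
        # epsilon-greedy exploration over the single table
        greedy = random.random() >= epsilon
        acts = env.actions(s)
        a = max(acts, key=lambda a_: q(s, a_)) if greedy else random.choice(acts)

        s2, r = env.step(a)
        best_next = max(q(s2, a_) for a_ in env.actions(s2))

        # temporal-difference error measured relative to rho,
        # not relative to a discounted return
        delta = r - rho + best_next - q(s, a)
        if greedy:
            # the average-reward estimate is adjusted only after greedy actions
            rho += beta * delta
        R[(s, a)] = q(s, a) + alpha * delta

        s = s2
    return R, rho


if __name__ == "__main__":
    table, avg_reward = r_learning(TwoStateChain())
    print("estimated average reward:", round(avg_reward, 3))
```

The paper's contribution replaces the single table R above with several such tables whose interaction produces additional exploration as needed; that mechanism is described in the full article, not in this sketch.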

Original language: English
Pages (from-to): 34-47
Number of pages: 14
Journal: Electrical Engineering in Japan (English translation of Denki Gakkai Ronbunshi)
Volume: 159
Issue number: 3
DOIs
Publication status: Published - May 2007
Externally published: Yes

Keywords

  • Autonomous mobile robot
  • R-learning
  • Reinforcement learning

ASJC Scopus subject areas

  • Energy Engineering and Power Technology
  • Electrical and Electronic Engineering
