Continuous-time value function approximation in reproducing kernel Hilbert spaces

Motoya Ohnishi, Mikael Johansson, Masahiro Yukawa, Masashi Sugiyama

Research output: Conference article › peer-review

2 Citations (Scopus)

Abstract

Motivated by the success of reinforcement learning (RL) for discrete-time tasks such as AlphaGo and Atari games, there has been a recent surge of interest in using RL for continuous-time control of physical systems (cf. many challenging tasks in OpenAI Gym and DeepMind Control Suite). Since discretization of time is susceptible to error, it is methodologically more desirable to handle the system dynamics directly in continuous time. However, very few techniques exist for continuous-time RL and they lack flexibility in value function approximation. In this paper, we propose a novel framework for model-based continuous-time value function approximation in reproducing kernel Hilbert spaces. The resulting framework is so flexible that it can accommodate any kind of kernel-based approach, such as Gaussian processes and kernel adaptive filters, and it allows us to handle uncertainties and nonstationarity without prior knowledge about the environment or what basis functions to employ. We demonstrate the validity of the presented framework through experiments.
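To make the idea of kernel-based (RKHS) value function approximation concrete, the sketch below fits a value function by kernel ridge regression over sampled continuous states. This is not the paper's algorithm; the function names, RBF kernel choice, and hyperparameters are illustrative assumptions only.

```python
# Minimal sketch (not the paper's method): kernel ridge regression as a
# value-function approximator over continuous states, illustrating the kind
# of RKHS function approximation referred to in the abstract.
import numpy as np

def rbf_kernel(X, Y, lengthscale=0.5):
    """Gaussian (RBF) kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def fit_value_function(states, value_targets, reg=1e-3, lengthscale=0.5):
    """Fit V(s) ≈ sum_i alpha_i k(s, s_i) by kernel ridge regression."""
    K = rbf_kernel(states, states, lengthscale)
    alpha = np.linalg.solve(K + reg * np.eye(len(states)), value_targets)
    return lambda s: rbf_kernel(np.atleast_2d(s), states, lengthscale) @ alpha

# Toy usage: noisy value samples on a 1-D state space.
rng = np.random.default_rng(0)
S = rng.uniform(-1.0, 1.0, size=(50, 1))                             # sampled states
V = np.cos(2.0 * np.pi * S[:, 0]) + 0.05 * rng.standard_normal(50)   # noisy value targets
V_hat = fit_value_function(S, V)
print(V_hat(np.array([0.25])))                                       # estimated value at s = 0.25
```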

Original language: English
Pages (from-to): 2813-2824
Number of pages: 12
Journal: Advances in Neural Information Processing Systems
Volume: 2018-December
Publication status: Published - 2018
Event: 32nd Conference on Neural Information Processing Systems, NeurIPS 2018 - Montreal, Canada
Duration: 2 Dec 2018 → 8 Dec 2018

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
