TY - JOUR
T1 - Continuous-time value function approximation in reproducing kernel Hilbert spaces
AU - Ohnishi, Motoya
AU - Johansson, Mikael
AU - Yukawa, Masahiro
AU - Sugiyama, Masashi
N1 - Funding Information:
This work was partially conducted when M. Ohnishi was at the GRITS Lab, Georgia Institute of Technology; M. Ohnishi thanks the members of the GRITS Lab, including Dr. Li Wang, and Prof. Magnus Egerstedt for discussions regarding barrier functions. M. Yukawa was supported in part by KAKENHI 18H01446 and 15H02757, M. Johansson was supported in part by the Swedish Research Council and by the Knut and Alice Wallenberg Foundation, and M. Sugiyama was supported in part by KAKENHI 17H00757. Lastly, the authors thank all of the anonymous reviewers for their very insightful comments.
Publisher Copyright:
© 2018 Curran Associates Inc. All rights reserved.
PY - 2018
Y1 - 2018
N2 - Motivated by the success of reinforcement learning (RL) for discrete-time tasks such as AlphaGo and Atari games, there has been a recent surge of interest in using RL for continuous-time control of physical systems (cf. many challenging tasks in OpenAI Gym and DeepMind Control Suite). Since discretization of time is susceptible to error, it is methodologically more desirable to handle the system dynamics directly in continuous time. However, very few techniques exist for continuous-time RL and they lack flexibility in value function approximation. In this paper, we propose a novel framework for model-based continuous-time value function approximation in reproducing kernel Hilbert spaces. The resulting framework is so flexible that it can accommodate any kind of kernel-based approach, such as Gaussian processes and kernel adaptive filters, and it allows us to handle uncertainties and nonstationarity without prior knowledge about the environment or what basis functions to employ. We demonstrate the validity of the presented framework through experiments.
AB - Motivated by the success of reinforcement learning (RL) for discrete-time tasks such as AlphaGo and Atari games, there has been a recent surge of interest in using RL for continuous-time control of physical systems (cf. many challenging tasks in OpenAI Gym and DeepMind Control Suite). Since discretization of time is susceptible to error, it is methodologically more desirable to handle the system dynamics directly in continuous time. However, very few techniques exist for continuous-time RL and they lack flexibility in value function approximation. In this paper, we propose a novel framework for model-based continuous-time value function approximation in reproducing kernel Hilbert spaces. The resulting framework is so flexible that it can accommodate any kind of kernel-based approach, such as Gaussian processes and kernel adaptive filters, and it allows us to handle uncertainties and nonstationarity without prior knowledge about the environment or what basis functions to employ. We demonstrate the validity of the presented framework through experiments.
UR - http://www.scopus.com/inward/record.url?scp=85064833306&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85064833306&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85064833306
SN - 1049-5258
VL - 2018-December
SP - 2813
EP - 2824
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 32nd Conference on Neural Information Processing Systems, NeurIPS 2018
Y2 - 2 December 2018 through 8 December 2018
ER -