Infinite-horizon deterministic dynamic programming in discrete time: a monotone convergence principle and a penalty method

Takashi Kamihigashi, Masayuki Yao

Research output: Contribution to journal › Article › peer-review

Abstract

We consider infinite-horizon deterministic dynamic programming problems in discrete time. We show that the value function of such a problem is always a fixed point of a modified version of the Bellman operator. We also show that value iteration converges increasingly to the value function if the initial function is dominated by the value function, is mapped upward by the modified Bellman operator, and satisfies a transversality-like condition. These results require no assumptions beyond the general framework of infinite-horizon deterministic dynamic programming. As an application, we show that the value function can be approximated by computing the value function of an unconstrained version of the problem in which the constraint is replaced by a penalty function.
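To make the two ingredients of the abstract concrete, the following is a minimal Python sketch, not taken from the paper: value iteration on a simple cake-eating problem, once with the feasibility constraint enforced directly and once with the constraint replaced by a penalty term, in the spirit of the penalty method described above. The grid, reward, discount factor, penalty weight `k`, and initial guess are all illustrative assumptions, and the operator used here is the standard Bellman operator rather than the paper's modified version; the article's framework and its exact conditions on the initial function are far more general.

```python
import numpy as np

# Illustrative sketch (not from the paper): value iteration for a simple
# deterministic cake-eating problem. State x = cake remaining, action
# y = cake kept for tomorrow, feasible when 0 <= y <= x; reward log(x - y).

beta = 0.95                                  # discount factor (assumption)
grid = np.linspace(1e-3, 1.0, 200)           # state/action grid (assumption)

def reward(x, y):
    # Log consumption, clipped away from zero to keep values finite on the grid.
    return np.log(np.maximum(x - y, 1e-12))

def bellman(v):
    # Constrained operator: (Tv)(x) = max_{0 <= y <= x} [log(x - y) + beta v(y)].
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    vals = reward(X, Y) + beta * v[np.newaxis, :]
    vals[Y > X] = -np.inf                    # enforce feasibility y <= x
    return vals.max(axis=1)

def bellman_penalty(v, k=1e4):
    # Unconstrained version: the constraint y <= x is dropped and violations
    # are charged k * max(y - x, 0). The penalty form and weight k are
    # assumptions made for this sketch, not the paper's construction.
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    vals = reward(X, Y) + beta * v[np.newaxis, :] - k * np.maximum(Y - X, 0.0)
    return vals.max(axis=1)

def iterate(T, v0, tol=1e-8, max_iter=2000):
    # Plain value iteration v_{n+1} = T v_n until the sup-norm change is small.
    v = v0
    for _ in range(max_iter):
        v_new = T(v)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

v0 = reward(grid, 0.0)                       # simple initial guess (assumption)
v_constrained = iterate(bellman, v0)
v_penalized = iterate(bellman_penalty, v0)

# With a large penalty weight, the unconstrained iteration approximates the
# constrained value function, illustrating the penalty-method idea.
print(np.max(np.abs(v_constrained - v_penalized)))
```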

Original language: English
Pages (from-to): 1899-1908
Number of pages: 10
Journal: Optimization
Volume: 65
Issue number: 10
Publication status: Published - 2 Oct 2016

Keywords

  • Bellman operator
  • Dynamic programming
  • fixed point
  • penalty method
  • value iteration

ASJC Scopus subject areas

  • Control and Optimization
  • Management Science and Operations Research
  • Applied Mathematics

