Abstract
We consider infinite-horizon deterministic dynamic programming problems in discrete time. We show that the value function of such a problem is always a fixed point of a modified version of the Bellman operator. We also show that value iteration converges increasingly to the value function if the initial function is dominated by the value function, is mapped upward by the modified Bellman operator, and satisfies a transversality-like condition. These results require no assumptions beyond the general framework of infinite-horizon deterministic dynamic programming. As an application, we show that the value function can be approximated by computing the value function of an unconstrained version of the problem in which the constraint is replaced by a penalty function.
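The two computational ideas in the abstract, value iteration from below and penalty-based relaxation of the feasibility constraint, can be illustrated with a minimal sketch. The paper's framework is abstract, so the model, grid, discount factor, and penalty coefficient below are all hypothetical stand-ins chosen only to show the generic shape of the two procedures; none of them comes from the paper itself.

```python
import numpy as np

# Hypothetical toy model (not from the paper): deterministic growth with
# V(k) = max_{0 < k' < f(k)} [ log(f(k) - k') + beta * V(k') ].
beta = 0.95                          # discount factor (assumed)
grid = np.linspace(0.1, 4.0, 200)    # capital grid (assumed)

def f(k):
    return k ** 0.3                  # production function (assumed)

def bellman(V):
    """Standard Bellman operator with the feasibility constraint enforced."""
    TV = np.empty_like(V)
    for i, k in enumerate(grid):
        c = f(k) - grid              # consumption at each candidate k'
        feasible = c > 0             # feasibility: k' must leave c > 0
        c_safe = np.where(feasible, c, 1.0)   # keep log defined off the set
        vals = np.where(feasible, np.log(c_safe) + beta * V, -np.inf)
        TV[i] = vals.max()
    return TV

def bellman_penalty(V, kappa):
    """Unconstrained variant: the constraint is replaced by a penalty term."""
    TV = np.empty_like(V)
    for i, k in enumerate(grid):
        c = f(k) - grid
        p = np.maximum(-c, 0.0)      # zero on the feasible set, positive off it
        c_safe = np.maximum(c, 1e-10)
        TV[i] = np.max(np.log(c_safe) + beta * V - kappa * p)
    return TV

# Value iteration: iterate the operator until successive iterates are close.
# The zero initial guess is illustrative only; the paper's conditions on the
# initial function (domination, upward mapping, transversality-like condition)
# are not checked here.
V = np.zeros_like(grid)
for _ in range(1000):
    V_next = bellman(V)
    if np.max(np.abs(V_next - V)) < 1e-8:
        break
    V = V_next

# With a large penalty coefficient, the unconstrained iteration should come
# close to the constrained one, mirroring the paper's approximation result.
W = np.zeros_like(grid)
for _ in range(1000):
    W = bellman_penalty(W, kappa=1e4)
```

The penalty term used here is one generic choice; which penalty functions make the approximation valid is exactly what the paper's conditions govern.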
| Original language | English |
| --- | --- |
| Pages (from-to) | 1899-1908 |
| Number of pages | 10 |
| Journal | Optimization |
| Volume | 65 |
| Issue | 10 |
| DOI | |
| Publication status | Published - 2016 Oct 2 |
ASJC Scopus subject areas
- Control and Optimization
- Management Science and Operations Research
- Applied Mathematics