Abstract
We consider infinite-horizon deterministic dynamic programming problems in discrete time. We show that the value function of such a problem is always a fixed point of a modified version of the Bellman operator. We also show that value iteration converges increasingly to the value function if the initial function is dominated by the value function, is mapped upward by the modified Bellman operator, and satisfies a transversality-like condition. These results require no assumptions beyond the general framework of infinite-horizon deterministic dynamic programming. As an application, we show that the value function can be approximated by computing the value function of an unconstrained version of the problem in which the constraint is replaced by a penalty function.
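Although the paper works with a modified Bellman operator under much weaker assumptions, both claims admit a minimal numerical sketch using the standard operator on a toy finite problem. In the Python sketch below, everything (the random transition and reward data, the feasibility mask, and the penalty construction) is an illustrative assumption, not the paper's model: starting value iteration from a constant lower bound v0 with T v0 ≥ v0 yields monotonically increasing iterates, and replacing the feasibility constraint by an increasingly harsh penalty drives the penalized value function toward the constrained one.

```python
import numpy as np

# Toy finite deterministic control problem; every name and number here is an
# illustrative assumption, not taken from the paper.
rng = np.random.default_rng(0)
n, m, beta = 6, 3, 0.9                         # states, actions, discount
next_state = rng.integers(0, n, size=(n, m))   # deterministic transitions
r = rng.uniform(-1.0, 1.0, size=(n, m))        # one-period rewards
feasible = rng.random((n, m)) < 0.7            # admissible-action constraint
feasible[:, 0] = True                          # at least one feasible action

def bellman(v, penalty=np.inf):
    """(Tv)(s) = max_a [ r(s,a) - penalty*1{a infeasible} + beta*v(next(s,a)) ]."""
    q = r + beta * v[next_state]
    return np.where(feasible, q, q - penalty).max(axis=1)

def value_iteration(penalty=np.inf, tol=1e-10, max_iter=5000):
    # Start from the constant lower bound v0 = (min reward)/(1 - beta),
    # shifted down by the penalty when one is used. Then T v0 >= v0, so the
    # iterates increase monotonically toward the value function, mirroring
    # the "mapped upward" condition in the abstract.
    low = r.min() - (0.0 if np.isinf(penalty) else penalty)
    v = np.full(n, low / (1.0 - beta))
    for _ in range(max_iter):
        v_new = bellman(v, penalty)
        assert np.all(v_new >= v - 1e-12)       # iterates never decrease
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v

v_constrained = value_iteration()               # infeasible actions excluded
for p in (1.0, 10.0, 100.0):
    gap = np.max(np.abs(value_iteration(penalty=p) - v_constrained))
    print(f"penalty {p:6.1f}: max deviation from constrained value {gap:.6f}")
```

The starting point v0 = r_min/(1 − beta) is the natural choice here because it is dominated by the value function and mapped upward by the operator, the two conditions highlighted in the abstract; the transversality-like condition is automatic in this bounded finite setting.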
| Original language | English |
| --- | --- |
| Pages (from-to) | 1899-1908 |
| Number of pages | 10 |
| Journal | Optimization |
| Volume | 65 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 2016 Oct 2 |
Keywords
- Bellman operator
- dynamic programming
- fixed point
- penalty method
- value iteration
ASJC Scopus subject areas
- Control and Optimization
- Management Science and Operations Research
- Applied Mathematics