Abstract
Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. They were first proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method that generates a descent search direction for the objective function at every iteration. We show that the method converges globally provided the line search satisfies the Wolfe conditions. Our numerical results show that the proposed method is efficient on standard test problems when the parameter included in the method is chosen appropriately.
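This record does not give the paper's actual direction update, so the following is only a minimal sketch of the general idea: a memory gradient step blends the current negative gradient with the previous search direction, and a Wolfe line search supplies the step size. The function name `memory_gradient`, the fixed mixing parameter `beta`, and the descent safeguard are illustrative assumptions, not the authors' method; SciPy's `line_search` enforces the strong Wolfe conditions.

```python
import numpy as np
from scipy.optimize import line_search


def memory_gradient(f, grad, x0, beta=0.2, tol=1e-6, max_iter=2000):
    """Sketch of a generic memory gradient iteration (not the paper's rule).

    Direction: d_k = -g_k + beta * d_{k-1}, i.e. the negative gradient plus
    a "memory" term that reuses the previous direction. `beta` is a
    hypothetical fixed parameter chosen here for illustration only.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # first iteration: plain steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # line_search finds a step satisfying the strong Wolfe conditions:
        #   f(x + a d) <= f(x) + c1 * a * g^T d      (sufficient decrease)
        #   |grad(x + a d)^T d| <= c2 * |g^T d|      (curvature)
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:
            # Wolfe search failed; restart from steepest descent
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0] or 1e-4
        x = x + alpha * d
        g = grad(x)
        d = -g + beta * d  # memory term: blend in the previous direction
        if g @ d >= 0:
            # safeguard: fall back to -g if d is not a descent direction
            d = -g
    return x


if __name__ == "__main__":
    # Usage example on the Rosenbrock function (a standard test problem)
    from scipy.optimize import rosen, rosen_der

    x_star = memory_gradient(rosen, rosen_der, np.array([-1.2, 1.0]))
    print(x_star)  # should approach the minimizer (1, 1)
```

The descent safeguard makes the sketch robust but crude; the point of methods like the one in this paper is to choose the direction parameters so that a descent direction is guaranteed at every iteration without such a fallback.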
| Original language | English |
| --- | --- |
| Pages (from-to) | 325-346 |
| Number of pages | 22 |
| Journal | Computational Optimization and Applications |
| Volume | 35 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 2006 Nov |
| Externally published | Yes |
Keywords
- Descent search direction
- Global convergence
- Memory gradient method
- Unconstrained optimization
- Wolfe conditions
ASJC Scopus subject areas
- Control and Optimization
- Computational Mathematics
- Applied Mathematics