Abstract
Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The idea of memory gradient methods was first proposed by Miele and Cantrell (1969) and subsequently extended by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method that generates a descent search direction for the objective function at every iteration and converges globally to the solution when the Wolfe conditions are satisfied within the line search strategy. On the other hand, Sun and Zhang (2001) proposed a particular choice of step size and applied it to the conjugate gradient method. In this paper, we apply the step size proposed by Sun and Zhang to the memory gradient method of Narushima and Yabe and establish its global convergence.
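Since the abstract states no formulas, the following Python sketch only illustrates the general shape of such a method: a search direction built from the current gradient plus a short memory of previous directions, paired with a closed-form, line-search-free step size of the Sun-Zhang type, here assumed to be α_k = -δ g_kᵀ d_k / (L ‖d_k‖²) with δ ∈ (0, 1) and a Lipschitz-type constant L. The memory weights, the constant L, and all names below are illustrative assumptions, not the actual formulas of Narushima-Yabe or Sun-Zhang.

```python
import numpy as np

def memory_gradient_sketch(grad, x0, delta=0.5, L=10.0, m=2,
                           tol=1e-8, max_iter=1000):
    """Hedged sketch of a memory gradient iteration.

    Not the Narushima-Yabe method itself: the direction is the negative
    gradient corrected by the last `m` stored directions with damped,
    illustrative weights, and the step size uses an assumed
    Sun-Zhang-style fixed formula instead of a line search.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    past_dirs = []  # memory of recent search directions
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Direction: steepest descent plus a damped combination of
        # stored directions (weights are illustrative only).
        d = -g
        for i, d_old in enumerate(past_dirs, start=1):
            beta = (np.linalg.norm(g)
                    / (np.linalg.norm(g) + np.linalg.norm(d_old))) / 2.0 ** i
        # Guard: fall back to steepest descent if d is not a
        # descent direction (the actual method guarantees descent).
            d = d + beta * d_old
        if g @ d >= 0:
            d = -g
        # Assumed Sun-Zhang-type step size: closed form, no line search.
        alpha = -delta * (g @ d) / (L * (d @ d))
        x = x + alpha * d
        g = grad(x)
        past_dirs = ([d] + past_dirs)[:m]  # keep at most m directions
    return x

# Usage on a simple convex quadratic with minimizer at the origin.
A = np.diag([1.0, 5.0, 25.0])
x_star = memory_gradient_sketch(lambda x: A @ x, x0=np.ones(3))
print(x_star)  # should approach the zero vector
```

The point of the fixed-formula step size is that each iteration costs one gradient evaluation and no function evaluations, which is what makes this combination attractive for large-scale problems.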
| Original language | English |
| --- | --- |
| Pages (from-to) | 191-206 |
| Number of pages | 16 |
| Journal | SUT Journal of Mathematics |
| Volume | 42 |
| Issue number | 2 |
| Publication status | Published - 2006 Dec 1 |
| Externally published | Yes |
Keywords
- Global convergence
- Large scale problems
- Memory gradient method
- Nonlinear programming
- Optimization
ASJC Scopus subject areas
- Mathematics (all)