A memory gradient method without line search for unconstrained optimization

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The idea of memory gradient methods was first proposed by Miele and Cantrell (1969) and subsequently extended by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method that generates a descent search direction for the objective function at every iteration and converges globally to the solution if the Wolfe conditions are satisfied within the line search strategy. On the other hand, Sun and Zhang (2001) proposed a particular choice of step size that requires no line search, and they applied it to the conjugate gradient method. In this paper, we apply the step size proposed by Sun and Zhang to the memory gradient method proposed by Narushima and Yabe and establish its global convergence.
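The following Python sketch illustrates the general flavor of such a scheme: a search direction built from the current negative gradient plus a damped combination of stored previous directions, combined with a Sun–Zhang-type step size that needs no line search. The memory weights (parameter gamma), the memory length m, the step-size scaling delta, and the simplification Q_k = I are illustrative assumptions for this sketch, not the exact formulas of Narushima and Yabe (2006) or of the paper; note that, because no line search is performed, only gradient evaluations are needed inside the loop.

```python
import numpy as np

def memory_gradient_no_linesearch(grad, x0, m=3, gamma=0.1, delta=0.1,
                                  tol=1e-6, max_iter=10000):
    """Sketch of a memory gradient method with a fixed-formula step size.

    Illustrative assumptions (not the paper's exact formulas):
      - direction: d_k = -g_k + sum_i beta_i * d_{k-i} over the last m stored
        directions, with beta_i damped so d_k remains a descent direction;
      - step size (Sun-Zhang type with Q_k = I):
            alpha_k = delta * (-g_k^T d_k) / ||d_k||^2,
        with delta chosen small relative to the gradient's Lipschitz constant.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    history = []                       # previous directions d_{k-1}, ..., d_{k-m}
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        d = -g
        # hypothetical memory weights: each stored direction contributes at most
        # gamma * ||g|| in norm, so with m * gamma < 1 the direction stays descent
        for d_old in history:
            denom = abs(np.dot(g, d_old)) + np.linalg.norm(g) * np.linalg.norm(d_old)
            if denom > 0:
                d = d + gamma * (np.dot(g, g) / denom) * d_old
        if np.dot(g, d) >= 0:          # safeguard: fall back to steepest descent
            d = -g
        # Sun-Zhang-type step size: no line search, no objective evaluations
        alpha = delta * (-np.dot(g, d)) / np.dot(d, d)
        x = x + alpha * d
        g = grad(x)
        history = ([d] + history)[:m]  # keep the m most recent directions
    return x

if __name__ == "__main__":
    # small strictly convex quadratic test problem f(x) = 0.5 * x^T A x
    A = np.diag([1.0, 2.0, 5.0])
    grad = lambda x: A @ x
    x_star = memory_gradient_no_linesearch(grad, np.ones(3))
    print(x_star)  # should be close to the origin
```

In this sketch delta = 0.1 is kept below the reciprocal of the largest eigenvalue of A (the Lipschitz constant of the gradient), which is the kind of restriction the Sun–Zhang step size relies on in place of a line search.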

Original language: English
Pages (from-to): 191-206
Number of pages: 16
Journal: SUT Journal of Mathematics
Volume: 42
Issue number: 2
Publication status: Published - 2006 Dec 1
Externally published: Yes

Keywords

  • Global convergence
  • Large scale problems
  • Memory gradient method
  • Nonlinear programming
  • Optimization

ASJC Scopus subject areas

  • Mathematics (all)
