Global convergence of a memory gradient method for unconstrained optimization

Yasushi Narushima, Hiroshi Yabe

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. They were first proposed by Miele and Cantrell (1969) and by Cragg and Levy (1969). In this paper, we present a new memory gradient method that generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution if the Wolfe conditions are satisfied within the framework of the line search strategy. Our numerical results show that the proposed method is efficient on standard test problems when the parameter included in the method is chosen well.
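To illustrate the class of methods the abstract describes, the following is a minimal sketch of a generic memory gradient iteration, d_k = -g_k + β_k d_{k-1}, combined with a simple bisection line search for the (weak) Wolfe conditions. The β_k formula used here is a hypothetical damped choice made for illustration, not the formula proposed in the paper; it is picked only so that the descent property g_kᵀd_k < 0 provably holds at every iteration, mirroring the property the paper establishes for its own method.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bisection search for a step length satisfying the weak Wolfe conditions."""
    alpha, lo, hi = 1.0, 0.0, np.inf
    f0, g0 = f(x), grad(x).dot(d)          # g0 < 0 since d is a descent direction
    for _ in range(max_iter):
        if f(x + alpha * d) > f0 + c1 * alpha * g0:
            hi = alpha                      # Armijo condition fails: shrink step
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d).dot(d) < c2 * g0:
            lo = alpha                      # curvature condition fails: grow step
            alpha = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
        else:
            return alpha                    # both Wolfe conditions satisfied
    return alpha

def memory_gradient(f, grad, x0, tol=1e-6, max_iter=1000):
    """Generic memory gradient iteration: d_k = -g_k + beta_k * d_{k-1}."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first step is steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = wolfe_line_search(f, grad, x, d)
        x = x + alpha * d
        g_new = grad(x)
        # Illustrative beta (an assumption, not the paper's formula): the
        # damping guarantees g_new . d_new <= -0.5 ||g_new||^2 < 0, so the
        # new direction is always a descent direction.
        gg = g_new.dot(g_new)
        beta = 0.5 * gg / (gg + abs(g_new.dot(d)))
        d = -g_new + beta * d
        g = g_new
    return x
```

For example, on the convex quadratic f(x) = (x₁² + 10x₂²)/2, `memory_gradient(f, grad, [3.0, -2.0])` drives the gradient norm below the tolerance and returns a point near the minimizer at the origin.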

Original language: English
Pages (from-to): 325-346
Number of pages: 22
Journal: Computational Optimization and Applications
Volume: 35
Issue number: 3
DOIs
Publication status: Published - 2006 Nov
Externally published: Yes

Keywords

  • Descent search direction
  • Global convergence
  • Memory gradient method
  • Unconstrained optimization
  • Wolfe conditions

ASJC Scopus subject areas

  • Control and Optimization
  • Computational Mathematics
  • Applied Mathematics

