Global convergence of a memory gradient method for unconstrained optimization

Yasushi Narushima, Hiroshi Yabe

Research output: Article › peer-review

21 Citations (Scopus)

Abstract

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The idea of memory gradient methods was first proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method that generates a descent search direction for the objective function at every iteration. We show that our method converges globally to the solution if the Wolfe conditions are satisfied within the framework of the line search strategy. Our numerical results show that the proposed method is efficient on standard test problems if a suitable value is chosen for the parameter included in the method.
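For reference, a general memory gradient iteration combines the current steepest-descent direction with one or more previous search directions, with the step size chosen to satisfy the Wolfe conditions. The sketch below shows this generic framework only; the weights β_k^(i), the memory size m, and the line search constants δ, σ are illustrative placeholders, not the specific formula or parameter proposed in the paper.

```latex
% Generic memory gradient iteration (illustrative form only; the paper's
% specific direction formula and parameter are not given in this abstract).
\begin{align*}
  x_{k+1} &= x_k + \alpha_k d_k, \qquad g_k = \nabla f(x_k),\\
  d_k &=
  \begin{cases}
    -g_k, & k = 0,\\[2pt]
    -g_k + \displaystyle\sum_{i=1}^{\min\{k,\,m\}} \beta_k^{(i)}\, d_{k-i}, & k \ge 1,
  \end{cases}
\end{align*}
% where the step size $\alpha_k > 0$ is required to satisfy the Wolfe
% conditions for constants $0 < \delta < \sigma < 1$:
\begin{align*}
  f(x_k + \alpha_k d_k) &\le f(x_k) + \delta\, \alpha_k\, g_k^{\top} d_k,
    && \text{(sufficient decrease)}\\
  \nabla f(x_k + \alpha_k d_k)^{\top} d_k &\ge \sigma\, g_k^{\top} d_k.
    && \text{(curvature condition)}
\end{align*}
```

The global convergence result stated in the abstract concerns iterations of this general type in which every d_k is a descent direction, i.e. g_k^⊤ d_k < 0.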

Original language: English
Pages (from-to): 325-346
Number of pages: 22
Journal: Computational Optimization and Applications
Volume: 35
Issue number: 3
DOI
Publication status: Published - Nov 2006
Externally published: Yes

ASJC Scopus subject areas

  • Control and Optimization
  • Computational Mathematics
  • Applied Mathematics

