A memory gradient method without line search for unconstrained optimization

Research output: Article › peer-review

3 Citations (Scopus)

Abstract

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The idea of memory gradient methods was first proposed by Miele and Cantrell (1969) and subsequently extended by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method that generates a descent search direction for the objective function at every iteration and converges globally to the solution, provided the Wolfe conditions are satisfied within the line search strategy. On the other hand, Sun and Zhang (2001) proposed a particular choice of step size and applied it to the conjugate gradient method. In this paper, we apply the step size proposed by Sun and Zhang to the memory gradient method of Narushima and Yabe and establish its global convergence.
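
To make the construction concrete, here is a minimal Python sketch of a memory gradient iteration that replaces the line search with a closed-form step size in the spirit of Sun and Zhang (2001). The damping rule for the memory coefficients, the constants gamma_scale, delta, and Q, and the helper memory_gradient are illustrative assumptions for this sketch, not the exact formulas of Narushima and Yabe or of Sun and Zhang.

```python
import numpy as np

def memory_gradient(grad, x0, m=3, gamma_scale=0.5, delta=0.5,
                    tol=1e-6, max_iter=2000):
    """Memory gradient iteration with an analytic step size (no line search).

    The direction mixes the steepest-descent direction -g_k with the m most
    recent directions, damped so that g_k^T d_k <= -(1 - gamma_scale)||g_k||^2,
    i.e. d_k is always a descent direction. The step size is taken in closed
    form, alpha_k = -delta * g_k^T d_k / (Q ||d_k||^2), echoing the
    line-search-free choice of Sun and Zhang; the published formulas differ.
    """
    x = np.asarray(x0, dtype=float)
    past = []  # previous directions d_{k-1}, ..., d_{k-m}
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = g @ g
        if np.sqrt(gnorm2) < tol:
            break
        d = -g
        gamma = gamma_scale / max(len(past), 1)
        for dp in past:
            # |g^T dp| <= denom, so each memory term perturbs g^T d
            # by at most gamma * ||g||^2, preserving descent.
            denom = abs(g @ dp) + np.sqrt(gnorm2) * np.linalg.norm(dp)
            d += (gamma * gnorm2 / denom) * dp
        # Closed-form step size; Q = 1 stands in for a curvature estimate.
        Q = 1.0
        alpha = -delta * (g @ d) / (Q * (d @ d))
        x = x + alpha * d
        past = [d] + past[:m - 1]
    return x

# Usage: minimize the convex quadratic f(x) = x^T A x / 2 - b^T x,
# whose unique minimizer solves A x = b.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])
x = memory_gradient(lambda x: A @ x - b, x0=np.zeros(2))
print(x, np.linalg.solve(A, b))  # the two should agree closely
```

In this sketch the damping keeps g_k^T d_k <= -(1/2)||g_k||^2, so every search direction is a descent direction even though no line search is performed, mirroring the descent property emphasized in the abstract.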

Original language: English
Pages (from-to): 191-206
Number of pages: 16
Journal: SUT Journal of Mathematics
Volume: 42
Issue number: 2
Publication status: Published - 1 Dec 2006
Externally published: Yes

ASJC Scopus subject areas

  • Mathematics (all)
