A memory gradient method without line search for unconstrained optimization

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. The idea of memory gradient methods was first proposed by Miele and Cantrell (1969) and subsequently extended by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method that generates a descent search direction for the objective function at every iteration and converges globally to a solution, provided the Wolfe conditions are satisfied within the line search strategy. Sun and Zhang (2001), on the other hand, proposed a particular choice of step size and applied it to the conjugate gradient method. In this paper, we apply the step size proposed by Sun and Zhang to the memory gradient method of Narushima and Yabe, thereby dispensing with the line search, and establish global convergence of the resulting method.
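The abstract combines two ingredients: a memory gradient direction that is kept a descent direction at every iteration, and a closed-form step size in the spirit of Sun and Zhang (2001) that replaces the Wolfe line search. The Python sketch below is one plausible reading of such an iteration under stated assumptions, not the paper's exact method: the memory weights beta, the descent safeguard, the step-size formula alpha_k = -delta * g_k^T d_k / (Q * ||d_k||^2), and the constants delta and Q (a Lipschitz-type parameter for the gradient) are illustrative choices.

import numpy as np

def memory_gradient_sketch(grad, x0, m=3, delta=0.5, Q=100.0, tol=1e-6, max_iter=2000):
    # Illustrative sketch: memory gradient direction + closed-form step size,
    # with no line search. The formulas are stand-ins, not the exact rules of
    # Narushima-Yabe (2006) or Sun-Zhang (2001).
    x = np.asarray(x0, dtype=float)
    past_dirs = []                                # memory of the last m directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = -g
        for d_old in past_dirs:                   # mix in remembered directions
            beta = 0.5 * np.dot(g, g) / (
                len(past_dirs) * (np.dot(g, g) + np.dot(d_old, d_old)))
            d = d + beta * d_old
        if np.dot(g, d) >= 0:                     # safeguard: keep d a descent direction
            d = -g
        # Closed-form step size (assumed Sun-Zhang-style form); Q plays the
        # role of a Lipschitz-type constant for the gradient.
        alpha = -delta * np.dot(g, d) / (Q * np.dot(d, d))
        x = x + alpha * d
        past_dirs = (past_dirs + [d])[-m:]
    return x

# Usage on a convex quadratic f(x) = 0.5 * x^T A x (illustrative):
A = np.diag([1.0, 10.0, 100.0])
x_star = memory_gradient_sketch(lambda x: A @ x, np.ones(3))
print(x_star)   # close to the origin, the unique minimizer

Under the assumed formulas, the step size is positive whenever g_k^T d_k < 0, which the safeguard enforces, and no extra function evaluations are needed once the gradient is available; avoiding the cost of a line search is the practical motivation the abstract points to.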

Original language: English
Pages (from-to): 191-206
Number of pages: 16
Journal: SUT Journal of Mathematics
Volume: 42
Issue number: 2
Publication status: Published - 2006 Dec 1
Externally published: Yes

Keywords

  • Global convergence
  • Large scale problems
  • Memory gradient method
  • Nonlinear programming
  • Optimization

ASJC Scopus subject areas

  • Mathematics (all)

Cite this

A memory gradient method without line search for unconstrained optimization. / Narushima, Yasushi.

In: SUT Journal of Mathematics, Vol. 42, No. 2, 01.12.2006, p. 191-206.

Research output: Contribution to journal › Article

@article{2ff9bbb21e194a7ca4b3bb14e39ca4ed,
title = "A memory gradient method without line search for unconstrained optimization",
abstract = "Memory gradient methods are used for unconstrained optimization, especially large scale problems. The first idea of memory gradient methods was proposed by Miele and Cantrell (1969) and subsequently extended by Cragg and Levy (1969). Recently Narushima and Yabe (2006) proposed a new memory gradient method which generates a descent search direction for the objective function at every iteration and converges globally to the solution if the Wolfe conditions are satisfied within the line search strategy. On the other hand, Sun and Zhang (2001) proposed a particular choice of step size, and they applied it to the conjugate gradient method. In this paper, we apply the choice of the step size proposed by Sun and Zhang to the memory gradient method proposed by Narushima and Yabe and establish its global convergence.",
keywords = "Global convergence, Large scale problems, Memory gradient method, Nonlinear programming, Optimization",
author = "Yasushi Narushima",
year = "2006",
month = "12",
day = "1",
language = "English",
volume = "42",
pages = "191--206",
journal = "SUT Journal of Mathematics",
issn = "0916-5746",
publisher = "Science University of Tokyo",
number = "2",

}
