A nonmonotone memory gradient method for unconstrained optimization

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. They were first proposed by Miele and Cantrell (1969) and by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method that generates a descent search direction for the objective function at every iteration and converges globally to the solution when the Wolfe conditions are satisfied within the line search strategy. In this paper, we propose a nonmonotone memory gradient method based on this work. We show that our method converges globally to the solution. Our numerical results show that the proposed method is efficient for some standard test problems, provided a parameter included in the method is chosen suitably.
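
To illustrate the general idea only (this is not the algorithm of the paper), the Python sketch below combines a simple one-step memory gradient direction with a nonmonotone Armijo-type backtracking line search in the spirit of Grippo, Lampariello, and Lucidi. All names and parameter values (memory length M, sufficient-decrease constant sigma, backtracking factor beta, memory weight gamma) are illustrative assumptions, not values from the paper.

# Minimal sketch, assuming a generic memory gradient direction and a
# nonmonotone Armijo test; not Narushima's specific method.
import numpy as np

def nonmonotone_memory_gradient(f, grad, x0, M=10, sigma=1e-4,
                                beta=0.5, gamma=0.4, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    d_prev = None
    f_hist = [f(x)]                      # recent function values for the nonmonotone test
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        # Memory gradient direction: steepest descent plus a damped
        # contribution of the previous direction (one-step memory).
        if d_prev is None:
            d = -g
        else:
            d = -g + gamma * d_prev
            # Safeguard: fall back to steepest descent if d is not a
            # sufficient descent direction.
            if g @ d > -1e-10 * (g @ g):
                d = -g
        # Nonmonotone Armijo backtracking: compare against the maximum of
        # the last M function values rather than f(x) alone.
        f_max = max(f_hist[-M:])
        alpha = 1.0
        while f(x + alpha * d) > f_max + sigma * alpha * (g @ d) and alpha > 1e-16:
            alpha *= beta
        x = x + alpha * d
        d_prev = d
        f_hist.append(f(x))
    return x

# Example usage on the Rosenbrock function.
if __name__ == "__main__":
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    print(nonmonotone_memory_gradient(f, grad, [-1.2, 1.0]))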

Original language: English
Pages (from-to): 31-45
Number of pages: 15
Journal: Journal of the Operations Research Society of Japan
Volume: 50
Issue number: 1
DOI: 10.15807/jorsj.50.31
Publication status: Published - 2007 Jan 1
Externally published: Yes


Keywords

  • Global convergence
  • Large scale problems
  • Memory gradient method
  • Nonlinear programming
  • Nonmonotone line search
  • Optimization

ASJC Scopus subject areas

  • Decision Sciences (all)
  • Management Science and Operations Research

Cite this

Narushima, Y. (2007). A nonmonotone memory gradient method for unconstrained optimization. Journal of the Operations Research Society of Japan, 50(1), 31-45. https://doi.org/10.15807/jorsj.50.31

@article{a46a0e14096f46cbb14eea0bc513a188,
title = "A nonmonotone memory gradient method for unconstrained optimization",
abstract = "Memory gradient methods are used for unconstrained optimization, especially large scale problems. They were first proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). Recently Narushima and Yabe (2006) proposed a new memory gradient method which generates a descent search direction for the objective function at every iteration and converges globally to the solution if the Wolfe conditions are satisfied within the line search strategy. In this paper, we propose a nonmonotone memory gradient method based on this work. We show that our method converges globally to the solution. Our numerical results show that the proposed method is efficient for some standard test problems if we choose a parameter included in the method suitably.",
keywords = "Global convergence, Large scale problems, Memory gradient method, Nonlinear programming, Nonmonotone line search, Optimization",
author = "Yasushi Narushima",
year = "2007",
month = "1",
day = "1",
doi = "10.15807/jorsj.50.31",
language = "English",
volume = "50",
pages = "31--45",
journal = "Journal of the Operations Research Society of Japan",
issn = "0453-4514",
publisher = "Operations Research Society of Japan",
number = "1",

}
