Abstract
Memory gradient methods are used for unconstrained optimization, especially for large-scale problems. They were first proposed by Miele and Cantrell (1969) and by Cragg and Levy (1969). Recently, Narushima and Yabe (2006) proposed a new memory gradient method that generates a descent search direction for the objective function at every iteration and converges globally to the solution, provided the Wolfe conditions are satisfied within the line search strategy. In this paper, we propose a nonmonotone memory gradient method based on this work and show that it converges globally to the solution. Our numerical results show that the proposed method is efficient on some standard test problems when a parameter included in the method is chosen suitably.
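The abstract combines two ingredients: a memory gradient search direction and a nonmonotone line search. As an illustration only, the sketch below pairs a generic memory gradient direction (negative gradient plus a scaled previous direction) with a max-of-recent-values Armijo backtracking rule; the beta formula, the descent safeguard, and all parameter names (M, gamma, c1) are assumptions made for this sketch, not the specific update or Wolfe-based line search analyzed in the paper.

```python
import numpy as np

def nonmonotone_memory_gradient(f, grad, x0, M=10, gamma=0.4, c1=1e-4,
                                tol=1e-6, max_iter=5000):
    """Illustrative sketch only: a generic memory gradient direction with a
    nonmonotone (max-of-recent-values) Armijo backtracking line search.
    The beta rule, safeguard, and parameter names are assumptions for the
    sketch, not the method proposed in the paper."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                       # first iteration: steepest descent
    recent_f = [f(x)]            # window of the last M function values
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Memory term: blend the previous direction into the new one.
        # With 0 < gamma < 1 this keeps g^T d <= -(1 - gamma)||g||^2 < 0.
        beta = gamma * g.dot(g) / max(abs(g.dot(d)), 1e-12)
        d = -g + beta * d
        if g.dot(d) > -1e-12 * g.dot(g):   # safeguard: reset to steepest descent
            d = -g
        # Nonmonotone Armijo test: compare against the worst of the last M
        # function values instead of f(x_k) alone, allowing occasional increases.
        f_ref = max(recent_f)
        alpha, gTd = 1.0, g.dot(d)
        while f(x + alpha * d) > f_ref + c1 * alpha * gTd and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        g = grad(x)
        recent_f.append(f(x))
        if len(recent_f) > M:
            recent_f.pop(0)
    return x

# Example usage on the extended Rosenbrock function (a standard test problem).
if __name__ == "__main__":
    def rosen(x):
        return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

    def rosen_grad(x):
        g = np.zeros_like(x)
        g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
        g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
        return g

    x_star = nonmonotone_memory_gradient(rosen, rosen_grad, np.full(10, -1.2))
    print(rosen(x_star))
```

The window size M controls how nonmonotone the line search is: M = 1 reduces the acceptance test to the ordinary (monotone) Armijo rule, while larger M tolerates temporary increases in the objective, which is what the nonmonotone strategy exploits on ill-conditioned problems.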
Original language | English |
---|---|
Pages (from-to) | 31-45 |
Number of pages | 15 |
Journal | Journal of the Operations Research Society of Japan |
Volume | 50 |
Issue number | 1 |
DOIs | https://doi.org/10.15807/jorsj.50.31 |
Publication status | Published - 2007 Jan 1 |
Externally published | Yes |
Keywords
- Global convergence
- Large scale problems
- Memory gradient method
- Nonlinear programming
- Nonmonotone line search
- Optimization
ASJC Scopus subject areas
- Decision Sciences (all)
- Management Science and Operations Research
Cite this
A nonmonotone memory gradient method for unconstrained optimization. / Narushima, Yasushi.
In: Journal of the Operations Research Society of Japan, Vol. 50, No. 1, 2007, p. 31-45. ISSN 0453-4514. DOI: 10.15807/jorsj.50.31.