Abstract
This paper proposes a new backpropagation-type learning algorithm that incorporates a selection capability for hidden-layer units. The algorithm is simple and effective in reducing both the number of training cycles required and the number of hidden-layer units. The method selects the "worst" units in the hidden layer and eliminates them. Before convergence is reached, resetting the connection weights of the selected "bad" units to small random values lets the network escape from a local minimum and shortens learning time. As the network converges, preferentially deleting units, starting with the "worst," reduces the size of the hidden layer. This reduction increases the generalization capability of the network and lowers computation costs. Computer simulations demonstrate the superior performance of the proposed algorithm.
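The abstract does not specify how the "worst" unit is scored, so the following Python sketch assumes a hypothetical badness measure based on outgoing-weight magnitude; the class and method names (SelectiveMLP, worst_unit, reset_unit, delete_unit) and all thresholds are illustrative, not taken from the paper.

```python
# A minimal sketch of the two uses of "worst"-unit selection described
# above: resetting a bad unit's weights to escape a local minimum, and
# deleting bad units once the network is converging. The paper's actual
# badness criterion is not given in the abstract; here we assume
# (hypothetically) that the unit with the smallest total outgoing-weight
# magnitude contributes least.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SelectiveMLP:
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0.0, 0.3, (n_hidden, n_in))   # input -> hidden
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.3, (n_out, n_hidden))  # hidden -> output
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def backprop(self, X, T):
        # Standard backpropagation for a one-hidden-layer sigmoid network.
        h = sigmoid(X @ self.W1.T + self.b1)
        y = sigmoid(h @ self.W2.T + self.b2)
        dy = (y - T) * y * (1.0 - y)            # output-layer delta
        dh = (dy @ self.W2) * h * (1.0 - h)     # hidden-layer delta
        self.W2 -= self.lr * dy.T @ h / len(X)
        self.b2 -= self.lr * dy.mean(axis=0)
        self.W1 -= self.lr * dh.T @ X / len(X)
        self.b1 -= self.lr * dh.mean(axis=0)
        return 0.5 * ((y - T) ** 2).sum(axis=1).mean()

    def worst_unit(self):
        # Assumed badness score: smallest total outgoing-weight magnitude.
        return int(np.argmin(np.abs(self.W2).sum(axis=0)))

    def reset_unit(self, j):
        # Before convergence: re-randomize the worst unit's weights to
        # small values, giving the network a chance to leave a local minimum.
        self.W1[j] = rng.normal(0.0, 0.1, self.W1.shape[1])
        self.b1[j] = 0.0
        self.W2[:, j] = rng.normal(0.0, 0.1, self.W2.shape[0])

    def delete_unit(self, j):
        # While converging: remove the worst unit to shrink the hidden layer.
        self.W1 = np.delete(self.W1, j, axis=0)
        self.b1 = np.delete(self.b1, j)
        self.W2 = np.delete(self.W2, j, axis=1)

# Illustrative training loop on XOR: reset while the error is still high,
# prune once the error is low, never going below two hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
net = SelectiveMLP(n_in=2, n_hidden=6, n_out=1)
for epoch in range(1, 5001):
    err = net.backprop(X, T)
    if epoch % 500 == 0:
        j = net.worst_unit()
        if err > 0.05:
            net.reset_unit(j)
        elif net.W1.shape[0] > 2:
            net.delete_unit(j)
```

As in the abstract, one selection mechanism serves two roles: early resets shorten learning time by dislodging the network from local minima, while late deletions shrink the hidden layer to improve generalization and reduce computation costs.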
| Original language | English |
|---|---|
| Pages (from-to) | 46-54 |
| Number of pages | 9 |
| Journal | Systems and Computers in Japan |
| Volume | 23 |
| Issue number | 8 |
| Publication status | Published - 1992 |
Keywords
- neural network
- acceleration of convergence
- backpropagation
- elimination of hidden units
ASJC Scopus subject areas
- Theoretical Computer Science
- Information Systems
- Hardware and Architecture
- Computational Theory and Mathematics