Abstract
The purpose of this paper is to optimize the structure of hierarchical neural networks. Here, structure optimization means representing a neural network with the minimum number of nodes and connections, and it is performed by eliminating unnecessary connections from a trained neural network by means of a genetic algorithm. We focus on neural networks specialized for image recognition problems. The flow of the proposed method is as follows. First, the Walsh-Hadamard transform is applied to images for feature extraction. Second, the neural network is trained on the extracted features with the back-propagation algorithm. After training, unnecessary connections are eliminated from the trained network by a genetic algorithm. Finally, the network is retrained to recover from the performance degradation caused by connection elimination. To validate the usefulness of the proposed method, face recognition and texture classification examples are used. The experimental results indicate that the proposed method generates a compact neural network while maintaining generalization performance.
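The first step of the pipeline, Walsh-Hadamard feature extraction, can be sketched with a standard fast Walsh-Hadamard transform (FWHT) over a flattened image block of length 2^n. This is a minimal illustration of the general transform only, not the authors' implementation; the function name `fwht` and the butterfly formulation are assumptions.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a length-2^n vector (natural order).

    Repeatedly applies the 2-point butterfly (a+b, a-b) over doubling
    strides, the standard O(n log n) decomposition of the transform.
    """
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

# A constant block concentrates all energy in the first coefficient:
# fwht([1, 1, 1, 1]) -> [4, 0, 0, 0]
```

In a setup like the paper's, an image block would be flattened, transformed with `fwht`, and a subset of low-order coefficients kept as the feature vector fed to the neural network; which coefficients are retained is a design choice not specified here.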
Original language | English |
---|---|
Pages (from-to) | 28-36 |
Number of pages | 9 |
Journal | Electronics and Communications in Japan |
Volume | 95 |
Issue | 3 |
DOI | |
Publication status | Published - Mar 2012 |
ASJC Scopus subject areas
- Signal Processing
- Physics and Astronomy (General)
- Computer Networks and Communications
- Electrical and Electronic Engineering
- Applied Mathematics