Improvement algorithm for approximate incremental learning

Tadahiro Oyama, H. Kipsang Choge, Stephen Karungaru, Satoru Tsuge, Yasue Mitsukura, Minoru Fukumi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

This paper presents an improved version of Incremental Simple-PCA, a fast incremental learning algorithm based on Simple-PCA. Incremental Simple-PCA does not need to retain all training samples, because the eigenvector is updated directly from newly arriving samples, and it computes the eigenvector quickly because no matrix calculations are required. Its weakness, however, is the convergence of the eigenvector. In this paper we therefore improve the algorithm with respect to convergence performance. Computer simulations on UCI datasets verify the effectiveness of the proposed algorithm: compared with Incremental Simple-PCA, it achieves better recognition accuracy and better convergence of the eigenvector.
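
The abstract does not spell out the update rule itself, but the general flavour of a Simple-PCA-style iteration and a toy incremental variant can be sketched as follows. This is an illustrative sketch only, not the authors' exact algorithm; the function and class names below are hypothetical. The batch routine estimates the leading eigenvector by repeatedly summing sign-aligned samples and renormalizing (no covariance matrix is ever formed), and the incremental class folds each new sample into a running sum so that previously seen samples need not be stored.

import numpy as np

def simple_pca_leading_vector(X, n_iter=50, rng=None):
    # Batch Simple-PCA-style fixed-point iteration for the leading eigenvector.
    # No covariance matrix is formed: samples are sign-aligned with the current
    # estimate, summed, and the sum is renormalized.
    rng = np.random.default_rng() if rng is None else rng
    X = X - X.mean(axis=0)                      # work on centered data
    v = rng.normal(size=X.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        s = (np.sign(X @ v)[:, None] * X).sum(axis=0)
        v = s / np.linalg.norm(s)
    return v

class IncrementalLeadingEigenvector:
    # Toy incremental variant: a running aligned sum is updated per sample,
    # so earlier samples never have to be revisited or stored.
    def __init__(self, dim, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.v = rng.normal(size=dim)
        self.v /= np.linalg.norm(self.v)
        self.s = np.zeros(dim)

    def update(self, x):
        # x is assumed to be an already-centered sample vector
        a = np.sign(float(x @ self.v)) or 1.0   # avoid a zero contribution
        self.s += a * x
        n = np.linalg.norm(self.s)
        if n > 0.0:
            self.v = self.s / n
        return self.v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5)) * np.array([3.0, 1.0, 0.5, 0.2, 0.1])
    inc = IncrementalLeadingEigenvector(dim=5, rng=rng)
    for x in X - X.mean(axis=0):
        v = inc.update(x)
    print(v)   # should roughly align (up to sign) with the first axis

The sign-alignment step is what lets this family of methods avoid eigendecomposition and matrix arithmetic; according to the abstract, the paper's contribution is a modification that improves how quickly such an incrementally updated eigenvector converges.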

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 520-529
Number of pages: 10
Volume: 5863 LNCS
Edition: PART 1
DOIs: 10.1007/978-3-642-10677-4_59
Publication status: Published - 2009
Externally published: Yes
Event: 16th International Conference on Neural Information Processing, ICONIP 2009 - Bangkok, Thailand
Duration: 2009 Dec 1 – 2009 Dec 5

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Number: PART 1
Volume: 5863 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 16th International Conference on Neural Information Processing, ICONIP 2009
Country: Thailand
City: Bangkok
Period: 09/12/1 – 09/12/5

Keywords

  • incremental learning
  • dimensional reduction
  • pattern recognition
  • PCA
  • Simple-PCA

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Oyama, T., Choge, H. K., Karungaru, S., Tsuge, S., Mitsukura, Y., & Fukumi, M. (2009). Improvement algorithm for approximate incremental learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (PART 1 ed., Vol. 5863 LNCS, pp. 520-529). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 5863 LNCS, No. PART 1). https://doi.org/10.1007/978-3-642-10677-4_59

@inproceedings{0deb6374a89148a4aa6c0a73d116ae6f,
title = "Improvement algorithm for approximate incremental learning",
abstract = "This paper presents an improved algorithm of Incremental Simple-PCA. The Incremental Simple-PCA is a fast incremental learning algorithm based on Simple-PCA. This algorithm need not hold all training samples because it enables update of an eigenvector according to incremental samples. Moreover, this algorithm has an advantage that it can calculate the eigenvector at high-speed because matrix calculation is not needed. However, it had a problem in convergence performance of the eigenvector. Thus, in this paper, we try the improvement of this algorithm from the aspect of convergence performance. We performed computer simulations using UCI datasets to verify the effectiveness of the proposed algorithm. As a result, its availability was confirmed from the standpoint of recognition accuracy and convergence performance of the eigenvector compared with the Incremental Simple-PCA.",
keywords = "Cincremental learning Cdimensional reduction Cpattern recognition, PCA, Simple-PCA",
author = "Tadahiro Oyama and Choge, {H. Kipsang} and Stephen Karungaru and Satoru Tsuge and Yasue Mitsukura and Minoru Fukumi",
year = "2009",
doi = "10.1007/978-3-642-10677-4_59",
language = "English",
isbn = "3642106765",
volume = "5863 LNCS",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
number = "PART 1",
pages = "520--529",
booktitle = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
edition = "PART 1",

}
