Adaptive quick learning for associative memory

Tomoshige Yoshihara, Masafumi Hagiwara

Research output: Contribution to journal › Article

Abstract

Bidirectional associative memory (BAM) is a form of heteroassociative memory that can recall and restore patterns. Because it is based on Hebbian learning, however, its storage capacity is very low. A training paradigm called the Pseudo-Relaxation Learning Algorithm (PRLAB) greatly increases the memory capacity of BAM, and the Quick Learning algorithm, which combines Hebbian learning with PRLAB, further increases storage capacity and robustness to noisy inputs while greatly reducing the number of training iterations. In these learning algorithms, however, if no solution domain exists for the training set, learning of the connection weights does not converge and recall of the training patterns is not guaranteed. This paper proposes a new method that solves this problem: training patterns are multimodalized by attaching random numbers to them when it is estimated that learning is not converging. Even if the simultaneous inequalities derived from the training patterns are contradictory, convergence is thus artificially forced and correct recall becomes possible. Simulations indicate the effectiveness of the new method both in the presence and in the absence of untrainable patterns.
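For reference, the classical Hebbian-learning BAM that the paper takes as its starting point can be sketched as follows. The pattern pairs and layer sizes below are illustrative toy data, and the sketch shows only the correlation-learning baseline whose low capacity motivates PRLAB and Quick Learning, not the proposed adaptive method itself.

```python
import numpy as np

def sgn(h):
    # Bipolar threshold; ties map to +1 so units never output zero
    return np.where(h >= 0, 1, -1)

def train_hebbian(pairs):
    # Correlation (Hebbian) learning: W is the sum of outer products
    # of the bipolar pattern pairs to be stored
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def recall(W, x, steps=10):
    # Bidirectional recall: pass the state back and forth between
    # the two layers until it settles
    y = sgn(x @ W)
    for _ in range(steps):
        x = sgn(W @ y)
        y = sgn(x @ W)
    return x, y

# Two mutually orthogonal pattern pairs (hypothetical toy data)
x1 = np.array([1, 1, 1, 1]);   y1 = np.array([1, 1, -1, -1])
x2 = np.array([1, -1, 1, -1]); y2 = np.array([1, -1, -1, 1])
W = train_hebbian([(x1, y1), (x2, y2)])

noisy = x1.copy()
noisy[0] = -1  # flip one bit of x1
```

Here a clean input `x1` recalls `y1` exactly, and the one-bit-corrupted input is corrected over the bidirectional iterations; with non-orthogonal or numerous pairs, cross-talk between the outer products causes the recall failures that the relaxation-based learning rules address.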

Original language: English
Pages (from-to): 53-61
Number of pages: 9
Journal: Systems and Computers in Japan
Volume: 32
Issue number: 1
DOI: 10.1002/1520-684X(200101)32:1<53::AID-SCJ7>3.0.CO;2-M
Publication status: Published - Jan 2001

Fingerprint

Associative memory, bidirectional associative memory, Hebbian learning, learning algorithms, storage capacity, robustness, convergence, iteration, training patterns, random numbers, simulation

ASJC Scopus subject areas

  • Hardware and Architecture
  • Information Systems
  • Theoretical Computer Science
  • Computational Theory and Mathematics

Cite this

Yoshihara, Tomoshige; Hagiwara, Masafumi. Adaptive quick learning for associative memory. In: Systems and Computers in Japan, Vol. 32, No. 1, Jan. 2001, pp. 53-61.
@article{45a5bc83d0944e20b5ef802ccd6bf562,
  title     = "Adaptive quick learning for associative memory",
  author    = "Tomoshige Yoshihara and Masafumi Hagiwara",
  year      = "2001",
  month     = jan,
  doi       = "10.1002/1520-684X(200101)32:1<53::AID-SCJ7>3.0.CO;2-M",
  language  = "English",
  journal   = "Systems and Computers in Japan",
  issn      = "0882-1666",
  publisher = "John Wiley and Sons Inc.",
  volume    = "32",
  number    = "1",
  pages     = "53--61",
}
