Bidirectional associative memory (BAM) is a form of heteroassociative memory that can recall and restore patterns. As a Hebbian learning-based memory, however, it suffers from very low storage capacity. A training paradigm called the Pseudo-Relaxation Learning Algorithm for BAM (PRLAB) greatly increases this capacity. The Quick Learning algorithm, which combines Hebbian learning with PRLAB, further increases storage capacity and robustness to noisy inputs while greatly reducing the number of training iterations. In these learning algorithms, however, if no solution domain exists for the training set, learning of the connection weights does not converge and recall of the training patterns is not guaranteed. This paper proposes a new method for solving this problem: training patterns are multimodalized by attaching random numbers to them whenever learning is estimated not to be converging. Even if the simultaneous inequalities derived from the training patterns are contradictory, convergence is thus artificially forced and correct recall becomes possible. Simulations confirm the effectiveness of the new method both in the presence and in the absence of untrainable patterns.
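To make the baseline concrete, the following is a minimal sketch of a classical Kosko-style BAM with Hebbian (outer-product) learning and bidirectional recall, the memory whose low capacity the abstract refers to. It is an illustrative implementation under standard textbook assumptions (bipolar patterns, sign-threshold updates), not the paper's proposed multimodalization method; function names are ours.

```python
import numpy as np

def train_bam(pairs):
    """Hebbian outer-product learning: W = sum_k x_k y_k^T."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def recall(W, x, max_iters=10):
    """Bidirectional recall: alternate layer updates until a fixed point."""
    x = np.asarray(x, dtype=float)
    y = np.sign(W.T @ x)
    for _ in range(max_iters):
        x_new = np.sign(W @ y)
        if np.array_equal(x_new, x):
            break
        x = x_new
        y = np.sign(W.T @ x)
    return x, y

# Two bipolar training pairs (illustrative data)
pairs = [
    (np.array([1, 1, 1, -1]), np.array([1, -1, 1])),
    (np.array([1, -1, -1, 1]), np.array([-1, 1, 1])),
]
W = train_bam(pairs)
```

Recall of a stored pattern, or of a lightly corrupted version of it, settles to the trained pair; with too many or contradictory pairs this Hebbian rule fails, which is what PRLAB-style iterative learning addresses.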
Journal: Systems and Computers in Japan
Publication status: Published - 1 January 2001