We propose simple recurrent neural networks as probabilistic models for representing and predicting time-sequences. The proposed model has the advantage of providing forecasts that consist of probability densities instead of single guesses of future values. It turns out that the model can be viewed as a generalized hidden Markov model with a distributed representation. We devise an efficient learning algorithm for estimating the parameters of the model using dynamic programming. We present some very preliminary simulation results to demonstrate the potential capabilities of the model. The present analysis provides a new probabilistic formulation of learning in simple recurrent networks.
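The abstract describes a simple recurrent network whose outputs are probability densities over future values rather than point predictions. As a rough illustration only, the sketch below shows an Elman-style simple recurrent network whose softmax output layer yields a predictive distribution over the next symbol at each step; the class and parameter names are invented for this example, and the paper's dynamic-programming learning algorithm is not reproduced here.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax: shift by max before exponentiating
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

class SimpleRecurrentNet:
    """Elman-style simple recurrent network whose output layer is a
    softmax, so each prediction is a probability distribution over
    the next symbol instead of a single guess (illustrative sketch;
    not the paper's exact model or training procedure)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_xh = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.W_hh = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_hy = rng.normal(0.0, 0.1, (n_out, n_hidden))
        self.n_hidden = n_hidden

    def forward(self, xs):
        """For each input x_t, return a predictive distribution
        p(x_{t+1} | x_1, ..., x_t)."""
        h = np.zeros(self.n_hidden)  # recurrent (context) state
        dists = []
        for x in xs:
            h = np.tanh(self.W_xh @ x + self.W_hh @ h)
            dists.append(softmax(self.W_hy @ h))
        return dists

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# run a toy one-hot symbol sequence through the network
net = SimpleRecurrentNet(n_in=3, n_hidden=5, n_out=3)
seq = [one_hot(i, 3) for i in [0, 1, 2, 1]]
dists = net.forward(seq)
```

Each element of `dists` is a length-3 vector of non-negative entries summing to one, i.e. a full predictive density over the next symbol, which is the probabilistic-forecast property the abstract emphasizes.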
Number of pages: 6
Publication status: Published - 1995 Dec 1
Event: Proceedings of the 1995 IEEE International Conference on Neural Networks. Part 1 (of 6) - Perth, Australia
Duration: 1995 Nov 27 → 1995 Dec 1