Augmented Echo State Networks with a feature layer and a nonlinear readout

Arnaud Rachez, Masafumi Hagiwara

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

1 Citation (Scopus)

Abstract

Echo State Networks (ESNs) are an alternative to fully trained Recurrent Neural Networks (RNNs), showing state-of-the-art performance when applied to time series prediction. However, they have seldom been applied to abstract tasks, and in the case of language modeling they require far more units than traditional RNNs to achieve similar performance. In this paper we propose a novel architecture that extends a conventional Echo State Network with a pre-recurrent feature layer and a nonlinear readout. The features are learned in a supervised way using a computationally cheap version of gradient descent and automatically capture grammatical similarity between words. They modify the dynamics of the network in a way that allows it to significantly outperform an ESN alone. The addition of a nonlinear readout is also investigated, making the global system similar to a feedforward network with a memory layer.
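The abstract specifies the architecture only at a high level: a one-hot word input is first mapped through a learned feature layer, fed into a fixed random reservoir, and decoded by a nonlinear readout. The NumPy sketch below is one plausible reading of that pipeline, not the authors' implementation; the dimensions, the tanh feature nonlinearity, the softmax readout, and the simplified update rule in train_step are all illustrative assumptions.

# A minimal sketch of the augmented ESN pipeline, assuming standard ESN
# dynamics. All names, dimensions, and update rules below are illustrative
# assumptions, not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

V, F, R = 50, 10, 200   # vocabulary size, feature dimension, reservoir size

# Pre-recurrent feature layer: a learned projection of one-hot word inputs.
W_feat = rng.normal(0.0, 0.1, (F, V))

# Fixed random reservoir, rescaled to spectral radius 0.9 so the echo state
# property (fading memory) plausibly holds.
W_in = rng.uniform(-0.5, 0.5, (R, F))
W_res = rng.normal(0.0, 1.0, (R, R))
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()

# Nonlinear readout: softmax over a trained linear map (a plain linear
# readout would be the conventional ESN choice).
W_out = rng.normal(0.0, 0.1, (V, R))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(x, u_onehot):
    """One time step: feature layer -> reservoir -> nonlinear readout."""
    f = np.tanh(W_feat @ u_onehot)            # learned word features
    x = np.tanh(W_in @ f + W_res @ x)         # reservoir state update
    y = softmax(W_out @ x)                    # next-word distribution
    return x, y

def train_step(x, u_onehot, target_onehot, lr=0.05):
    """Cheap gradient-style update of readout and feature layer.

    A deliberately crude stand-in for the paper's "computationally cheap
    version of gradient descent"; only the readout gradient is exact.
    """
    global W_out, W_feat
    x, y = step(x, u_onehot)
    err = y - target_onehot                   # softmax cross-entropy gradient
    W_out -= lr * np.outer(err, x)            # exact readout update
    # Approximate feature update: route the output error back through the
    # input weights, ignoring the recurrent part of the gradient.
    g = W_in.T @ (W_out.T @ err)
    W_feat[:, u_onehot.argmax()] -= lr * np.tanh(g)
    return x

# Usage on a toy random "sentence" of word indices.
x = np.zeros(R)
seq = rng.integers(0, V, size=20)
eye = np.eye(V)
for t in range(len(seq) - 1):
    x = train_step(x, eye[seq[t]], eye[seq[t + 1]])

In a faithful replication, the readout would more likely be trained with ridge regression (the standard ESN recipe) or full cross-entropy gradients, and the feature-learning rule should be taken from the paper itself; the point here is only the data flow, with learned features before the fixed reservoir and a nonlinearity after it.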

Original language: English
Title of host publication: Proceedings of the International Joint Conference on Neural Networks
DOI: 10.1109/IJCNN.2012.6252505
Publication status: Published - 2012
Event: 2012 Annual International Joint Conference on Neural Networks, IJCNN 2012, Part of the 2012 IEEE World Congress on Computational Intelligence, WCCI 2012 - Brisbane, QLD, Australia
Duration: 2012 Jun 10 - 2012 Jun 15

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Rachez, A., & Hagiwara, M. (2012). Augmented Echo State Networks with a feature layer and a nonlinear readout. In Proceedings of the International Joint Conference on Neural Networks (Article 6252505). https://doi.org/10.1109/IJCNN.2012.6252505

@inproceedings{36844eb5f2654ae7a104dc528855459c,
title = "Augmented Echo State Networks with a feature layer and a nonlinear readout",
author = "Arnaud Rachez and Masafumi Hagiwara",
year = "2012",
doi = "10.1109/IJCNN.2012.6252505",
language = "English",
isbn = "9781467314909",
booktitle = "Proceedings of the International Joint Conference on Neural Networks",

}
