Do neural models learn systematicity of monotonicity inference in natural language?

Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui

Research output: Conference contribution

Abstract

Despite the success of language models using neural networks, it remains unclear to what extent neural models have the generalization ability to perform inferences. In this paper, we introduce a method for evaluating whether neural models can learn systematicity of monotonicity inference in natural language, namely, the regularity for performing arbitrary inferences with generalization on composition. We consider four aspects of monotonicity inferences and test whether the models can systematically interpret lexical and logical phenomena on different training/test splits. A series of experiments shows that three neural models systematically draw inferences on unseen combinations of lexical and logical phenomena when the syntactic structures of the sentences are similar between the training and test sets. However, the performance of the models decreases significantly when the structures are slightly changed in the test set while retaining all vocabulary items and constituents already appearing in the training set. This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.
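To make the evaluation setup concrete, below is a minimal sketch of the kind of compositional train/test split the abstract describes. It is not the authors' released code: the quantifier and noun inventories, the sentence template, and the `make_example` helper are all illustrative assumptions. The key property, which the sketch does reproduce, is that every lexical item and every logical operator occurs in training, while certain combinations of them are held out for testing.

```python
# Sketch (illustrative, not the authors' code) of a systematicity split:
# all words and quantifiers are seen in training; only combinations are novel.
from itertools import product

# Hypothetical inventories; the paper's actual fragments differ.
quantifiers = ["every", "some", "no"]        # logical phenomena
lexical_items = ["dog", "animal", "puppy"]   # lexical phenomena

pairs = list(product(quantifiers, lexical_items))

# Hold out one combination per quantifier: each noun and each quantifier
# still appears in the training set, only these pairings are unseen.
test_pairs = {("every", "puppy"), ("some", "animal"), ("no", "dog")}
train_pairs = [p for p in pairs if p not in test_pairs]

# Quantifiers that are downward monotone in their first (restrictor) argument.
DOWNWARD = {"every", "no"}

def make_example(quantifier, noun):
    # Toy premise/hypothesis template; the gold label follows the
    # quantifier's monotonicity direction in its restrictor.
    premise = f"{quantifier} {noun} ran"
    hypothesis = f"{quantifier} small {noun} ran"
    label = "entailment" if quantifier in DOWNWARD else "neutral"
    return premise, hypothesis, label

train = [make_example(q, n) for q, n in train_pairs]
test = [make_example(q, n) for q, n in test_pairs]
print(f"{len(train)} training examples, {len(test)} test examples")
```

Training a model on `train` and evaluating on `test` probes exactly the generalization the paper studies: success requires composing familiar lexical and logical pieces in configurations never observed together during training.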

Original language: English
Title of host publication: ACL 2020 - 58th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference
Publisher: Association for Computational Linguistics (ACL)
Pages: 6105-6117
Number of pages: 13
ISBN (electronic): 9781952148255
Publication status: Published - 2020
Event: 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020 - Virtual, Online, United States
Duration: 5 July 2020 to 10 July 2020

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (print): 0736-587X

Conference

Conference: 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020
Country/Territory: United States
City: Virtual, Online
Period: 5 July 2020 to 10 July 2020

ASJC Scopus subject areas

  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics
