HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning

Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos

Research output: Conference contribution

7 Citations (Scopus)

Abstract

Large crowdsourced datasets are widely used for training and evaluating neural models on natural language inference (NLI). Despite these efforts, neural models have a hard time capturing logical inferences, including those licensed by phrase replacements, so-called monotonicity reasoning. Since no large dataset has been developed for monotonicity reasoning, it is still unclear whether the main obstacle is the size of datasets or the model architectures themselves. To investigate this issue, we introduce a new dataset, called HELP, for handling entailments with lexical and logical phenomena. We add it to the training data for state-of-the-art neural models and evaluate them on test sets for monotonicity phenomena. The results show that our data augmentation improves the overall accuracy. We also find that the improvement is larger on monotonicity inferences with lexical replacements than on downward inferences with disjunction and modification. This suggests that some types of inferences can be improved by our data augmentation while others are immune to it.
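The augmentation step described in the abstract amounts to concatenating HELP examples with an existing NLI training corpus before fine-tuning a neural model. The following minimal sketch illustrates that setup; it is not the authors' released code, and the file names, column names, and tab-separated layout are assumptions made for illustration only.

# Minimal sketch (not the authors' code) of the data-augmentation setup:
# HELP examples are appended to an existing NLI training set (e.g., MultiNLI)
# before fine-tuning. File names and the TSV column names are hypothetical.
import csv
import random

def load_pairs(path):
    # Read premise/hypothesis/label triples from a tab-separated file.
    with open(path, newline="", encoding="utf-8") as f:
        return [
            {"premise": row["sentence1"],
             "hypothesis": row["sentence2"],
             "label": row["gold_label"]}
            for row in csv.DictReader(f, delimiter="\t")
        ]

# Hypothetical paths; the real HELP release and the base NLI corpus each
# ship in their own formats.
base_train = load_pairs("multinli_train.tsv")
help_pairs = load_pairs("help_pairs.tsv")

# Augmentation here is simple concatenation followed by shuffling, so the
# monotonicity examples are mixed into the ordinary NLI training data.
augmented = base_train + help_pairs
random.shuffle(augmented)
print(f"{len(base_train)} base + {len(help_pairs)} HELP = {len(augmented)} training pairs")

The augmented set would then be fed to whatever NLI model and training script are already in use; the evaluation in the paper is run on separate monotonicity test sets.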

Original language: English
Host publication title: *SEM@NAACL-HLT 2019 - 8th Joint Conference on Lexical and Computational Semantics
Publisher: Association for Computational Linguistics (ACL)
Pages: 250-255
Number of pages: 6
ISBN (electronic): 9781948087933
Publication status: Published - 2019
Externally published: Yes
Event: 8th Joint Conference on Lexical and Computational Semantics, *SEM@NAACL-HLT 2019 - Minneapolis, United States
Duration: 6 Jun 2019 - 7 Jun 2019

Publication series

Name: *SEM@NAACL-HLT 2019 - 8th Joint Conference on Lexical and Computational Semantics

Conference

Conference: 8th Joint Conference on Lexical and Computational Semantics, *SEM@NAACL-HLT 2019
Country/Territory: United States
City: Minneapolis
Period: 19/6/6 - 19/6/7

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics
