Multimodal logical inference system for visual-textual entailment

Riko Suzuki, Hitomi Yanaka, Masashi Yoshikawa, Koji Mineshima, Daisuke Bekki

Research output: Contribution to journal › Article › peer-review

Abstract

A large body of research on multimodal inference across text and vision has recently been developed to obtain visually grounded word and sentence representations. In this paper, we use logic-based representations as unified meaning representations for texts and images and present an unsupervised multimodal logical inference system that can effectively prove entailment relations between them. We show that by combining semantic parsing and theorem proving, the system can handle semantically complex sentences for visual-textual inference.
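The pipeline the abstract describes, semantic parsing followed by theorem proving, can be illustrated with a minimal sketch. The example below uses NLTK's first-order logic tools (Expression.fromstring and ResolutionProver) as stand-ins for the paper's own semantic parser and prover; the predicate names, formulas, and lexical axioms are illustrative assumptions, not the paper's actual meaning representations. Image content is encoded as FOL facts, the sentence as an FOL hypothesis, and entailment holds if the prover derives the hypothesis from the facts.

```python
# Minimal sketch of logic-based visual-textual entailment, assuming
# NLTK's FOL parser and resolution prover as stand-ins for the paper's
# semantic parsing and theorem-proving components.
from nltk.sem import Expression
from nltk.inference import ResolutionProver

read = Expression.fromstring

# Premise: hypothetical FOL facts extracted from an image
# (e.g., "a man is riding a horse").
image_facts = [
    read(r"exists x y.(man(x) & horse(y) & ride(x, y))"),
]

# Lexical axioms linking image predicates to sentence predicates
# (illustrative; a real system would draw these from a lexicon).
axioms = [
    read(r"all x.(man(x) -> person(x))"),
    read(r"all x.(horse(x) -> animal(x))"),
]

# Hypothesis: FOL translation of "A person is riding an animal".
hypothesis = read(r"exists x y.(person(x) & animal(y) & ride(x, y))")

# Entailment check: the text follows from the image iff the facts
# plus axioms prove the hypothesis.
prover = ResolutionProver()
print(prover.prove(hypothesis, image_facts + axioms))  # True -> entailment
```

Running the sketch prints True, since the image facts together with the lexical axioms logically entail the hypothesis; replacing the hypothesis with, say, "A person is riding a car" would fail to prove and signal non-entailment.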

Original language: English
Journal: Unknown Journal
Publication status: Published - 2019 Jun 10
Externally published: Yes

ASJC Scopus subject areas

  • General
