Implicit Knowledge Injectable Cross Attention Audiovisual Model for Group Emotion Recognition

Yanan Wang, Jianming Wu, Panikos Heracleous, Shinya Wada, Rui Kimura, Satoshi Kurihara

Research output: Conference contribution

1 citation (Scopus)

Abstract

Audio-video group emotion recognition is a challenging task because it is difficult to gather a broad range of potential information to obtain meaningful emotional representations. Humans can easily understand emotions because they can associate implicit contextual knowledge (contained in our memory) when processing the explicit information they can see and hear directly. This paper proposes an end-to-end architecture called the implicit knowledge injectable cross attention audiovisual deep neural network (K-injection audiovisual network) that imitates this intuition. The K-injection audiovisual network trains an audiovisual model that not only obtains audiovisual representations of group emotions through an explicit feature-based cross attention audiovisual subnetwork (audiovisual subnetwork), but also absorbs implicit knowledge of emotions through two implicit knowledge-based injection subnetworks (K-injection subnetworks). In addition, the model is trained with explicit features and implicit knowledge but can make inferences using only explicit features. We define region of interest (ROI) visual features and Mel-spectrogram audio features as explicit features, which are directly present in the raw audio-video data. Conversely, we define the linguistic and acoustic emotional representations that do not exist in the audio-video data as implicit knowledge. The implicit knowledge distilled by adapting video situation descriptions and basic acoustic features (MFCCs, pitch, and energy) to the linguistic and acoustic K-injection subnetworks is defined as linguistic and acoustic knowledge, respectively. Compared to the baseline testing-set accuracy of 47.88%, the audiovisual models trained with the linguistic, acoustic, and linguistic-acoustic K-injection subnetworks achieved an average overall accuracy of 66.40%.
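The abstract does not spell out the exact layer definitions, but the core fusion mechanism it names, cross attention between modalities, can be sketched as scaled dot-product attention in which one modality supplies the queries and the other supplies the keys and values. The shapes, dimensions, and random features below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # Scaled dot-product cross attention: each query position forms a
    # weighted mixture of the other modality's value vectors.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (Tq, Tk) affinity matrix
    weights = softmax(scores, axis=-1)       # each query row sums to 1
    return weights @ values                  # (Tq, d) fused representation

rng = np.random.default_rng(0)
audio = rng.standard_normal((4, 16))    # e.g. 4 audio frames, feature dim 16
visual = rng.standard_normal((6, 16))   # e.g. 6 ROI visual features, dim 16

# Audio features attend over visual features; a symmetric pass in the
# other direction would give the complementary visual-to-audio fusion.
fused = cross_attention(audio, visual, visual)
print(fused.shape)  # (4, 16)
```

In this sketch the fused output keeps the query modality's sequence length while mixing in information from the other modality, which is what makes cross attention a natural choice for aligning audio and visual streams of different lengths.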

Original language: English
Title of host publication: ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction
Publisher: Association for Computing Machinery, Inc
Pages: 827-834
Number of pages: 8
ISBN (electronic): 9781450375818
DOI
Publication status: Published - 21 Oct 2020
Event: 22nd ACM International Conference on Multimodal Interaction, ICMI 2020 - Virtual, Online, Netherlands
Duration: 25 Oct 2020 - 29 Oct 2020

Publication series

Name: ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction

Conference

Conference: 22nd ACM International Conference on Multimodal Interaction, ICMI 2020
Country/Territory: Netherlands
City: Virtual, Online
Period: 20/10/25 - 20/10/29

ASJC Scopus subject areas

  • Hardware and Architecture
  • Human-Computer Interaction
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
