Implicit Knowledge Injectable Cross Attention Audiovisual Model for Group Emotion Recognition

Yanan Wang, Jianming Wu, Panikos Heracleous, Shinya Wada, Rui Kimura, Satoshi Kurihara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Audio-video group emotion recognition is a challenging task because it is difficult to gather the broad range of potentially relevant information needed to obtain meaningful emotional representations. Humans can easily understand emotions because they associate implicit contextual knowledge (stored in memory) when processing the explicit information they see and hear directly. This paper proposes an end-to-end architecture called the implicit knowledge injectable cross attention audiovisual deep neural network (K-injection audiovisual network) that imitates this intuition. The K-injection audiovisual network is used to train an audiovisual model that can not only obtain audiovisual representations of group emotions through an explicit feature-based cross attention audiovisual subnetwork (audiovisual subnetwork), but also absorb implicit knowledge of emotions through two implicit knowledge-based injection subnetworks (K-injection subnetworks). The model is trained with both explicit features and implicit knowledge, yet it can make inferences using only explicit features. We define region of interest (ROI) visual features and Mel-spectrogram audio features as explicit features, since they are directly present in the raw audio-video data. In contrast, we define the linguistic and acoustic emotional representations that do not exist in the audio-video data as implicit knowledge. The implicit knowledge distilled by adapting video situation descriptions and basic acoustic features (MFCCs, pitch, and energy) to the linguistic and acoustic K-injection subnetworks is defined as linguistic and acoustic knowledge, respectively. Compared with the baseline testing-set accuracy of 47.88%, the audiovisual models trained with the linguistic, acoustic, and linguistic-acoustic K-injection subnetworks achieved an average overall accuracy of 66.40%.
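As a rough illustration of the training-versus-inference asymmetry described above, the sketch below combines visual and audio sequences with symmetric cross attention and adds an auxiliary knowledge-alignment loss that is computed only when a knowledge embedding is supplied. This is not the authors' implementation: the PyTorch framing, layer sizes, temporal pooling, the MSE alignment loss, the 3-class output, and all names (CrossAttentionFusion, KInjectionAudiovisualNet, knowledge_proj) are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionFusion(nn.Module):
    """Symmetric cross attention between visual and audio token sequences (illustrative)."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.audio_to_visual = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.visual_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, visual, audio):
        # visual: (B, Tv, dim) ROI features; audio: (B, Ta, dim) Mel-spectrogram features
        v_att, _ = self.audio_to_visual(visual, audio, audio)   # visual queries attend to audio
        a_att, _ = self.visual_to_audio(audio, visual, visual)  # audio queries attend to visual
        # Temporal average pooling, then concatenate the two attended streams
        return torch.cat([v_att.mean(dim=1), a_att.mean(dim=1)], dim=-1)


class KInjectionAudiovisualNet(nn.Module):
    """Audiovisual classifier; knowledge embeddings are used only during training (illustrative)."""

    def __init__(self, dim=256, num_classes=3):  # assumed: e.g. positive / neutral / negative
        super().__init__()
        self.fusion = CrossAttentionFusion(dim)
        self.classifier = nn.Linear(2 * dim, num_classes)
        # Projects the fused representation into the space of the implicit
        # (linguistic or acoustic) knowledge embedding so the two can be aligned.
        self.knowledge_proj = nn.Linear(2 * dim, dim)

    def forward(self, visual, audio, knowledge=None):
        fused = self.fusion(visual, audio)
        logits = self.classifier(fused)
        if knowledge is None:
            return logits                      # inference: explicit features only
        # Training: auxiliary alignment loss "injects" the implicit knowledge
        inject_loss = F.mse_loss(self.knowledge_proj(fused), knowledge)
        return logits, inject_loss


# Usage: training consumes explicit features plus a knowledge embedding,
# while inference needs only the explicit audiovisual features.
model = KInjectionAudiovisualNet()
visual = torch.randn(2, 16, 256)    # batch of 2 clips, 16 ROI frames
audio = torch.randn(2, 32, 256)     # 32 Mel-spectrogram frames
knowledge = torch.randn(2, 256)     # e.g. an embedding of a video situation description (assumed)
logits, inject_loss = model(visual, audio, knowledge)   # training mode
logits_only = model(visual, audio)                      # inference mode
```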

Original language: English
Title of host publication: ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction
Publisher: Association for Computing Machinery, Inc
Pages: 827-834
Number of pages: 8
ISBN (Electronic): 9781450375818
DOIs
Publication status: Published - 2020 Oct 21
Event: 22nd ACM International Conference on Multimodal Interaction, ICMI 2020 - Virtual, Online, Netherlands
Duration: 2020 Oct 25 to 2020 Oct 29

Publication series

Name: ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction

Conference

Conference: 22nd ACM International Conference on Multimodal Interaction, ICMI 2020
Country: Netherlands
City: Virtual, Online
Period: 20/10/25 to 20/10/29

Keywords

  • affective computing
  • machine learning for multimodal interaction
  • multimodal fusion and representation

ASJC Scopus subject areas

  • Hardware and Architecture
  • Human-Computer Interaction
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
