Capture, recognition, and visualization of human semantic interactions in meetings

Zhiwen Yu, Zhiyong Yu, Hideki Aoyama, Motoyuki Ozeki, Yuichi Nakamura

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

18 Citations (Scopus)

Abstract

Human interaction is one of the most important characteristics of group social dynamics in meetings. In this paper, we propose an approach for the capture, recognition, and visualization of human interactions. Unlike physical interactions (e.g., turn-taking and addressing), the human interactions considered here carry semantics, i.e., a user's intention or attitude toward a topic. We adopt a collaborative approach to capturing interactions, employing multiple sensors such as video cameras, microphones, and motion sensors. A multimodal method is proposed for interaction recognition based on a variety of contexts, including head gestures, attention from others, speech tone, speaking time, interaction occasion (spontaneous or reactive), and information about the previous interaction. A support vector machine (SVM) classifier is used to classify human interactions based on these features. A graphical user interface called MMBrowser is presented for interaction visualization. Experimental results show the effectiveness of our approach.
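The abstract describes a pipeline in which multimodal contextual features are extracted for each interaction and fed to an SVM that assigns a semantic interaction label. The paper does not publish code, so the following is only a minimal sketch of that classification step, assuming scikit-learn's SVC, illustrative feature encodings (head_gesture, attention, tone, speaking_time, occasion, prev), and invented class labels; none of these names or values are taken from the paper, and in the actual system the features would come from the multi-sensor capture stage rather than being hard-coded.

from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Each observed interaction is described by the contexts named in the abstract:
# head gesture, attention received from others, speech tone, speaking time,
# occasion (spontaneous or reactive), and the previous interaction type.
# Feature values and labels below are invented for illustration only.
X_train = [
    {"head_gesture": "nod",   "attention": 3, "tone": "normal",
     "speaking_time": 1.2, "occasion": "reactive",    "prev": "propose"},
    {"head_gesture": "shake", "attention": 2, "tone": "strong",
     "speaking_time": 4.5, "occasion": "reactive",    "prev": "comment"},
    {"head_gesture": "none",  "attention": 5, "tone": "normal",
     "speaking_time": 8.0, "occasion": "spontaneous", "prev": "none"},
]
y_train = ["acknowledgement", "negative_opinion", "propose"]  # assumed label set

# DictVectorizer one-hot encodes the categorical contexts and passes numeric
# ones through; SVC plays the role of the SVM classifier mentioned in the abstract.
model = make_pipeline(DictVectorizer(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

# Classify a newly captured interaction.
new_interaction = {"head_gesture": "nod", "attention": 4, "tone": "normal",
                   "speaking_time": 0.8, "occasion": "reactive", "prev": "propose"}
print(model.predict([new_interaction])[0])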

Original language: English
Title of host publication: 2010 IEEE International Conference on Pervasive Computing and Communications, PerCom 2010
Pages: 107-115
Number of pages: 9
Publication status: Published - 2010
Externally published: Yes
Event: 8th IEEE International Conference on Pervasive Computing and Communications, PerCom 2010 - Mannheim, Germany
Duration: 2010 Mar 29 - 2010 Apr 2

Keywords

  • Human interaction
  • Interaction capture
  • Interaction recognition
  • Smart meeting
  • Visualization

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Software
  • Theoretical Computer Science

Cite this

Yu, Z., Yu, Z., Aoyama, H., Ozeki, M., & Nakamura, Y. (2010). Capture, recognition, and visualization of human semantic interactions in meetings. In 2010 IEEE International Conference on Pervasive Computing and Communications, PerCom 2010 (pp. 107-115). [5466987]

TY - GEN
T1 - Capture, recognition, and visualization of human semantic interactions in meetings
AU - Yu, Zhiwen
AU - Yu, Zhiyong
AU - Aoyama, Hideki
AU - Ozeki, Motoyuki
AU - Nakamura, Yuichi
PY - 2010
Y1 - 2010
AB - Human interaction is one of the most important characteristics of group social dynamics in meetings. In this paper, we propose an approach for the capture, recognition, and visualization of human interactions. Unlike physical interactions (e.g., turn-taking and addressing), the human interactions considered here carry semantics, i.e., a user's intention or attitude toward a topic. We adopt a collaborative approach to capturing interactions, employing multiple sensors such as video cameras, microphones, and motion sensors. A multimodal method is proposed for interaction recognition based on a variety of contexts, including head gestures, attention from others, speech tone, speaking time, interaction occasion (spontaneous or reactive), and information about the previous interaction. A support vector machine (SVM) classifier is used to classify human interactions based on these features. A graphical user interface called MMBrowser is presented for interaction visualization. Experimental results show the effectiveness of our approach.
KW - Human interaction
KW - Interaction capture
KW - Interaction recognition
KW - Smart meeting
KW - Visualization
UR - http://www.scopus.com/inward/record.url?scp=77956424560&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77956424560&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:77956424560
SN - 9781424453290
SP - 107
EP - 115
BT - 2010 IEEE International Conference on Pervasive Computing and Communications, PerCom 2010
ER -