Mood-learning public display: Adapting content design evolutionarily through viewers' involuntary gestures and movements

Ken Nagao, Issei Fujishiro

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Owing to recent advances in the underlying hardware technology and improvements in installation environments, public displays have become more common and are attracting attention as a new type of signage. To deliver the sender's message effectively, any signage needs to make its content more attractive to viewers by evaluating its current attractiveness on the fly. However, most previous methods for public displays take time to reflect viewers' evaluations. In this paper, we present a novel system, called Mood-Learning Public Display, which automatically adapts its content design. The system uses viewers' involuntary behaviors as an evaluation signal, evolutionarily adapting the content design to local viewers' tastes on site. It thereby narrows the gap between viewers' expectations and the content actually displayed, and enables efficient mutual transmission of information between cyberspace and the real world.
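To make the evolutionary adaptation concrete, the following is a minimal sketch (not the authors' implementation) of a genetic algorithm that evolves a vector of display-design parameters, where fitness would come from viewers' involuntary gestures and movements. All names, parameter counts, and the synthetic `attention_score` stand-in below are assumptions for illustration only.

```python
import random

# Hypothetical setup: a content design is a vector of GENOME_LEN
# normalized parameters (e.g., color hue, font size, layout choice).
GENOME_LEN = 4
POP_SIZE = 20
GENERATIONS = 40
MUTATION_RATE = 0.1

def attention_score(design):
    # Stand-in for the behavior-derived evaluation: here it simply
    # peaks when every parameter is near 0.7, an assumed "local taste".
    # In the real system this would be measured from viewer behavior.
    return -sum((g - 0.7) ** 2 for g in design)

def crossover(a, b):
    # Single-point crossover between two parent designs.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(design):
    # Gaussian perturbation of each gene with probability MUTATION_RATE,
    # clamped to the [0, 1] parameter range.
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
            if random.random() < MUTATION_RATE else g
            for g in design]

def evolve(seed=0):
    random.seed(seed)
    pop = [[random.random() for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=attention_score, reverse=True)
        parents = pop[:POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=attention_score)

best = evolve()
```

Run on a display, each generation's designs would be shown in turn and scored from observed viewer behavior rather than from a fixed function.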

Original language: English
Pages (from-to): 1991-1999
Number of pages: 9
Journal: IEICE Transactions on Information and Systems
Volume: E97-D
Issue number: 8
DOIs
Publication status: Published - 2014 Aug

Keywords

  • Genetic algorithm
  • Human behavior recognition
  • Image processing
  • Public display

ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
  • Artificial Intelligence
