Layered telepresence: Simultaneous multi presence experience using eye gaze based perceptual awareness blending

M. H D Yamen Saraiji, Shota Sugimoto, Charith Lasantha Fernando, Kouta Minamizawa, Susumu Tachi

Research output: Conference contribution

1 Citation (Scopus)

Abstract

We propose "Layered Telepresence", a novel method of experiencing simultaneous multi-presence. The user's eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges the audio-visual information received through the robots into a priority-driven layered stack. A weighted feature map is created for each layer, based on the objects recognized in it using image-processing techniques, and the layer with the greatest weight around the user's gaze is pushed into the foreground. All other layers are pushed to the background, producing an artificial depth-of-field effect. The proposed method works not only with robots: each layer could represent any audio-visual content, such as a video see-through HMD, a television screen, or even a PC screen, enabling true multitasking.
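The following is a minimal sketch, not the authors' implementation, of the gaze-driven layer selection and artificial depth-of-field blending described above. The feature maps, gaze window size, and blur parameters are illustrative assumptions; in the actual system the per-layer feature maps are derived from object recognition on each robot's audio-visual stream.

import numpy as np
from scipy.ndimage import gaussian_filter

def select_foreground_layer(feature_maps, gaze_xy, window=64):
    # Pick the layer whose weighted feature map has the largest total weight
    # inside a square window centred on the user's gaze point.
    gx, gy = gaze_xy
    weights = []
    for fmap in feature_maps:
        h, w = fmap.shape
        x0, x1 = max(gx - window, 0), min(gx + window, w)
        y0, y1 = max(gy - window, 0), min(gy + window, h)
        weights.append(fmap[y0:y1, x0:x1].sum())
    return int(np.argmax(weights))

def compose_layers(frames, foreground_idx, blur_sigma=6.0, alpha=0.35):
    # Keep the selected layer sharp; blur and dim all other layers to
    # produce the artificial depth-of-field effect.
    out = np.zeros_like(frames[0], dtype=np.float32)
    for i, frame in enumerate(frames):
        frame = frame.astype(np.float32)
        if i == foreground_idx:
            out += frame
        else:
            out += alpha * gaussian_filter(frame, sigma=(blur_sigma, blur_sigma, 0))
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: two synthetic 480x640 RGB layers, gaze at the image centre.
frames = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(2)]
feature_maps = [np.random.rand(480, 640) for _ in frames]  # stand-in saliency maps
foreground = select_foreground_layer(feature_maps, gaze_xy=(320, 240))
blended = compose_layers(frames, foreground)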

Original language: English
Host publication title: ACM SIGGRAPH 2016 Emerging Technologies, SIGGRAPH 2016
Publisher: Association for Computing Machinery, Inc
ISBN (electronic): 9781450343725
DOI: 10.1145/2929464.2929467
Publication status: Published - 24 Jul 2016
Event: ACM International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2016 - Anaheim, United States
Duration: 24 Jul 2016 → 28 Jul 2016

Other

Other: ACM International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2016
Country: United States
City: Anaheim
Period: 16/7/24 → 16/7/28

Fingerprint

  • Robots
  • Helmet mounted displays
  • Audio systems
  • Multitasking
  • Television
  • Image processing

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition

Cite this

Saraiji, M. H. D. Y., Sugimoto, S., Fernando, C. L., Minamizawa, K., & Tachi, S. (2016). Layered telepresence: Simultaneous multi presence experience using eye gaze based perceptual awareness blending. In ACM SIGGRAPH 2016 Emerging Technologies, SIGGRAPH 2016 [2929467]. Association for Computing Machinery, Inc. https://doi.org/10.1145/2929464.2929467

@inproceedings{a9d330a729824f728ef1fbd13fd7095b,
title = "Layered telepresence: Simultaneous multi presence experience using eye gaze based perceptual awareness blending",
abstract = "We propose {"}Layered Telepresence{"}, a novel method of experiencing simultaneous multi-presence. Users eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges audio-visual information received through multiple robots into a priority-driven layered stack. A weighted feature map was created based on the objects recognized for each layer, using image-processing techniques, and pushes the most weighted layer around the users gaze in to the foreground. All other layers are pushed back to the background providing an artificial depth-of-field effect. The proposed method not only works with robots, but also each layer could represent any audio-visual content, such as video see-through HMD, television screen or even your PC screen enabling true multitasking.",
keywords = "Depth of field, Eye gaze, Perceptual awareness blending, Peripheral vision, Simultaneous multi presence",
author = "Saraiji, {M. H D Yamen} and Shota Sugimoto and Fernando, {Charith Lasantha} and Kouta Minamizawa and Susumu Tachi",
year = "2016",
month = "7",
day = "24",
doi = "10.1145/2929464.2929467",
language = "English",
booktitle = "ACM SIGGRAPH 2016 Emerging Technologies, SIGGRAPH 2016",
publisher = "Association for Computing Machinery, Inc",

}

TY - GEN

T1 - Layered telepresence

T2 - Simultaneous multi presence experience using eye gaze based perceptual awareness blending

AU - Saraiji, M. H D Yamen

AU - Sugimoto, Shota

AU - Fernando, Charith Lasantha

AU - Minamizawa, Kouta

AU - Tachi, Susumu

PY - 2016/7/24

Y1 - 2016/7/24

N2 - We propose "Layered Telepresence", a novel method of experiencing simultaneous multi-presence. The user's eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges the audio-visual information received through the robots into a priority-driven layered stack. A weighted feature map is created for each layer, based on the objects recognized in it using image-processing techniques, and the layer with the greatest weight around the user's gaze is pushed into the foreground. All other layers are pushed to the background, producing an artificial depth-of-field effect. The proposed method works not only with robots: each layer could represent any audio-visual content, such as a video see-through HMD, a television screen, or even a PC screen, enabling true multitasking.

AB - We propose "Layered Telepresence", a novel method of experiencing simultaneous multi-presence. The user's eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges the audio-visual information received through the robots into a priority-driven layered stack. A weighted feature map is created for each layer, based on the objects recognized in it using image-processing techniques, and the layer with the greatest weight around the user's gaze is pushed into the foreground. All other layers are pushed to the background, producing an artificial depth-of-field effect. The proposed method works not only with robots: each layer could represent any audio-visual content, such as a video see-through HMD, a television screen, or even a PC screen, enabling true multitasking.

KW - Depth of field

KW - Eye gaze

KW - Perceptual awareness blending

KW - Peripheral vision

KW - Simultaneous multi presence

UR - http://www.scopus.com/inward/record.url?scp=84984600067&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84984600067&partnerID=8YFLogxK

U2 - 10.1145/2929464.2929467

DO - 10.1145/2929464.2929467

M3 - Conference contribution

AN - SCOPUS:84984600067

BT - ACM SIGGRAPH 2016 Emerging Technologies, SIGGRAPH 2016

PB - Association for Computing Machinery, Inc

ER -