GazeSim: Simulating foveated rendering using depth in eye gaze for VR

Yun Suen Pai, Benjamin Tag, Benjamin Outram, Noriyasu Vontin, Kazunori Sugiura, Kai Steven Kunze

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Citations (Scopus)

Abstract

We present a novel technique that implements customized hardware using eye gaze focus depth as an input modality for virtual reality applications. By utilizing eye-tracking technology, our system can detect the depth at which the viewer is focusing, and therefore promises more natural eye responses to stimuli, which will help overcome VR sickness and nausea. The obtained depth-of-focus information allows foveated rendering to keep the computing workload low and to create a more natural image that is sharp within the focused field but blurred outside it. Copyright is held by the owner/author(s).
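The core idea of the abstract — recovering the viewer's focal depth from the two eyes' gaze and driving a depth-of-field blur from it — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the fixation point is estimated as the closest approach of the two gaze rays (binocular vergence), and the per-pixel blur is taken to grow with defocus in dioptres; the function names and the `strength` parameter are hypothetical.

```python
import numpy as np

def fixation_point(o1, d1, o2, d2):
    """Estimate the 3D fixation point as the midpoint of the closest
    segment between the left and right gaze rays (vergence)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if denom < 1e-9:                 # near-parallel gaze: focus at infinity
        return None
    t1 = (b * e - c * d) / denom     # parameter along the left gaze ray
    t2 = (a * e - b * d) / denom     # parameter along the right gaze ray
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

def blur_radius(pixel_depth, focus_depth, strength=5.0):
    """Depth-of-field blur for a pixel: proportional to the defocus
    |1/z - 1/z_focus| in dioptres (zero at the focused depth)."""
    return strength * abs(1.0 / pixel_depth - 1.0 / focus_depth)

# Example: eyes 64 mm apart, both converging on a point 1 m ahead.
left_eye  = np.array([-0.032, 0.0, 0.0])
right_eye = np.array([ 0.032, 0.0, 0.0])
target    = np.array([ 0.0,   0.0, 1.0])
p = fixation_point(left_eye, target - left_eye, right_eye, target - right_eye)
```

A renderer would then blur each fragment by `blur_radius(fragment_depth, p[2])`, leaving the focused depth sharp and defocusing everything nearer or farther.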

Original language: English
Title of host publication: SIGGRAPH 2016 - ACM SIGGRAPH 2016 Posters
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9781450343718
DOI: https://doi.org/10.1145/2945078.2945153
Publication status: Published - 2016 Jul 24
Event: ACM International Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 2016 - Anaheim, United States
Duration: 2016 Jul 24 - 2016 Jul 28



Keywords

  • Depth of field
  • Eye gaze
  • Foveated rendering
  • Virtual reality

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Human-Computer Interaction
  • Software

Cite this

Pai, Y. S., Tag, B., Outram, B., Vontin, N., Sugiura, K., & Kunze, K. S. (2016). GazeSim: Simulating foveated rendering using depth in eye gaze for VR. In SIGGRAPH 2016 - ACM SIGGRAPH 2016 Posters [a75]. Association for Computing Machinery, Inc. https://doi.org/10.1145/2945078.2945153