Research

Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency

Head-mounted eye tracking has significant potential for gaze-based applications such as life logging, mental health monitoring, or the quantified self. A neglected challenge for the long-term recordings required by these applications is that drift in the initial person-specific eye tracker calibration, for example caused by physical activity, can severely impact gaze estimation accuracy and thus […]

Appearance-Based Gaze Estimation in the Wild

Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions and methods have not been evaluated across multiple datasets. In this work we study appearance-based gaze estimation in the wild. We present the MPIIGaze dataset that contains 213,659 images we collected from 15 participants […]

Learning-by-Synthesis for Appearance-based 3D Gaze Estimation

Inferring human gaze from low-resolution eye images is still a challenging task despite its practical importance in many application scenarios. This paper presents a learning-by-synthesis approach to accurate image-based gaze estimation that is person- and head pose-independent. Unlike existing appearance-based methods that assume person-specific training data, we use a large amount of cross-subject training data to […]

Graph-based Joint Clustering of Fixations and Visual Entities

We present a method that extracts groups of fixations and image regions for the purpose of gaze analysis and image understanding. Since the attentional relationship between visual entities conveys rich information, automatically determining this relationship provides us with a semantic representation of images. We show that, by jointly clustering human gaze and visual entities, it is […]

Coupling Eye-Motion and Ego-Motion features for First-Person Activity Recognition

We focus on the use of first-person eye movement and ego-motion as a means of understanding and recognizing indoor activities from an “inside-out” camera system. We show that when eye movement captured by an inside-looking camera is used in tandem with ego-motion features extracted from an outside-looking camera, the classification accuracy of first-person […]

Touch-Consistent Perspective for Direct Interaction under Motion Parallax

A 3D display is a key component for presenting virtual space to users in an intuitive way. A motion parallax-based 3D display can be easily combined with multi-touch surfaces, and is expected to provide a natural experience of viewing and controlling 3D space. However, since virtual objects are rendered in accordance with the head […]

Incorporating Visual Field Characteristics into a Saliency Map

Characteristics of the human visual field are well known to differ between the central (foveal) and peripheral areas. Existing computational models of visual saliency, however, do not take this biological evidence into account. These models compute visual saliency uniformly over the retina and thus have difficulty accurately predicting the next gaze (fixation) point. […]
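The general idea of weighting saliency by retinal eccentricity can be illustrated with a minimal sketch. This is not the paper's actual model; the Gaussian falloff and the `sigma` parameter are illustrative assumptions, standing in for whatever eccentricity-dependent weighting the full method uses.

```python
import numpy as np

def foveated_saliency(saliency, fixation, sigma=30.0):
    """Weight a uniform saliency map by a Gaussian falloff with
    eccentricity from the current fixation point (fx, fy).

    sigma (in pixels) controls how quickly sensitivity drops toward
    the periphery; it is an illustrative value, not from the paper.
    """
    s = np.asarray(saliency, dtype=float)
    # Pixel coordinate grids over the map.
    ys, xs = np.mgrid[0:s.shape[0], 0:s.shape[1]]
    fx, fy = fixation
    # Squared distance of every pixel from the fixation point.
    dist2 = (xs - fx) ** 2 + (ys - fy) ** 2
    # Central (foveal) pixels keep full saliency; peripheral ones are attenuated.
    weight = np.exp(-dist2 / (2.0 * sigma ** 2))
    return s * weight
```

Under this weighting, an equally salient stimulus in the periphery contributes less to the predicted next fixation than one near the current fixation, which is the qualitative behavior the abstract argues uniform models miss.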

Attention Prediction in Egocentric Video Using Motion and Visual Saliency

We propose a method of predicting human egocentric visual attention using bottom-up visual saliency and egomotion information. Computational models of visual saliency are often employed to predict human attention; however, their mechanism and effectiveness have not been fully explored in egocentric vision. The purpose of our framework is to compute attention maps from an egocentric […]
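One simple way to combine a bottom-up saliency map with an egomotion-derived attention map is a convex combination of the two, normalized to a common scale. This is only a sketch of the fusion idea; the weight `alpha` and the min-max normalization are assumptions for illustration, not the paper's learned combination.

```python
import numpy as np

def fuse_attention(saliency_map, egomotion_map, alpha=0.5):
    """Fuse bottom-up saliency with an egomotion-based attention map.

    alpha weights the saliency term; (1 - alpha) weights the egomotion
    term. Both maps are min-max normalized first so neither dominates
    purely by scale.
    """
    def normalize(m):
        m = np.asarray(m, dtype=float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    return alpha * normalize(saliency_map) + (1.0 - alpha) * normalize(egomotion_map)
```

With `alpha=0.5`, a region that is strong in either cue still receives moderate attention, while a region strong in both is ranked highest.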

Calibration-free Gaze Estimation using Visual Saliency

We propose a calibration-free gaze sensing method using visual saliency maps. Our goal is to construct a gaze estimator using only eye images captured from a person watching a video clip. The key idea is to treat saliency maps of the video frames as probability distributions of gaze points. To efficiently identify gaze points from saliency maps, […]
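The core idea of treating a saliency map as a probability distribution over gaze points can be sketched in a few lines. This is a minimal illustration of that single step, not the paper's full estimator; the function names and the expected-value readout are assumptions for the example.

```python
import numpy as np

def saliency_to_gaze_distribution(saliency):
    """Normalize a saliency map into a probability distribution over pixels."""
    s = np.asarray(saliency, dtype=float)
    s = s - s.min()          # shift so all values are non-negative
    total = s.sum()
    if total == 0:
        # Uninformative frame: fall back to a uniform distribution.
        return np.full_like(s, 1.0 / s.size)
    return s / total

def expected_gaze_point(saliency):
    """Expected (x, y) gaze location under the saliency distribution."""
    p = saliency_to_gaze_distribution(saliency)
    ys, xs = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    return float((xs * p).sum()), float((ys * p).sum())
```

Given eye images time-aligned with the video, such per-frame distributions can serve as soft labels for training a gaze estimator without any explicit calibration step, which is the direction the abstract describes.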

An Incremental Learning Method for Unconstrained Gaze Estimation

This paper presents an online learning algorithm for appearance-based gaze estimation that allows free head movement in a casual desktop environment. Our method avoids the lengthy calibration stage by using an incremental learning approach. Our system keeps running as a background process on the desktop PC and continuously updates the estimation parameters by taking the user’s operations […]
