Calibration-free Gaze Estimation using Visual Saliency

Framework

We propose a calibration-free gaze estimation method that uses visual saliency maps. Our goal is to construct a gaze estimator using only eye images captured while a person watches a video clip. The key idea is to treat the saliency maps of the video frames as probability distributions over gaze points. To identify gaze points from the saliency maps efficiently, we aggregate the maps according to the similarity of eye appearances, and we then learn a mapping from eye images to gaze points by Gaussian process regression. Experimental results show that the proposed method works well across different people and video clips and achieves an accuracy of 6 degrees, which is useful for estimating a person’s attention on a monitor.
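
As a rough illustration of this pipeline, the sketch below clusters eye images by appearance, averages the saliency maps within each cluster so the shared fixation point emerges as a peak, takes that peak as the gaze point, and fits a Gaussian process regressor from eye appearance to gaze point. The k-means clustering step, raw-pixel appearance features, kernel choice, and all parameters are illustrative assumptions, not the implementation from the papers below.

```python
"""Minimal sketch of saliency-based, calibration-free gaze estimation.

Assumptions (not from the papers): eye images are flattened pixel vectors,
appearance similarity is captured by k-means clustering, and the gaze point
for each cluster is the mode of its averaged saliency map.
"""
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def train_gaze_estimator(eye_images, saliency_maps, n_clusters=64):
    """eye_images: (N, D) flattened eye-appearance vectors.
    saliency_maps: (N, H, W) saliency map of the frame shown at each sample,
    treated as an (unnormalized) probability distribution over gaze points."""
    _, H, W = saliency_maps.shape

    # 1. Aggregate saliency maps over samples with similar eye appearance.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(eye_images)

    X, y = [], []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        if idx.size == 0:
            continue
        avg_map = saliency_maps[idx].mean(axis=0)
        # 2. Take the mode of the aggregated distribution as the gaze point.
        gy, gx = np.unravel_index(np.argmax(avg_map), (H, W))
        X.append(eye_images[idx].mean(axis=0))  # representative appearance
        y.append([gx, gy])

    # 3. Learn the appearance-to-gaze mapping by Gaussian process regression.
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(np.asarray(X), np.asarray(y))
    return gpr


# Usage: gaze_xy = estimator.predict(new_eye_image.reshape(1, -1))
```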

Publications

  • Yusuke Sugano, Yasuyuki Matsushita and Yoichi Sato, “Appearance-based Gaze Estimation using Visual Saliency”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2012.
  • Yusuke Sugano, Yasuyuki Matsushita and Yoichi Sato, “Calibration-free Gaze Sensing using Saliency Maps”, Proc. 23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010).