Characteristics of the human visual field are well known to differ between the central (foveal) and peripheral regions. Existing computational models of visual saliency, however, do not account for this biological evidence. They compute visual saliency uniformly over the retina and thus have difficulty accurately predicting the next gaze (fixation) point. This paper proposes incorporating human visual field characteristics into visual saliency and presents a computational model for producing such a saliency map. Our model integrates image features obtained by bottom-up computation using weights that depend on the distance from the current gaze point; these weights are optimally learned from actual saccade data. Experimental results on a large number of fixation/saccade data with wide viewing angles demonstrate the advantage of our saliency map, showing that it can accurately predict the point where one looks next.
- Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto and Kazuo Hiraki, “Incorporating Visual Field Characteristics into a Saliency Map”, in Proc. of the 7th International Symposium on Eye Tracking Research & Applications (ETRA 2012), March 2012.
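The core idea of the model, integrating bottom-up feature maps with weights that vary with eccentricity from the current gaze point, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature maps are random stand-ins for bottom-up channels (e.g., intensity, color, orientation contrast), and the eccentricity-dependent weight functions are hand-picked for illustration, whereas in the paper they are learned from saccade data.

```python
import numpy as np

def eccentricity_map(shape, gaze):
    # Distance (in pixels) of every image location from the current gaze point.
    ys, xs = np.indices(shape)
    return np.hypot(ys - gaze[0], xs - gaze[1])

def saliency(feature_maps, gaze, weight_fns):
    # Combine bottom-up feature maps with gaze-dependent weights:
    # weight_fns[k](d) gives the weight of feature channel k at eccentricity d.
    # In the paper these weights are learned; here they are illustrative.
    h, w = feature_maps[0].shape
    d = eccentricity_map((h, w), gaze)
    s = np.zeros((h, w))
    for fmap, wfn in zip(feature_maps, weight_fns):
        s += wfn(d) * fmap
    return s

rng = np.random.default_rng(0)
shape = (64, 64)
# Hypothetical bottom-up feature maps (stand-ins for real channels).
maps = [rng.random(shape) for _ in range(3)]
# Illustrative eccentricity weights: one decaying with distance from gaze,
# one constant, and one growing (emphasizing peripheral features).
wfns = [lambda d: np.exp(-d / 20.0),
        lambda d: np.full_like(d, 0.5),
        lambda d: 1.0 - np.exp(-d / 20.0)]
sal = saliency(maps, gaze=(32, 32), weight_fns=wfns)
print(sal.shape)  # same spatial size as the input feature maps
```

The next fixation point could then be predicted, for instance, as the location maximizing `sal`; the paper's contribution is learning the weight functions so that this prediction matches observed saccades.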