Publications

[Google Scholar] [DBLP]

Journals

  1. Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, Yasuyuki Matsushita, “Deep Photometric Stereo Networks for Determining Surface Normal and Reflectances”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), June 2020 (online first).
  2. Hiroaki Santo, Michael Waechter, Wen-Yan Lin, Yusuke Sugano, Yasuyuki Matsushita, “Light Structure from Pin Motion: Geometric Point Light Source Calibration”, International Journal of Computer Vision (IJCV), Volume 128, pp. 1889-1912, March 2020.
  3. Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, “MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Volume 41, Issue 1, pp. 162-175, November 2017.
  4. Marc Tonsen, Julian Steil, Yusuke Sugano, Andreas Bulling, “InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation”, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), Volume 1, Issue 3, September 2017.
  5. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Gaze Estimation from Eye Appearance: A Head Pose-Free Method via Eye Image Synthesis”, IEEE Transactions on Image Processing (TIP), Volume 24, Issue 11, pp. 3680-3693, June 2015.
  6. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, Hideki Koike, “Appearance-based Gaze Estimation with Online Calibration from Mouse Operations”, IEEE Transactions on Human-Machine Systems (THMS), Volume 45, Issue 6, pp. 750-760, February 2015.
  7. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Adaptive Linear Regression for Appearance-Based Gaze Estimation”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Volume 36, Issue 10, pp. 2033-2046, October 2014.
  8. Yusuke Sugano, Yasunori Ozaki, Hiroshi Kasai, Keisuke Ogaki, Yoichi Sato, “Image preference estimation with a data-driven approach: A comparative study between gaze and image features”, Journal of Eye Movement Research, Volume 7, Issue 3, No. 5, pp. 1-9, March 2014.
  9. Feng Lu, Takahiro Okabe, Yusuke Sugano, Yoichi Sato, “Learning gaze biases with head motion for head pose-free gaze estimation”, Image and Vision Computing, Volume 32, Issue 3, pp. 169-179, March 2014.
  10. Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, “Head Direction Estimation from Low Resolution Images with Scene Adaptation”, Computer Vision and Image Understanding, Volume 117, Issue 10, pp. 1502-1511, October 2013.
  11. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Graph-based Joint Clustering of Fixations and Visual Entities”, ACM Transactions on Applied Perception (TAP), Volume 10, Issue 2, Article 10, pp. 1-16, June 2013.
  12. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Appearance-based Gaze Estimation using Visual Saliency”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Volume 35, Issue 2, pp. 329-341, February 2013.

Refereed Conferences

  1. Yuri Nakao, Yusuke Sugano, “Use of Machine Learning by Non-Expert DHH People: Technological Understanding and Sound Perception”, in Proc. 11th Nordic Conference on Human-Computer Interaction (NordiCHI 2020).
  2. Yifei Huang, Yusuke Sugano, Yoichi Sato, “Improving Action Segmentation via Graph-Based Temporal Reasoning”, in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020).
  3. Xucong Zhang, Yusuke Sugano, Andreas Bulling, Otmar Hilliges, “Learning-based Region Selection for End-to-End Gaze Estimation”, in Proc. 31st British Machine Vision Conference (BMVC 2020).
  4. Tatsuya Ishibashi, Yuri Nakao, Yusuke Sugano, “Investigating audio data visualization for interactive sound recognition”, in Proc. 25th International Conference on Intelligent User Interfaces (IUI 2020).
  5. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications”, in Proc. 37th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2019).
  6. Yutaro Miyauchi, Yusuke Sugano, Yasuyuki Matsushita, “Shape-conditioned Image Generation by Learning Latent Appearance Representation from Unpaired Data”, in Proc. 14th Asian Conference on Computer Vision (ACCV 2018).
  7. Tatsuya Ishibashi, Yusuke Sugano, Yasuyuki Matsushita, “Gaze-guided Image Classification for Reflecting Perceptual Class Ambiguity”, in Adjunct Proc. 31st ACM Symposium on User Interface Software and Technology (UIST 2018 Posters).
  8. Hiroaki Santo, Michael Waechter, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Light Structure from Pin Motion: Simple and Accurate Point Light Calibration for Physics-based Modeling”, in Proc. European Conference on Computer Vision (ECCV 2018).
  9. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Training Person-Specific Gaze Estimators from User Interactions with Multiple Devices”, in Proc. 36th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2018).
  10. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Revisiting data normalization for appearance-based gaze estimation”, in Proc. 10th ACM International Symposium on Eye Tracking Research & Applications (ETRA 2018).
  11. Keita Higuchi, Soichiro Matsuda, Rie Kamikubo, Takuya Enomoto, Yusuke Sugano, Junichi Yamamoto, Yoichi Sato, “Visualizing Gaze Direction to Support Video Coding of Social Attention for Children with Autism Spectrum Disorder”, in Proc. 23rd International Conference on Intelligent User Interfaces (IUI 2018).
  12. Arif Khan, Ingmar Steiner, Yusuke Sugano, Andreas Bulling, Ross MacDonald, “A Multimodal Corpus of Expert Gaze and Behavior during Phonetic Segmentation Tasks”, in Proc. 11th International Conference on Language Resources and Evaluation (LREC 2018).
  13. Julian Steil, Philipp M. Müller, Yusuke Sugano, Andreas Bulling, “Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors”, in Proc. 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2018). (Best paper)
  14. Ryohei Kuga, Asako Kanezaki, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Multi-task Learning Using Multi-modal Encoder-Decoder Networks with Shared Skip Connections”, in Proc. IEEE/ISPRS 4th Joint Workshop on Multi-Sensor Fusion for Dynamic Scene Understanding (in conjunction with ICCV 2017).
  15. Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, Yasuyuki Matsushita, “Deep Photometric Stereo Network”, in Proc. 1st International Workshop on Physics Based Vision meets Deep Learning (in conjunction with ICCV 2017).
  16. Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, “It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation”, in Proc. 1st International Workshop on Deep Affective Learning and Context Modeling (in conjunction with CVPR 2017).
  17. Michaela Klauck, Yusuke Sugano, Andreas Bulling, “Noticeable or Distractive?: A Design Space for Gaze-Contingent User Interface Notifications”, in Proc. 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17).
  18. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery”, in Proc. 30th ACM Symposium on User Interface Software and Technology (UIST 2017). (Best paper honorable mention)
  19. Yusuke Sugano, Xucong Zhang, Andreas Bulling, “AggreGaze: Collective Estimation of Audience Attention on Public Displays”, in Proc. 29th ACM Symposium on User Interface Software and Technology (UIST 2016). (Best paper honorable mention)
  20. Pingmei Xu, Yusuke Sugano, Andreas Bulling, “Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces”, in Proc. 34th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2016). (Best paper honorable mention)
  21. Mohsen Mansouryar, Julian Steil, Yusuke Sugano, Andreas Bulling, “3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers”, in Proc. 9th ACM International Symposium on Eye Tracking Research & Applications (ETRA 2016).
  22. Marc Tonsen, Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Labelled pupils in the wild: A dataset for studying pupil detection in unconstrained environments”, in Proc. 9th ACM International Symposium on Eye Tracking Research & Applications (ETRA 2016).
  23. Erroll Wood, Tadas Baltrušaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, Andreas Bulling, “Rendering of Eyes for Eye-Shape Registration and Gaze Estimation”, in Proc. IEEE International Conference on Computer Vision (ICCV 2015).
  24. Yusuke Sugano, Andreas Bulling, “Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency”, in Proc. 28th ACM Symposium on User Interface Software and Technology (UIST 2015).
  25. Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, “Appearance-based Gaze Estimation in the Wild”, in Proc. 28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015).
  26. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Learning-by-Synthesis for Appearance-based 3D Gaze Estimation”, in Proc. 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014).
  27. Binbin Ye, Yusuke Sugano, Yoichi Sato, “Influence of Stimulus and Viewing Task Types on a Learning-based Visual Saliency Model”, in Proc. 2014 ACM Symposium on Eye Tracking Research and Applications (ETRA 2014).
  28. Isarun Chamveha, Yusuke Sugano, Yoichi Sato, Akihiro Sugimoto, “Social Group Discovery from Surveillance Videos: A Data-Driven Approach with Attention-Based Cues”, in Proc. 24th British Machine Vision Conference (BMVC 2013).
  29. Yusuke Sugano, Hiroshi Kasai, Keisuke Ogaki, Yoichi Sato, “Image Preference Estimation from Eye Movements with A Data-driven Approach”, in Proc. 3rd International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction (PETMEI 2013).
  30. Yusuke Sugano, Kazuma Harada, Yoichi Sato, “Touch-consistent perspective for direct interaction under motion parallax”, in Proc. ACM Interactive Tabletops and Surfaces Conference (ITS 2012).
  31. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Head Pose-Free Appearance-Based Gaze Sensing Via Eye Image Synthesis”, in Proc. 21st International Conference on Pattern Recognition (ICPR 2012).
  32. Keisuke Ogaki, Kris Kitani, Yusuke Sugano, Yoichi Sato, “Coupling Eye-Motion and Ego-Motion Features for First-Person Activity Recognition”, in Proc. IEEE Workshop on Egocentric Vision (in conjunction with CVPR 2012).
  33. Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Incorporating Visual Field Characteristics into a Saliency Map”, in Proc. 7th International Symposium on Eye Tracking Research & Applications (ETRA 2012).
  34. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Inferring Human Gaze from Appearance via Adaptive Linear Regression”, in Proc. IEEE International Conference on Computer Vision (ICCV 2011).
  35. Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, “Appearance-Based Head Pose Estimation with Scene-Specific Adaptation”, in Proc. IEEE Workshop on Visual Surveillance (VS 2011).
  36. Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Attention Prediction in Egocentric Video using Motion and Visual Saliency”, in Proc. Pacific-Rim Symposium on Image and Video Technology (PSIVT 2011).
  37. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “A Head Pose-free Approach for Appearance-based Gaze Estimation”, in Proc. British Machine Vision Conference (BMVC 2011).
  38. Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Can Saliency Map Models Predict Human Egocentric Visual Attention?”, in Proc. International Workshop on Gaze Sensing and Interactions, 2010.
  39. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Calibration-free gaze sensing using saliency maps”, in Proc. 23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010).
  40. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, Hideki Koike, “An Incremental Learning Method for Unconstrained Gaze Estimation”, in Proc. European Conference on Computer Vision (ECCV 2008).
  41. Yusuke Sugano, Yoichi Sato, “Person-Independent Monocular Tracking of Face and Facial Actions with Multilinear Models”, in Proc. IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2007).
  42. Hiromichi Hashizume, Ayumu Kaneko, Yusuke Sugano, Koji Yatani, Masanori Sugimoto, “Fast and Accurate Positioning Technique Using Ultrasonic Phase Accordance Method”, in Proc. IEEE Region 10 Conference (TENCON 2005).

Book Chapters

  1. Asako Kanezaki, Ryohei Kuga, Yusuke Sugano, Yasuyuki Matsushita, “Deep Learning for Multimodal Data Fusion”, In Michael Ying Yang, Bodo Rosenhahn, Vittorio Murino (Eds.), Multimodal Scene Understanding, Academic Press, 2019.
  2. Yoichi Sato, Yusuke Sugano, Akihiro Sugimoto, Yoshinori Kuno, Hideki Koike, “Sensing and Controlling Human Gaze in Daily Living Space for Human-Harmonized Information Environments”, In Toyoaki Nishida (Ed.), Human-Harmonized Information Technology, Volume 1, Springer, 2016.

Invited Talks

  1. Yusuke Sugano, “Appearance-based Gaze Estimation: What We Have Done and What We Should Do”, The 1st Workshop on Gaze Estimation and Prediction in the Wild (GAZE 2019) (in conjunction with ICCV 2019), October 2019.
  2. Yusuke Sugano, “Appearance-based Gaze Estimation for Real-World Eye Tracking Applications”, The International Workshop on Frontiers of Computer Vision (IW-FCV 2019), February 2019.
  3. Yusuke Sugano, “Learning-based Gaze Estimation towards Attention Sensing in the Wild”, International Workshop on Attention/Intention Understanding (in conjunction with ACCV 2018), December 2018.
  4. Yusuke Sugano, “Appearance-based Gaze Estimation for Daily-life Unconstrained Attention Sensing”, Active Vision, Attention, and Learning (in conjunction with ICDL-Epirob 2018), September 2018.
  5. Yusuke Sugano, “Appearance-based Gaze Estimation from Ubiquitous Cameras”, Half Day Workshop on Wearable MultiMedia (in conjunction with ICMR 2017), June 2017.
  6. Yusuke Sugano, “Vision-based Gaze and Attention Estimation for HCI Applications”, Asian CHI Symposium: Emerging HCI Research Collection (in conjunction with CHI 2017), May 2017.