Publications

[Google Scholar] [DBLP]

Peer-Reviewed Journal Articles

  1. Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, Yasuyuki Matsushita, “Deep Photometric Stereo Networks for Determining Surface Normal and Reflectances”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), June 2020 (online first).
  2. Hiroaki Santo, Michael Waechter, Wen-Yan Lin, Yusuke Sugano, Yasuyuki Matsushita, “Light Structure from Pin Motion: Geometric Point Light Source Calibration”, International Journal of Computer Vision (IJCV), Volume 128, pp. 1889–1912, March 2020.
  3. Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, “MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Volume 41, Issue 1, pp. 162-175, November 2017.
  4. Marc Tonsen, Julian Steil, Yusuke Sugano, Andreas Bulling, “InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation”, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), Volume 1, Issue 3, September 2017.
  5. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Gaze Estimation from Eye Appearance: A Head Pose-Free Method via Eye Image Synthesis”, IEEE Transactions on Image Processing (TIP), Volume 24, Issue 11, pp. 3680-3693, June 2015.
  6. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, Hideki Koike, “Appearance-based Gaze Estimation with Online Calibration from Mouse Operations”, IEEE Transactions on Human-Machine Systems (THMS), Volume 45, Issue 6, pp. 750-760, February 2015.
  7. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Adaptive Linear Regression for Appearance-Based Gaze Estimation”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Volume 36, Issue 10, pp. 2033-2046, October 2014.
  8. Yusuke Sugano, Yasunori Ozaki, Hiroshi Kasai, Keisuke Ogaki, Yoichi Sato, “Image preference estimation with a data-driven approach: A comparative study between gaze and image features”, Journal of Eye Movement Research, Volume 7, Issue 3, No. 5, pp. 1-9, March 2014.
  9. Feng Lu, Takahiro Okabe, Yusuke Sugano, Yoichi Sato, “Learning gaze biases with head motion for head pose-free gaze estimation”, Image and Vision Computing, Volume 32, Issue 3, pp. 169-179, March 2014.
  10. Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, “Head Direction Estimation from Low Resolution Images with Scene Adaptation”, Computer Vision and Image Understanding, Volume 117, Issue 10, pp. 1502-1511, October 2013.
  11. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Graph-based Joint Clustering of Fixations and Visual Entities”, ACM Transactions on Applied Perception (TAP), Volume 10, Issue 2, Article 10, pp. 1-16, June 2013.
  12. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Appearance-based Gaze Estimation using Visual Saliency”, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Volume 35, Issue 2, pp. 329-341, February 2013.
  13. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Gaze Estimation via Self-Calibration Using Visual Saliency”, IEICE Transactions on Information and Systems (Japanese Edition), Vol. J94-D, No. 8, August 2011.
  14. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, Hideki Koike, “Gaze Estimation under Free Head Pose via Incremental Learning Using Mouse Operations”, IEICE Transactions on Information and Systems (Japanese Edition), Vol. J93-D, No. 8, pp. 1386-1396, August 2010.
  15. Yusuke Sugano, Yoichi Sato, “Monocular Estimation of 3D Head Pose with Facial Deformation”, IPSJ Transactions on Computer Vision and Image Media, Vol. 1, No. 2 (CVIM 22), pp. 41-49, July 2008.
  16. Kenji Oka, Yusuke Sugano, Yoichi Sato, “Real-Time Head Pose Estimation with Automatic Construction of a Head Deformation Model”, IPSJ Transactions on Computer Vision and Image Media, Vol. 47, No. SIG10 (CVIM 15), pp. 185-194, July 2006.

Peer-Reviewed International Conference Papers

  1. Yuri Nakao, Yusuke Sugano, “Use of Machine Learning by Non-Expert DHH People: Technological Understanding and Sound Perception”, in Proc. 11th Nordic Conference on Human-Computer Interaction (NordiCHI 2020).
  2. Yifei Huang, Yusuke Sugano, Yoichi Sato, “Improving Action Segmentation via Graph-Based Temporal Reasoning”, in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020).
  3. Xucong Zhang, Yusuke Sugano, Andreas Bulling, Otmar Hilliges, “Learning-based Region Selection for End-to-End Gaze Estimation”, in Proc. 31st British Machine Vision Conference (BMVC 2020).
  4. Tatsuya Ishibashi, Yuri Nakao, Yusuke Sugano, “Investigating audio data visualization for interactive sound recognition”, in Proc. 25th International Conference on Intelligent User Interfaces (IUI 2020).
  5. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications”, in Proc. 37th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2019).
  6. Yutaro Miyauchi, Yusuke Sugano, Yasuyuki Matsushita, “Shape-conditioned Image Generation by Learning Latent Appearance Representation from Unpaired Data”, in Proc. 14th Asian Conference on Computer Vision (ACCV 2018).
  7. Tatsuya Ishibashi, Yusuke Sugano, Yasuyuki Matsushita, “Gaze-guided Image Classification for Reflecting Perceptual Class Ambiguity”, in Adjunct Proc. 31st ACM Symposium on User Interface Software and Technology (UIST 2018 Posters).
  8. Hiroaki Santo, Michael Waechter, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Light Structure from Pin Motion: Simple and Accurate Point Light Calibration for Physics-based Modeling”, in Proc. European Conference on Computer Vision (ECCV 2018).
  9. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Training Person-Specific Gaze Estimators from User Interactions with Multiple Devices”, in Proc. 36th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2018).
  10. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Revisiting data normalization for appearance-based gaze estimation”, in Proc. 10th ACM International Symposium on Eye Tracking Research & Applications (ETRA 2018).
  11. Keita Higuchi, Soichiro Matsuda, Rie Kamikubo, Takuya Enomoto, Yusuke Sugano, Junichi Yamamoto, Yoichi Sato, “Visualizing Gaze Direction to Support Video Coding of Social Attention for Children with Autism Spectrum Disorder”, in Proc. 23rd International Conference on Intelligent User Interfaces (IUI 2018).
  12. Arif Khan, Ingmar Steiner, Yusuke Sugano, Andreas Bulling, Ross MacDonald, “A Multimodal Corpus of Expert Gaze and Behavior during Phonetic Segmentation Tasks”, in Proc. 11th edition of the Language Resources and Evaluation Conference (LREC 2018).
  13. Julian Steil, Philipp M. Müller, Yusuke Sugano, Andreas Bulling, “Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors”, in Proc. 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2018). (Best paper)
  14. Ryohei Kuga, Asako Kanezaki, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Multi-task Learning Using Multi-modal Encoder-Decoder Networks with Shared Skip Connections”, in Proc. IEEE/ISPRS 4th Joint Workshop on Multi-Sensor Fusion for Dynamic Scene Understanding (in conjunction with ICCV 2017).
  15. Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, Yasuyuki Matsushita, “Deep Photometric Stereo Network”, in Proc. 1st International Workshop on Physics Based Vision meets Deep Learning (in conjunction with ICCV 2017).
  16. Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, “It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation”, in Proc. 1st International Workshop on Deep Affective Learning and Context Modeling (in conjunction with CVPR 2017).
  17. Michaela Klauck, Yusuke Sugano, Andreas Bulling, “Noticeable or Distractive?: A Design Space for Gaze-Contingent User Interface Notifications”, in Proc. 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17).
  18. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery”, in Proc. 30th ACM Symposium on User Interface Software and Technology (UIST 2017). (Best paper honorable mention)
  19. Yusuke Sugano, Xucong Zhang, Andreas Bulling, “AggreGaze: Collective Estimation of Audience Attention on Public Displays”, in Proc. 29th ACM Symposium on User Interface Software and Technology (UIST 2016). (Best paper honorable mention)
  20. Pingmei Xu, Yusuke Sugano, Andreas Bulling, “Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces”, in Proc. 34th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2016). (Best paper honorable mention)
  21. Mohsen Mansouryar, Julian Steil, Yusuke Sugano, Andreas Bulling, “3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers”, in Proc. 9th ACM International Symposium on Eye Tracking Research & Applications (ETRA 2016).
  22. Marc Tonsen, Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Labelled pupils in the wild: A dataset for studying pupil detection in unconstrained environments”, in Proc. 9th ACM International Symposium on Eye Tracking Research & Applications (ETRA 2016).
  23. Erroll Wood, Tadas Baltrušaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, Andreas Bulling, “Rendering of Eyes for Eye-Shape Registration and Gaze Estimation”, in Proc. IEEE International Conference on Computer Vision (ICCV 2015).
  24. Yusuke Sugano, Andreas Bulling, “Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency”, in Proc. 28th ACM Symposium on User Interface Software and Technology (UIST 2015).
  25. Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, “Appearance-based Gaze Estimation in the Wild”, in Proc. 28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015).
  26. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Learning-by-Synthesis for Appearance-based 3D Gaze Estimation”, in Proc. 27th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014).
  27. Binbin Ye, Yusuke Sugano, Yoichi Sato, “Influence of Stimulus and Viewing Task Types on a Learning-based Visual Saliency Model”, in Proc. 2014 ACM Symposium on Eye Tracking Research and Applications (ETRA 2014).
  28. Isarun Chamveha, Yusuke Sugano, Yoichi Sato, Akihiro Sugimoto, “Social Group Discovery from Surveillance Videos: A Data-Driven Approach with Attention-Based Cues”, in Proc. 24th British Machine Vision Conference (BMVC 2013).
  29. Yusuke Sugano, Hiroshi Kasai, Keisuke Ogaki, Yoichi Sato, “Image Preference Estimation from Eye Movements with A Data-driven Approach”, in Proc. 3rd International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction (PETMEI 2013).
  30. Yusuke Sugano, Kazuma Harada, Yoichi Sato, “Touch-consistent perspective for direct interaction under motion parallax”, in Proc. ACM Interactive Tabletops and Surfaces Conference (ITS 2012).
  31. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Head Pose-Free Appearance-Based Gaze Sensing Via Eye Image Synthesis”, in Proc. 21st International Conference on Pattern Recognition (ICPR 2012).
  32. Keisuke Ogaki, Kris Kitani, Yusuke Sugano, Yoichi Sato, “Coupling Eye-Motion and Ego-Motion Features for First-Person Activity Recognition”, in Proc. IEEE Workshop on Egocentric Vision (in conjunction with CVPR 2012).
  33. Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Incorporating Visual Field Characteristics into a Saliency Map”, in Proc. 7th International Symposium on Eye Tracking Research & Applications (ETRA 2012).
  34. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Inferring Human Gaze from Appearance via Adaptive Linear Regression”, in Proc. IEEE International Conference on Computer Vision (ICCV 2011).
  35. Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, “Appearance-Based Head Pose Estimation with Scene-Specific Adaptation”, in Proc. IEEE Workshop on Visual Surveillance (VS 2011).
  36. Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Attention Prediction in Egocentric Video using Motion and Visual Saliency”, in Proc. Pacific-Rim Symposium on Image and Video Technology (PSIVT 2011).
  37. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “A Head Pose-free Approach for Appearance-based Gaze Estimation”, in Proc. British Machine Vision Conference (BMVC 2011).
  38. Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Can Saliency Map Models Predict Human Egocentric Visual Attention?”, in Proc. International Workshop on Gaze Sensing and Interactions, 2010.
  39. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Calibration-free gaze sensing using saliency maps”, in Proc. 23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010).
  40. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, Hideki Koike, “An Incremental Learning Method for Unconstrained Gaze Estimation”, in Proc. European Conference on Computer Vision (ECCV 2008).
  41. Yusuke Sugano, Yoichi Sato, “Person-Independent Monocular Tracking of Face and Facial Actions with Multilinear Models”, in Proc. IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2007).
  42. Hiromichi Hashizume, Ayumu Kaneko, Yusuke Sugano, Koji Yatani, Masanori Sugimoto, “Fast and Accurate Positioning Technique Using Ultrasonic Phase Accordance Method”, in Proc. IEEE Region 10 Conference (TENCON 2005).

Book Chapters

  1. Asako Kanezaki, Ryohei Kuga, Yusuke Sugano, Yasuyuki Matsushita, “Deep Learning for Multimodal Data Fusion”, In Michael Ying Yang, Bodo Rosenhahn, Vittorio Murino (Eds.), Multimodal Scene Understanding, Academic Press, 2019.
  2. Yoichi Sato, Yusuke Sugano, Akihiro Sugimoto, Yoshinori Kuno, Hideki Koike, “Sensing and Controlling Human Gaze in Daily Living Space for Human-Harmonized Information Environments”, In Toyoaki Nishida (Ed.), Human-Harmonized Information Technology, Volume 1, Springer, 2016.

Others (Non-Refereed Conferences, Domestic Conferences, Workshops, etc.)

  1. 坂下晴哉, Yusuke Sugano, Yasuyuki Matsushita, “An In-Vehicle Video Dataset for Semantic Segmentation in Diverse Road Environments”, 22nd Meeting on Image Recognition and Understanding (MIRU 2019), 2019.
  2. 中村航太, Michael Waechter, Yusuke Sugano, Yasuyuki Matsushita, “Outlier Detection and Reduction in Texturing of 3D Reconstructions”, 22nd Meeting on Image Recognition and Understanding (MIRU 2019), 2019.
  3. 竹村朋華, Yusuke Sugano, Yasuyuki Matsushita, “Accelerating Singular Vector-Based Clustering Using Randomized Algorithms”, 22nd Meeting on Image Recognition and Understanding (MIRU 2019), 2019.
  4. Yutaro Miyauchi, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Shape-Constrained Adversarial Image Generation Networks via Separate Reconstruction of Surface Normals and Appearance”, 21st Meeting on Image Recognition and Understanding (MIRU 2018), 2018.
  5. 村上亮介, Michael Waechter, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “An Investigation of the Effectiveness of Extended Lambertian Models in Photometric Stereo”, 21st Meeting on Image Recognition and Understanding (MIRU 2018), 2018.
  6. Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, Yasuyuki Matsushita, “A Study on Photometric Stereo Using Deep Learning”, 21st Meeting on Image Recognition and Understanding (MIRU 2018), 2018.
  7. 中村拓紀, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Job Scheduling Using Deep Reinforcement Learning with Continuous Action Spaces”, IEEJ Technical Meeting on Information Systems, 2018.
  8. 水野喬雄, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Semantic Segmentation Based on Semi-Supervised Learning Using Optical Flow”, IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), 2018.
  9. Tatsuya Ishibashi, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Image Recognition Reflecting Ambiguity Based on Gaze-Guided Image Annotation”, IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), 2018.
  10. 岩田大知, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Extending Mollifier-Based Non-Convex Optimization Methods to Constrained Problems”, IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), 2018.
  11. 清水育, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Semantic Segmentation Using Label Likelihoods as Global Features”, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), 2018.
  12. 小泉成司, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Server Anomaly Detection via Sparse Structure Learning”, IEICE Technical Committee on Information and Communication Management, 2017.
  13. 村上亮介, Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Estimating Near Point Light Source Positions from Shadows Using a 3D Calibration Pattern”, IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), 2017.
  14. Yutaro Miyauchi, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Semantic Segmentation by Learning Adaptive Dropout Spaces”, IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), 2017.
  15. Arif Khan, Ingmar Steiner, Ross Macdonald, Yusuke Sugano, Andreas Bulling, “Scene viewing and gaze analysis during phonetic segmentation tasks”, European Conference on Eye Movements, August 2015.
  16. Yasunori Ozaki, Yusuke Sugano, Yoichi Sato, “Image Preference Estimation Based on Gaze Information and Image Features”, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), March 2014.
  17. Binbin Ye, Yusuke Sugano, Yoichi Sato, “Investigating individual differences in learning-based visual saliency models”, IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), September 2013.
  18. Hiroshi Kasai, Keisuke Ogaki, Yusuke Sugano, Yoichi Sato, “Classification of Image Preference Using Gaze Information”, ITE Technical Group on Human Information, March 2013.
  19. Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “A Learning-Based Visual Saliency Model Incorporating Human Visual Field Characteristics”, Meeting on Image Recognition and Understanding (MIRU 2012), August 2012.
  20. Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Construction of a Visual Saliency Model Incorporating Visual Field Characteristics”, Vision Society of Japan 2012 Winter Meeting, January 2012.
  21. Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, “Appearance-Based Head Pose Estimation with Automatic Scene Adaptation”, IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), September 2011.
  22. Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Performance Evaluation of Visual Saliency Map Models for Egocentric Vision”, IEICE Technical Committee on Human Information Processing (HIP), February 2011.
  23. Keisuke Ogaki, Kris Kitani, Yusuke Sugano, Yoichi Sato, “First-Person Activity Recognition from Egocentric Video Using Ego-Motion and Eye Motion”, Meeting on Image Recognition and Understanding (MIRU 2012), August 2012.
  24. Kazuma Harada, Yusuke Sugano, Yoichi Sato, “Intuitive Multi-Touch Interaction Using Motion Parallax”, Interaction 2012, March 2012. (Interactive Paper Award)
  25. Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Visual Attention Estimation in Egocentric Vision Based on Saliency and Ego-Motion”, Meeting on Image Recognition and Understanding (MIRU 2011), July 2011.
  26. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Gaze Estimation Using Visual Saliency”, Meeting on Image Recognition and Understanding (MIRU 2010), July 2010.
  27. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, Hideki Koike, “Gaze Estimation via Incremental Learning Using Mouse Operation Data”, Meeting on Image Recognition and Understanding (MIRU 2009), July 2009.
  28. Yusuke Sugano, Yoichi Sato, “Monocular Estimation of 3D Head Pose with Facial Deformation”, Meeting on Image Recognition and Understanding (MIRU 2007), August 2007.
  29. Yusuke Sugano, Yoichi Sato, “Facial Shape Estimation against Inter- and Intra-Personal Variations for Real-Time Head Pose Estimation Tolerant of Facial Expression Changes”, IPSJ SIG Technical Report on Computer Vision and Image Media (CVIM), 2006-CVIM-156-21, pp. 179-186, November 2006.
  30. Yusuke Sugano, Ayumu Kaneko, Koji Yatani, Masanori Sugimoto, Hiromichi Hashizume, “Communication and Ranging Techniques for Relative Position Sensing Using Ultrasonic Sensors”, 67th IPSJ National Convention, March 2005.
  31. Ayumu Kaneko, Yusuke Sugano, Koji Yatani, Masanori Sugimoto, Hiromichi Hashizume, “Design of a Relative Position Sensing Device Using Ultrasonic Sensors”, 67th IPSJ National Convention, March 2005.

Invited Talks

  1. Yusuke Sugano, “Appearance-based Gaze Estimation: What We Have Done and What We Should Do”, The 1st Workshop on Gaze Estimation and Prediction in the Wild (GAZE 2019) (in conjunction with ICCV 2019), October 2019.
  2. Yusuke Sugano, “Appearance-based Gaze Estimation for Real-World Eye Tracking Applications”, The International Workshop on Frontiers of Computer Vision (IW-FCV 2019), February 2019.
  3. Yusuke Sugano, “Learning-based Gaze Estimation towards Attention Sensing in the Wild”, International Workshop on Attention/Intention Understanding (in conjunction with ACCV 2018), December 2018.
  4. Yusuke Sugano, “Appearance-based Gaze Estimation for Daily-life Unconstrained Attention Sensing”, Active Vision, Attention, and Learning (in conjunction with ICDL-Epirob 2018), September 2018.
  5. Yusuke Sugano, “Machine Learning-Based Gaze Estimation and Its Real-World Applications”, 20th Workshop on Information-Based Induction Sciences (IBIS 2017), November 2017.
  6. Yusuke Sugano, “Appearance-based Gaze Estimation from Ubiquitous Cameras”, Half Day Workshop on Wearable MultiMedia (in conjunction with ICMR 2017), June 2017.
  7. Yusuke Sugano, “Vision-based Gaze and Attention Estimation for HCI Applications”, Asian CHI Symposium: Emerging HCI Research Collection (in conjunction with CHI 2017), May 2017.
  8. Yusuke Sugano, “Estimation of Gaze and Visual Attention via a Data-Driven Approach”, IEICE Technical Meeting on Computer Vision and Image Media, September 2013.