Publications

[Google Scholar] [DBLP]

Journal Papers

  1. Atsushi Takada, Wataru Kawabe, Yusuke Sugano, “Example-Based Conditioning for Text-to-Image Generative Models”, IEEE Access, Volume 12, pp. 162191-162203, October 2024.
  2. Mingtao Yue, Tomomi Sayuda, Miles Pennington, Yusuke Sugano, “Evaluating User Experience and Data Quality in Gamified Data Collection for Appearance-Based Gaze Estimation”, International Journal of Human–Computer Interaction, pp. 1-17, September 2024.
  3. Wataru Kawabe, Yuri Nakao, Akihisa Shitara, Yusuke Sugano, “Technical Understanding from Interactive Machine Learning Experience: a Study Through a Public Event for Science Museum Visitors”, Interacting with Computers, March 2024.
  4. Wataru Kawabe, Yusuke Sugano, “Image-to-Text Translation for Interactive Image Recognition: A Comparative User Study with Non-expert Users”, Journal of Information Processing, Volume 32, pp. 358-368, 2024.
  5. Tomoya Sato, Yusuke Sugano, Yoichi Sato, “Direction-of-Arrival Estimation for Mobile Agents Utilizing the Relationship Between Agent’s Trajectory and Binaural Audio”, IEEE Access, Volume 12, pp. 75508-75519, 2024.
  6. Hiroaki Minoura, Tsubasa Hirakawa, Yusuke Sugano, Takayoshi Yamashita, Hironobu Fujiyoshi, “Utilizing Human Social Norms for Multimodal Trajectory Forecasting via Group-Based Forecasting Module”, IEEE Transactions on Intelligent Vehicles, Volume 8, pp. 836-850, January 2023.
  7. Tianyi Liu, Yusuke Sugano, “Interactive Machine Learning on Edge Devices With User-in-the-Loop Sample Recommendation”, IEEE Access, Volume 10, pp. 107346-107360, 2022.
  8. Tomoya Sato, Yusuke Sugano, Yoichi Sato, “Self-Supervised Learning for Audio-Visual Relationships of Videos With Stereo Sounds”, IEEE Access, Volume 10, pp. 94273-94284, 2022.
  9. Hiroaki Santo, Michael Waechter, Wen-Yan Lin, Yusuke Sugano, Yasuyuki Matsushita, “Light Structure from Pin Motion: Geometric Point Light Source Calibration”, International Journal of Computer Vision, Volume 128, pp. 1889-1912, July 2020.
  10. Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, Yasuyuki Matsushita, “Deep Photometric Stereo Networks for Determining Surface Normal and Reflectances”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020 (early access).
  11. Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, “MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 41, pp. 162-175, 2019.
  12. Marc Tonsen, Julian Steil, Yusuke Sugano, Andreas Bulling, “InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation”, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), Volume 1, September 2017.
  13. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, Hideki Koike, “Appearance-Based Gaze Estimation with Online Calibration from Mouse Operations”, IEEE Transactions on Human-Machine Systems, Volume 45, pp. 750-760, December 2015.
  14. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Gaze Estimation From Eye Appearance: A Head Pose-Free Method via Eye Image Synthesis”, IEEE Transactions on Image Processing, Volume 24, pp. 3680-3693, November 2015.
  15. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Adaptive Linear Regression for Appearance-Based Gaze Estimation”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 36, pp. 2033-2046, October 2014.
  16. Feng Lu, Takahiro Okabe, Yusuke Sugano, Yoichi Sato, “Learning gaze biases with head motion for head pose-free gaze estimation”, Image and Vision Computing, Volume 32, pp. 169-179, March 2014.
  17. Yusuke Sugano, Yasunori Ozaki, Hiroshi Kasai, Keisuke Ogaki, Yoichi Sato, “Image preference estimation with a data-driven approach: A comparative study between gaze and image features”, Journal of Eye Movement Research, Volume 7, pp. 1-9, 2014.
  18. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Graph-Based Joint Clustering of Fixations and Visual Entities”, ACM Transactions on Applied Perception, Volume 10, pp. 1-16, May 2013.
  19. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Appearance-Based Gaze Estimation Using Visual Saliency”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 35, pp. 329-341, February 2013.
  20. Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, “Head direction estimation from low resolution images with scene adaptation”, Computer Vision and Image Understanding, Volume 117, pp. 1502-1511, 2013.
  21. Yusuke Sugano, Yoichi Sato, “Person-independent monocular tracking of face and facial actions with multilinear models”, Lecture Notes in Computer Science, Volume 4778, pp. 58-70, October 2007.

Conference Papers

  1. Wataru Kawabe, Yusuke Sugano, “A Multimodal LLM-based Assistant for User-Centric Interactive Machine Learning”, SIGGRAPH Asia Posters, pp. 7:1-7:2, 2024.
  2. Yoichiro Hisadome, Tianyi Wu, Jiawei Qin, Yusuke Sugano, “Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation”, IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 5973-5982, 2024.
  3. Lijin Yang, Yifei Huang, Yusuke Sugano, Yoichi Sato, “Interact before Align: Leveraging Cross-Modal Knowledge for Domain Adaptive Action Recognition”, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14702-14712, 2022.
  4. Jiawei Qin, Takuru Shimoyama, Yusuke Sugano, “Learning-by-Novel-View-Synthesis for Full-Face Appearance-Based 3D Gaze Estimation”, IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 4977-4987, 2022.
  5. Tianyi Wu, Yusuke Sugano, “Learning Video-Independent Eye Contact Segmentation from In-the-Wild Videos”, Proceedings of the 16th Asian Conference on Computer Vision (ACCV 2022), pp. 52-70, 2022.
  6. Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina González, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jáchym Kolár, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbeláez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard A. Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik, “Ego4D: Around the World in 3,000 Hours of Egocentric Video”, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18973-18990, 2022.
  7. Lijin Yang, Yifei Huang, Yusuke Sugano, Yoichi Sato, “Stacked Temporal Attention: Improving First-person Action Recognition by Emphasizing Discriminative Clips”, Proceedings of the 32nd British Machine Vision Conference (BMVC 2021), p. 240, 2021.
  8. Yuri Nakao, Yusuke Sugano, “Use of Machine Learning by Non-Expert DHH People: Technological Understanding and Sound Perception”, Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society, October 2020.
  9. Yifei Huang, Yusuke Sugano, Yoichi Sato, “Improving Action Segmentation via Graph-Based Temporal Reasoning”, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14021-14031, June 2020.
  10. Tatsuya Ishibashi, Yuri Nakao, Yusuke Sugano, “Investigating audio data visualization for interactive sound recognition”, Proceedings of the 25th International Conference on Intelligent User Interfaces, March 2020.
  11. Xucong Zhang, Yusuke Sugano, Andreas Bulling, Otmar Hilliges, “Learning-based Region Selection for End-to-End Gaze Estimation”, Proceedings of the 31st British Machine Vision Conference (BMVC 2020), 2020.
  12. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications”, Proceedings of the 37th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2019), Paper 416, 2019.
  13. Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, Yasuyuki Matsushita, “Deep Photometric Stereo Network”, Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW 2017), pp. 501-509, January 2018.
  14. Ryohei Kuga, Asako Kanezaki, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Multi-task Learning Using Multi-modal Encoder-Decoder Networks with Shared Skip Connections”, Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW 2017), pp. 403-411, January 2018.
  15. Yutaro Miyauchi, Yusuke Sugano, Yasuyuki Matsushita, “Shape-Conditioned Image Generation by Learning Latent Appearance Representation from Unpaired Data”, Proceedings of the 14th Asian Conference on Computer Vision (ACCV 2018), pp. 438-453, 2018.
  16. Julian Steil, Philipp M. Müller, Yusuke Sugano, Andreas Bulling, “Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors”, Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2018), 2018.
  17. Tatsuya Ishibashi, Yusuke Sugano, Yasuyuki Matsushita, “Gaze-guided Image Classification for Reflecting Perceptual Class Ambiguity”, Adjunct Proceedings of the 31st ACM Symposium on User Interface Software and Technology (UIST 2018 Posters), pp. 26-28, 2018.
  18. Arif Khan, Ingmar Steiner, Yusuke Sugano, Andreas Bulling, Ross MacDonald, “A Multimodal Corpus of Expert Gaze and Behavior during Phonetic Segmentation Tasks”, Proceedings of the 11th Language Resources and Evaluation Conference (LREC 2018), 2018.
  19. Keita Higuchi, Soichiro Matsuda, Rie Kamikubo, Takuya Enomoto, Yusuke Sugano, Junichi Yamamoto, Yoichi Sato, “Visualizing Gaze Direction to Support Video Coding of Social Attention for Children with Autism Spectrum Disorder”, Proceedings of the 23rd International Conference on Intelligent User Interfaces (IUI 2018), pp. 571-582, 2018.
  20. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Revisiting data normalization for appearance-based gaze estimation”, Proceedings of the 10th ACM International Symposium on Eye Tracking Research & Applications (ETRA 2018), pp. 12:1-12:9, 2018.
  21. Hiroaki Santo, Michael Waechter, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita, “Light Structure from Pin Motion: Simple and Accurate Point Light Calibration for Physics-Based Modeling”, Proceedings of the European Conference on Computer Vision (ECCV 2018), pp. 3-19, 2018.
  22. Xucong Zhang, Michael Xuelin Huang, Yusuke Sugano, Andreas Bulling, “Training Person-Specific Gaze Estimators from User Interactions with Multiple Devices”, Proceedings of the 36th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2018), Paper 624, 2018.
  23. Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Everyday eye contact detection using unsupervised gaze target discovery”, Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST 2017), pp. 193-203, October 2017.
  24. Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, “It’s Written All over Your Face: Full-Face Appearance-Based Gaze Estimation”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 2299-2308, August 2017.
  25. Michaela Klauck, Yusuke Sugano, Andreas Bulling, “Noticeable or distractive? A design space for gaze-contingent user interface notifications”, Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2017), pp. 1779-1786, May 2017.
  26. Pingmei Xu, Yusuke Sugano, Andreas Bulling, “Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces”, Proceedings of the 34th Annual CHI Conference on Human Factors in Computing Systems (CHI 2016), pp. 3299-3310, 2016.
  27. Marc Tonsen, Xucong Zhang, Yusuke Sugano, Andreas Bulling, “Labelled pupils in the wild: A dataset for studying pupil detection in unconstrained environments”, Proceedings of the 2016 ACM Symposium on Eye Tracking Research & Applications (ETRA 2016), pp. 139-142, 2016.
  28. Yusuke Sugano, Xucong Zhang, Andreas Bulling, “AggreGaze: Collective Estimation of Audience Attention on Public Displays”, Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology (UIST 2016), pp. 821-831, 2016.
  29. Mohsen Mansouryar, Julian Steil, Yusuke Sugano, Andreas Bulling, “3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers”, Proceedings of the 2016 ACM Symposium on Eye Tracking Research & Applications (ETRA 2016), pp. 197-200, 2016.
  30. Yusuke Sugano, Andreas Bulling, “Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency”, Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (UIST 2015), pp. 363-372, 2015.
  31. Erroll Wood, Tadas Baltrusaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, Andreas Bulling, “Rendering of Eyes for Eye-Shape Registration and Gaze Estimation”, 2015 IEEE International Conference on Computer Vision (ICCV), pp. 3756-3764, 2015.
  32. Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, “Appearance-Based Gaze Estimation in the Wild”, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4511-4520, 2015.
  33. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Learning-by-synthesis for appearance-based 3D gaze estimation”, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1821-1828, September 2014.
  34. Binbin Ye, Yusuke Sugano, Yoichi Sato, “Influence of stimulus and viewing task types on a learning-based visual saliency model”, Proceedings of the Eye Tracking Research and Applications Symposium (ETRA 2014), pp. 271-274, 2014.
  35. Isarun Chamveha, Yusuke Sugano, Yoichi Sato, Akihiro Sugimoto, “Social Group Discovery from Surveillance Videos: A Data-Driven Approach with Attention-Based Cues”, Proceedings of the British Machine Vision Conference (BMVC 2013), 2013.
  36. Yusuke Sugano, Kazuma Harada, Yoichi Sato, “Touch-consistent perspective for direct interaction under motion parallax”, Proceedings of the ACM Conference on Interactive Tabletops and Surfaces (ITS 2012), pp. 339-342, 2012.
  37. Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Incorporating visual field characteristics into a saliency map”, Proceedings of the Eye Tracking Research and Applications Symposium (ETRA 2012), pp. 333-336, 2012.
  38. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Head Pose-Free Appearance-Based Gaze Sensing via Eye Image Synthesis”, 21st International Conference on Pattern Recognition (ICPR 2012), pp. 1008-1011, 2012.
  39. Keisuke Ogaki, Kris M. Kitani, Yusuke Sugano, Yoichi Sato, “Coupling eye-motion and ego-motion features for first-person activity recognition”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 1-7, 2012.
  40. Feng Lu, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, “Inferring Human Gaze from Appearance via Adaptive Linear Regression”, 2011 IEEE International Conference on Computer Vision (ICCV), pp. 153-160, 2011.
  41. Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Can Saliency Map Models Predict Human Egocentric Visual Attention?”, Computer Vision - ACCV 2010 Workshops, Part I, Volume 6468, pp. 420-429, 2011.
  42. Kentaro Yamada, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, Kazuo Hiraki, “Attention Prediction in Egocentric Video Using Motion and Visual Saliency”, Advances in Image and Video Technology, Part I, Volume 7087, pp. 277 ff., 2011.
  43. Isarun Chamveha, Yusuke Sugano, Daisuke Sugimura, Teera Siriteerakul, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, “Appearance-Based Head Pose Estimation with Scene-Specific Adaptation”, 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 1713-1720, 2011.
  44. Feng Lu, Takahiro Okabe, Yusuke Sugano, Yoichi Sato, “A Head Pose-free Approach for Appearance-based Gaze Estimation”, Proceedings of the British Machine Vision Conference (BMVC 2011), pp. 1-11, 2011.
  45. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, “Calibration-free gaze sensing using saliency maps”, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2667-2674, 2010.
  46. Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato, Hideki Koike, “An Incremental Learning Method for Unconstrained Gaze Estimation”, Computer Vision - ECCV 2008, Part III, Volume 5304, pp. 656 ff., 2008.
  47. Hiromichi Hashizume, Ayumu Kaneko, Yusuke Sugano, Koji Yatani, Masanori Sugimoto, “Fast and accurate positioning technique using ultrasonic phase accordance method”, Proceedings of TENCON 2005 - IEEE Region 10 Conference, pp. 826 ff., 2006.