Image credit: Nano Banana Pro


Introduction

The 7th International Workshop on Eye and Gaze in Computer Vision (GAZE 2026) at CVPR 2026 aims to encourage and highlight novel strategies for eye gaze estimation and prediction. The workshop topics include (but are not limited to):

  • Foundation models and large-scale training for the eye and gaze.
  • Gaze in egocentric vision, physical AI learning, and human–robot interaction.
  • Understanding gaze in social interactions, human activities, and telepresence scenarios involving real or virtual agents and entities.
  • Gaze estimation algorithms, including 3D gaze estimation, point-of-regard estimation, gaze following, gaze zone classification, etc.
  • Detection and segmentation of the eye region, such as eye detection, pupil detection, eye-region landmark localization, etc.
  • Human eye modeling and generation, including synthesis and animation from images or videos, etc.
  • Eye gaze data collection, generation, and analysis, such as scanpath generation, etc.
  • Applications of gaze tracking and analysis in real-world scenarios, including VR/AR, mobile devices, PCs, etc.
We will host two invited speakers and, as in previous editions of the workshop, accept submissions of full unpublished papers. These papers will be peer-reviewed via a double-blind process, published in the official workshop proceedings, and presented at the workshop itself.


Call for Contributions


Full Workshop Papers

Submission: We invite authors to submit unpublished papers (8-page CVPR format) to our workshop, to be presented at a poster session upon acceptance. All submissions will go through a double-blind review process. All contributions must be submitted (along with supplementary materials, if any) via OpenReview.

Accepted papers will be published in the official CVPR Workshops proceedings and the Computer Vision Foundation (CVF) Open Access archive.

Note: Authors of previously rejected main-conference submissions are also welcome to submit their work to our workshop. When doing so, you must include the previous reviewers' comments (named previous_reviews.pdf) and a letter of changes (named letter_of_changes.pdf) in your supplementary materials to clearly demonstrate how the previous reviewers' comments have been addressed.



Important Dates


Paper Submission Deadline: March 7, 2026 (23:59 Pacific Time)
Notification to Authors: March 25, 2026
Camera-Ready Deadline: April 10, 2026


Invited Keynote Speakers


James M. Rehg
University of Illinois Urbana-Champaign
Biography

James M. Rehg (pronounced “ray”) is a Founder Professor of Computer Science and Industrial and Enterprise Systems Engineering at the University of Illinois Urbana-Champaign. Previously, he was a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he co-directed the Center for Health Analytics and Informatics (CHAI). He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995 to 2001, where he managed the computer vision research group. He received an NSF CAREER award in 2001 and a Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005, BMVC 2010, Mobihealth 2014, and Face and Gesture 2015, a Distinguished Paper Award from ACM IMWUT, and a Method of the Year award from the journal Nature Methods. Dr. Rehg served as General co-Chair for CVPR 2009 and Program co-Chair for CVPR 2017. He has authored more than 200 peer-reviewed scientific papers and holds 26 issued US patents.

Ken Pfeuffer
Aarhus University
Biography

Ken Pfeuffer is a researcher, designer, and professor of future user interfaces at Aarhus University, where he leads the Extended Interaction group, which specializes in Human-Computer Interaction (HCI) and Spatial Computing for Extended Reality (XR). He has published over 75 scientific papers and received awards at ACM CHI, UIST, and SUI, including the ACM SIGCHI Special Recognition Award (2025). He is affiliated with the AI Danish Pioneer Center and COGAIN, and regularly serves on program committees for HCI and XR research conferences and journals. He earned his PhD from Lancaster University (UK), completed a postdoc at Bundeswehr University (Germany), and interned at Microsoft and Google Research US. He has pioneered interaction paradigms such as Gaze+Pinch and Direct-Indirect gestures, shaping 3D interfaces in emerging spatial computing technology.

Organizers



Yihua Cheng
University of Birmingham
Seonwook Park
NVIDIA Research
Xucong Zhang
Delft University of Technology
Xi Wang
ETH Zürich
Hengfei Wang
EPFL & Idiap Research Institute
Michael Stengel
NVIDIA Research


David Wong
Microsoft
Jean-Marc Odobez
EPFL & Idiap Research Institute
Aleš Leonardis
University of Birmingham
Shalini De Mello
NVIDIA Research
Hyung Jin Chang
University of Birmingham


Workshop sponsored by: