The 7th International Workshop on Eye and Gaze in Computer Vision (GAZE 2026) at CVPR 2026 aims to encourage and highlight novel strategies for eye gaze estimation and prediction. The workshop topics include (but are not limited to):
- Foundation models and large-scale training for eye and gaze analysis.
- Gaze in egocentric vision, physical AI learning, and human–robot interaction.
- Understanding gaze in social interactions, human activities, and telepresence scenarios involving real or virtual agents and entities.
- Gaze estimation algorithms, including 3D gaze estimation, point-of-regard estimation, gaze following, gaze zone classification, etc.
- Detection and segmentation of the eye region, such as eye detection, pupil detection, eye-region landmark localization, etc.
- Human eye modeling and generation, including synthesis and animation from images or videos, etc.
- Eye gaze data collection, generation, and analysis, such as scanpath generation, etc.
- Applications of gaze tracking and analysis in real-world scenarios, including VR/AR, mobile devices, PCs, etc.
Call for Contributions
Full Workshop Papers
Submission: We invite authors to submit unpublished papers (8-page CVPR format) to our workshop, to be presented at a poster session upon acceptance. All submissions will go through a double-blind review process. All contributions must be submitted (along with supplementary materials, if any) on OpenReview.
Accepted papers will be published in the official CVPR Workshops proceedings and the Computer Vision Foundation (CVF) Open Access archive.
Note: Authors of previously rejected main-conference submissions are also welcome to submit their work to our workshop. When doing so, you must include the previous reviewers' comments (named previous_reviews.pdf) and a letter of changes (named letter_of_changes.pdf) in your supplementary materials to clearly demonstrate how those comments have been addressed.
Invited Keynote Speakers
James M. Rehg (University of Illinois Urbana-Champaign)
James M. Rehg (pronounced "ray") is a Founder Professor of Computer Science and Industrial and Enterprise Systems Engineering at the University of Illinois Urbana-Champaign. Previously, he was a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he co-directed the Center for Health Analytics and Informatics (CHAI). He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995 to 2001, where he managed the computer vision research group. He received an NSF CAREER award in 2001 and a Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005, BMVC 2010, Mobihealth 2014, and Face and Gesture 2015, as well as a Distinguished Paper Award from ACM IMWUT and a Method of the Year award from the journal Nature Methods. Dr. Rehg served as the General co-Chair for CVPR 2009 and the Program co-Chair for CVPR 2017. He has authored more than 200 peer-reviewed scientific papers and holds 26 issued US patents.
Ken Pfeuffer (Aarhus University)
Ken Pfeuffer is a researcher, designer, and professor for future user interfaces at Aarhus University, where he leads the Extended Interaction group, which specializes in Human-Computer Interaction (HCI) and Spatial Computing for Extended Reality (XR). He has published over 75 scientific papers and received awards at ACM CHI, UIST, and SUI, including the ACM SIGCHI Special Recognition Award (2025). He is affiliated with the AI Danish Pioneer Center and COGAIN, and regularly serves on program committees for HCI and XR research conferences and journals. He earned his PhD from Lancaster University (UK), completed a postdoc at Bundeswehr University (Germany), and interned at Microsoft and Google Research US. He has pioneered interaction paradigms such as Gaze+Pinch and Direct-Indirect gestures, shaping 3D interfaces in emerging spatial computing technology.
University of Birmingham
NVIDIA Research
Delft University of Technology
ETH Zürich
EPFL & Idiap Research Institute
NVIDIA Research
Microsoft
EPFL & Idiap Research Institute
University of Birmingham
NVIDIA Research