Image credit: Daniel Hannah


Introduction

The 3rd International Workshop on Gaze Estimation and Prediction in the Wild (GAZE 2021) at CVPR 2021 aims to encourage and highlight novel strategies for eye gaze estimation and prediction with a focus on robustness and accuracy in extended parameter spaces, both spatially and temporally. This is expected to be achieved by applying novel neural network architectures, incorporating anatomical insights and constraints, introducing new and challenging datasets, and exploiting multi-modal training. Specifically, the workshop topics include (but are not limited to):

  • Reformulating eye detection, gaze estimation, and gaze prediction pipelines with deep networks.
  • Applying geometric and anatomical constraints into the training of (sparse or dense) deep networks.
  • Leveraging additional cues such as context from the face region and head pose information.
  • Developing adversarial methods to deal with conditions where current methods fail (illumination, appearance, etc.).
  • Exploring attention mechanisms to predict the point of regard.
  • Designing new accurate measures to account for rapid eye gaze movement.
  • Novel methods for temporal gaze estimation and prediction including Bayesian methods.
  • Integrating differentiable components into 3D gaze estimation frameworks.
  • Robust estimation from different data modalities such as RGB, depth, head pose, and eye region landmarks.
  • Generic gaze estimation methods for handling extreme head poses and gaze directions.
  • Using temporal information in eye tracking to provide consistent on-screen gaze estimation.
  • Personalization of gaze estimators with few-shot learning.
  • Semi-/weakly-/un-/self-supervised learning methods, domain adaptation methods, and other novel methods towards improved representation learning from eye/face region images or gaze target region images.
We will host three invited speakers and run two deep learning challenges on gaze estimation. We will also accept submissions of full unpublished papers, as in previous editions of the workshop. These papers will be peer-reviewed via a double-blind process, published in the official workshop proceedings, and presented at the workshop itself. More information will be provided as soon as possible.


Call for Contributions


Full Workshop Papers

Submission: We invite authors to submit unpublished papers (8-page CVPR format) to our workshop, to be presented at a poster session upon acceptance. All submissions will go through a double-blind review process. All contributions must be submitted (along with supplementary materials, if any) at this CMT link.

Accepted papers will be published in the official CVPR Workshops proceedings and the Computer Vision Foundation (CVF) Open Access archive.

Note: Authors of previously rejected main conference submissions are also welcome to submit their work to our workshop. When doing so, you must include the previous reviewers' comments (named previous_reviews.pdf) and a letter of changes (named letter_of_changes.pdf) in your supplementary materials to clearly show how the previous reviewers' comments were addressed.


GAZE 2021 Challenges

The GAZE 2021 Challenges are hosted on CodaLab, and can be found at:

More information on the respective challenges can be found on their pages.


Important Dates


ETH-XGaze & EVE Challenges Released: February 13, 2021
Paper Submission Deadline: March 29, 2021 (23:59 Pacific Time)
Paper Reviews Deadline: April 9, 2021 (tentative)
Notification to Authors: April 13, 2021 (tentative)
Camera-Ready Deadline: April 20, 2021
ETH-XGaze & EVE Challenges Closed: May 28, 2021 (23:59 UTC)
Finalized Workshop Program: Mid-May 2021 (tentative)


Organizers

Hyung Jin Chang
University of Birmingham
Xucong Zhang
ETH Zürich
Seonwook Park
Lunit Inc.
Shalini De Mello
NVIDIA Research


Qiang Ji
Rensselaer Polytechnic Institute
Otmar Hilliges
ETH Zürich
Aleš Leonardis
University of Birmingham


Workshop sponsored by: