Image credit: Nika_Akin

Monday Morning, 20th June 2022 (half-day)

Seven papers have been accepted to our GAZE 2022 workshop.
Congratulations to all authors!

This year, June 19 and 20 mark Juneteenth, a US holiday commemorating the end of slavery in the United States and one of special significance in the US South. We encourage attendees to learn more about Juneteenth and its historical context, and to join the city of New Orleans in celebrating the holiday. You can find more information about Juneteenth here: https://cvpr2022.thecvf.com/recognizing-juneteenth



Introduction

The 4th International Workshop on Gaze Estimation and Prediction in the Wild (GAZE 2022) at CVPR 2022 aims to encourage and highlight novel strategies for eye gaze estimation and prediction with a focus on robustness and accuracy in extended parameter spaces, both spatially and temporally. This is expected to be achieved by applying novel neural network architectures, incorporating anatomical insights and constraints, introducing new and challenging datasets, and exploiting multi-modal training. Specifically, the workshop topics include (but are not limited to):

  • Reformulating eye detection, gaze estimation, and gaze prediction pipelines with deep networks.
  • Incorporating geometric and anatomical constraints into the training of (sparse or dense) deep networks.
  • Leveraging additional cues such as context from the face region and head pose information.
  • Developing adversarial methods to deal with conditions where current methods fail (illumination, appearance, etc.).
  • Exploring attention mechanisms to predict the point of regard.
  • Designing new accurate measures to account for rapid eye gaze movement.
  • Novel methods for temporal gaze estimation and prediction including Bayesian methods.
  • Integrating differentiable components into 3D gaze estimation frameworks.
  • Robust estimation from different data modalities such as RGB, depth, head pose, and eye region landmarks.
  • Generic gaze estimation methods that handle extreme head poses and gaze directions.
  • Using temporal information in eye tracking to provide consistent on-screen gaze estimation.
  • Personalization of gaze estimators with few-shot learning.
  • Semi-/weakly-/un-/self-supervised learning methods, domain adaptation methods, and other novel methods towards improved representation learning from eye/face region images or gaze target region images.

We will be hosting two invited speakers on the topic of gaze estimation. We will also be accepting submissions of full unpublished papers, as in previous editions of the workshop. These papers will be peer-reviewed via a double-blind process, published in the official workshop proceedings, and presented at the workshop itself. More information will be provided as soon as possible.


Call for Contributions


Full Workshop Papers

Submission: We invite authors to submit unpublished papers (8-page CVPR format) to our workshop, to be presented at a poster session upon acceptance. All submissions will go through a double-blind review process. All contributions must be submitted (along with supplementary materials, if any) at this CMT link.

Accepted papers will be published in the official CVPR Workshops proceedings and the Computer Vision Foundation (CVF) Open Access archive.

Note: Authors of previously rejected main conference submissions are also welcome to submit their work to our workshop. When doing so, you must include the previous reviewers' comments (named previous_reviews.pdf) and a letter of changes (named letter_of_changes.pdf) in your supplementary materials to clearly demonstrate how the previous reviewers' comments have been addressed.



Important Dates


Paper Submission Deadline: March 24, 2022 (12:00 Pacific Time)
Notification to Authors: April 11, 2022
Camera-Ready Deadline: April 20, 2022


Workshop Schedule


Time (UTC, Monday 20 June 2022)    Item
1:30pm - 1:35pm                    Opening remarks
1:35pm - 2:15pm                    Invited talk by Prof. Wei Shen
2:15pm - 2:55pm                    Invited talk by Prof. Gordon Wetzstein
2:55pm - 3:00pm                    Invited poster spotlight talk
3:00pm - 4:00pm                    Coffee break & poster presentation
4:00pm - 5:10pm                    Workshop paper presentations
5:10pm - 5:50pm                    Panel discussion
5:50pm - 6:00pm                    Awards & closing remarks


Invited Keynote Speakers


Gordon Wetzstein
Stanford University

Eye Tracking Revisited: Applications in Rendering, Displays, Wearable Computing Systems and Emerging Event-based Eye Tracking


Abstract

In this talk, we will discuss several emerging applications of eye tracking in AR/VR displays, including ocular parallax rendering and gaze-contingent stereo rendering, as well as in wearable computing systems, for example autofocal eyeglasses for presbyopia correction. Moreover, we will discuss emerging technologies for ultra-low-latency eye tracking beyond 10,000 Hz using event sensors.

Biography

Gordon Wetzstein is an Associate Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He is the leader of the Stanford Computational Imaging Lab and a faculty co-director of the Stanford Center for Image Systems Engineering. Prof. Wetzstein's research sits at the intersection of computer graphics and vision, computational optics, and applied vision science, with a wide range of applications in next-generation imaging, display, wearable computing, and microscopy systems. Prior to joining Stanford in 2014, Prof. Wetzstein was a Research Scientist at MIT; he received a Ph.D. in Computer Science from the University of British Columbia in 2011 and, before that, graduated with honors from the Bauhaus-Universität Weimar in Germany. He is the recipient of an NSF CAREER Award, an Alfred P. Sloan Fellowship, an ACM SIGGRAPH Significant New Researcher Award, a Presidential Early Career Award for Scientists and Engineers (PECASE), an SPIE Early Career Achievement Award, a Terman Fellowship, an Okawa Research Grant, the Electronic Imaging Scientist of the Year 2017 Award, an Alain Fournier Ph.D. Dissertation Award, and a Laval Virtual Award, as well as Best Paper and Demo Awards at ICCP 2011, 2014, and 2016 and at ICIP 2016.


Wei Shen
Shanghai Jiao Tong University

Detecting Gaze Targets from Images Captured in the Wild


Abstract

Gaze target detection, i.e., gaze following, plays a crucial role in human-scene interaction understanding tasks. In this talk, I will introduce our recent works on detecting gaze targets from images captured in natural settings, including (1) a dual attention mechanism to combine field-of-view attention and depth attention to locate gaze targets, (2) a 3D sight line guided dual-pathway framework to detect gaze targets in 360-degree images, and (3) a vision transformer to enable end-to-end gaze following. The excellent results on several gaze following datasets show the potential of our methods to increase the scalability of naturalistic gaze measurement.

Biography

Wei Shen has been an Associate Professor at the Artificial Intelligence Institute, Shanghai Jiao Tong University, since October 2020. Before that, he was an Assistant Research Professor at the Department of Computer Science, Johns Hopkins University. His research interests lie in the fields of computer vision, machine learning, and deep learning, particularly in object detection and segmentation, representation learning, and human-centered computer vision. He is the recipient of three National Natural Science Foundation of China grants. He is an area chair for CVPR 2022 and ACCV 2022, a senior program committee member for AAAI 2022, and an associate editor for Neurocomputing.



Awards

Best Paper Award


Learning-by-Novel-View-Synthesis for Full-Face Appearance-Based 3D Gaze Estimation
Jiawei Qin, Takuru Shimoyama, Yusuke Sugano

PDF (CVF) Video

Best Poster Award


Unsupervised Multi-View Gaze Representation Learning
John Gideon, Shan Su, Simon Stent

PDF (CVF) Video




Accepted Full Papers

Learning-by-Novel-View-Synthesis for Full-Face Appearance-Based 3D Gaze Estimation
Jiawei Qin, Takuru Shimoyama, Yusuke Sugano
Best Paper Award

Self-Attention with Convolution and Deconvolution for Efficient Eye Gaze Estimation from a Full Face Image
Jun O Oh, Hyung Jin Chang, Sang-Il Choi
PDF (CVF) Video

Unsupervised Multi-View Gaze Representation Learning
John Gideon, Shan Su, Simon Stent
Best Poster Award
PDF (CVF) Video

ScanpathNet: A Recurrent Mixture Density Network for Scanpath Prediction
Ryan Anthony J de Belen, Tomasz Bednarz, Arcot Sowmya
Best Paper Honourable Mention

One-Stage Object Referring with Gaze Estimation
Jianhang Chen, Xu Zhang, Yue Wu, Shalini Ghosh, Pradeep Natarajan, Shih-Fu Chang, Jan Allebach
PDF (CVF) Video

Characterizing Target-absent Human Attention
Yupei Chen, Zhibo Yang, Souradeep Chakraborty, Sounak Mondal, Seoyoung Ahn, Dimitris Samaras, Minh Hoai, Gregory Zelinsky

A Modular Multimodal Architecture for Gaze Target Prediction: Application to Privacy-Sensitive Settings
Anshul Gupta, Samy Tafasca, Jean-Marc Odobez
PDF (CVF) Video


Invited Posters

Dynamic 3D Gaze from Afar: Deep Gaze Estimation from Temporal Eye-Head-Body Coordination
Soma Nonaka, Shohei Nobuhara, Ko Nishino


Program Committee

Jimin Pi
Hong Kong University of Science and Technology
Kai Dierkes
Pupil Labs Research
Xucong Zhang
Delft University of Technology
Yao Wang
University of Stuttgart
Yihua Cheng
Beihang University
Mihai Bace
University of Stuttgart
Hengfei Wang
University of Birmingham
Yusuke Sugano
The University of Tokyo
Seoyoung Ahn
Stony Brook University
Jiabei Zeng
Institute of Computing Technology, Chinese Academy of Sciences
Shalini De Mello
NVIDIA Research
Xi Wang
ETH Zurich
Rémy Siegfried
Idiap Research Institute and EPFL
Wolfgang Fuhl
University of Tuebingen
Chenyi Kuang
Rensselaer Polytechnic Institute
Shreya Ghosh
Monash University
Guohao Lan
Delft University of Technology
Ekta Prashnani
University of California, Santa Barbara
Hyung Jin Chang
University of Birmingham
Jiawei Qin
The University of Tokyo


Organizers



Hyung Jin Chang
University of Birmingham
Xucong Zhang
Delft University of Technology
Shalini De Mello
NVIDIA Research

Qiang Ji
Rensselaer Polytechnic Institute
Otmar Hilliges
ETH Zürich
Aleš Leonardis
University of Birmingham

Website Chair



Hengfei Wang
University of Birmingham


Please contact me if you have any questions about this website.
Email: hxw080@student.bham.ac.uk
