Seven papers have been accepted to our GAZE 2022 workshop.
Congratulations to all authors!
This year, June 19 and 20 mark Juneteenth, a US holiday commemorating the end of slavery in the US, and a holiday of special significance in the US South. We encourage attendees to learn more about Juneteenth and its historical context, and to join the city of New Orleans in celebrating the Juneteenth holiday. You can find more information about Juneteenth here: https://cvpr2022.thecvf.com/recognizing-juneteenth
The 4th International Workshop on Gaze Estimation and Prediction in the Wild (GAZE 2022) at CVPR 2022 aims to encourage and highlight novel strategies for eye gaze estimation and prediction with a focus on robustness and accuracy in extended parameter spaces, both spatially and temporally. This is expected to be achieved by applying novel neural network architectures, incorporating anatomical insights and constraints, introducing new and challenging datasets, and exploiting multi-modal training. Specifically, the workshop topics include (but are not limited to):
- Reformulating eye detection, gaze estimation, and gaze prediction pipelines with deep networks.
- Incorporating geometric and anatomical constraints into the training of (sparse or dense) deep networks.
- Leveraging additional cues such as context from the face region and head pose information.
- Developing adversarial methods to deal with conditions where current methods fail (illumination, appearance, etc.).
- Exploring attention mechanisms to predict the point of regard.
- Designing new accurate measures to account for rapid eye gaze movement.
- Novel methods for temporal gaze estimation and prediction including Bayesian methods.
- Integrating differentiable components into 3D gaze estimation frameworks.
- Robust estimation from different data modalities such as RGB, depth, head pose, and eye region landmarks.
- Generic gaze estimation methods for handling extreme head poses and gaze directions.
- Using temporal information in eye tracking to provide consistent gaze estimation on the screen.
- Personalization of gaze estimators with few-shot learning.
- Semi-/weakly-/un-/self-supervised learning methods, domain adaptation methods, and other novel methods towards improved representation learning from eye/face region images or gaze target region images.
Call for Contributions
Full Workshop Papers
Submission: We invite authors to submit unpublished papers (8-page CVPR format) to our workshop, to be presented at a poster session upon acceptance. All submissions will go through a double-blind review process. All contributions must be submitted (along with supplementary materials, if any) at this CMT link.
Accepted papers will be published in the official CVPR Workshops proceedings and the Computer Vision Foundation (CVF) Open Access archive.
Note: Authors of previously rejected main conference submissions are also welcome to submit their work to our workshop. When doing so, you must submit the previous reviewers' comments (named previous_reviews.pdf) and a letter of changes (named letter_of_changes.pdf) as part of your supplementary materials to clearly demonstrate the changes made to address the previous reviewers' comments.
Workshop Schedule
| Time in UTC | Start Time in UTC* (probably your time zone) | Item |
|---|---|---|
| 1:30pm - 1:35pm | 20 Jun 2022 13:30:00 UTC | Opening remarks |
| 1:35pm - 2:15pm | 20 Jun 2022 13:35:00 UTC | Invited talk by Prof. Wei Shen |
| 2:15pm - 2:55pm | 20 Jun 2022 14:15:00 UTC | Invited talk by Prof. Gordon Wetzstein |
| 2:55pm - 3:00pm | 20 Jun 2022 14:55:00 UTC | Invited poster spotlight talks |
| 3:00pm - 4:00pm | 20 Jun 2022 15:00:00 UTC | Coffee break & poster presentation |
| 4:00pm - 5:10pm | 20 Jun 2022 16:00:00 UTC | Workshop paper presentations |
| 5:10pm - 5:50pm | 20 Jun 2022 17:10:00 UTC | Panel discussion |
| 5:50pm - 6:00pm | 20 Jun 2022 17:50:00 UTC | Awards & closing remarks |
*For example, those in Los Angeles may see UTC-7, while those in Berlin may see UTC+2. Please note that this may differ from your actual time zone.
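If it helps, here is a minimal sketch for converting the UTC start times listed in the schedule to your own local time zone, assuming Python 3.9+ with the standard-library zoneinfo module; the time zone name used below is only an example and should be replaced with your own IANA zone.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # available in Python 3.9+

# Start times exactly as listed in the schedule above (all in UTC).
utc_times = [
    "20 Jun 2022 13:30:00",  # Opening remarks
    "20 Jun 2022 16:00:00",  # Workshop paper presentations
]

# Example zone; replace with yours, e.g. "America/Los_Angeles".
local_zone = ZoneInfo("Europe/Berlin")

for t in utc_times:
    utc_dt = datetime.strptime(t, "%d %b %Y %H:%M:%S").replace(tzinfo=ZoneInfo("UTC"))
    print(utc_dt.astimezone(local_zone).strftime("%d %b %Y %H:%M %Z"))
```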
Invited Keynote Speakers
Gordon Wetzstein
Stanford University
Eye Tracking Revisited: Applications in Rendering, Displays, Wearable Computing Systems and Emerging Event-based Eye Tracking
Abstract
In this talk, we will discuss several emerging applications of eye tracking in AR/VR displays, including ocular parallax rendering and gaze-contingent stereo rendering, as well as wearable computing systems, for example autofocal eyeglasses for presbyopia correction. Moreover, we will discuss emerging technologies for ultra-low-latency eye tracking beyond 10,000 Hz using event sensors.
Gordon Wetzstein is an Associate Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He leads the Stanford Computational Imaging Lab and is a faculty co-director of the Stanford Center for Image Systems Engineering. His research lies at the intersection of computer graphics and vision, computational optics, and applied vision science, with a wide range of applications in next-generation imaging, display, wearable computing, and microscopy systems. Prior to joining Stanford in 2014, Prof. Wetzstein was a Research Scientist at MIT; he received a Ph.D. in Computer Science from the University of British Columbia in 2011 and, before that, graduated with Honors from the Bauhaus-Universität Weimar, Germany. He is the recipient of an NSF CAREER Award, an Alfred P. Sloan Fellowship, an ACM SIGGRAPH Significant New Researcher Award, a Presidential Early Career Award for Scientists and Engineers (PECASE), an SPIE Early Career Achievement Award, a Terman Fellowship, an Okawa Research Grant, the Electronic Imaging Scientist of the Year 2017 Award, an Alain Fournier Ph.D. Dissertation Award, and a Laval Virtual Award, as well as Best Paper and Demo Awards at ICCP 2011, 2014, and 2016 and at ICIP 2016.
Wei Shen
Shanghai Jiao Tong University
Detecting Gaze Targets from Images Captured in the Wild
Abstract
Gaze target detection, i.e., gaze following, plays a crucial role in human-scene interaction understanding tasks. In this talk, I will introduce our recent work on detecting gaze targets from images captured in natural settings, including (1) a dual attention mechanism that combines field-of-view attention and depth attention to locate gaze targets, (2) a 3D sight-line-guided dual-pathway framework to detect gaze targets in 360-degree images, and (3) a vision transformer that enables end-to-end gaze following. Excellent results on several gaze-following datasets show the potential of our methods to increase the scalability of naturalistic gaze measurement.
Wei Shen has been an Associate Professor at the Artificial Intelligence Institute, Shanghai Jiao Tong University, since October 2020. Before that, he was an Assistant Research Professor in the Department of Computer Science at Johns Hopkins University. His research interests lie in computer vision, machine learning, and deep learning, particularly object detection and segmentation, representation learning, and human-centered computer vision. He is the recipient of three Natural Science Foundation of China grants, an area chair for CVPR 2022 and ACCV 2022, a senior program committee member for AAAI 2022, and an associate editor for Neurocomputing.
Accepted Full Papers
Invited Posters
Program Committee
Hong Kong University of Science and Technology
Pupil Labs Research
Delft University of Technology
University of Stuttgart
Beihang University
University of Stuttgart
University of Birmingham
The University of Tokyo
Stony Brook University
Institute of Computing Technology, Chinese Academy of Sciences
NVIDIA Research
ETH Zurich
Idiap Research Institute and EPFL
University of Tuebingen
Rensselaer Polytechnic Institute
Monash University
Amazon
Delft University of Technology
University of California, Santa Barbara
Delft University of Technology
University of Birmingham
The University of Tokyo
University of Birmingham
Delft University of Technology
NVIDIA Research
Rensselaer Polytechnic Institute
ETH Zurich
University of Birmingham
University of Birmingham
Please contact me if you have any questions about this website.
Email: hxw080@student.bham.ac.uk