Visual Perception and Learning in an Open World

The 2nd workshop on Open World Vision

June 19, held in conjunction with CVPR 2022, New Orleans, Louisiana


Visual perception is indispensable for numerous applications, spanning transportation, healthcare, security, commerce, entertainment, and interdisciplinary research. Currently, visual perception algorithms are often developed in a closed-world paradigm, which assumes the data distribution and categorical labels are fixed a priori. This assumption is unrealistic in the real open world, which is dynamic, vast, and unpredictable. Algorithms developed in a closed world prove brittle once exposed to the complexity of the open world, where they fail to properly adapt or robustly generalize to new scenarios. Motivated by these challenges, we invite researchers to the workshop on Visual Perception and Learning in an Open World (VPLOW), which features invited speakers and three challenge competitions covering a variety of topics. We hope our workshop stimulates fruitful discussion of open-world research.

You might be interested in our previous workshops:


Topics of interest include, but are not limited to:

  • Open-world data: long-tailed distribution, open-set, unknowns, streaming data, biased data, unlabeled data, anomaly, multi-modality, etc.
  • Learning/problems: X-shot learning, Y-supervised learning, lifelong/continual learning, domain adaptation/generalization, open-world learning, etc.
  • Social impact: safety, fairness, real-world applications, interdisciplinary research, etc.
  • Misc: datasets, benchmarks, interpretability, robustness, generalization, etc.


Let's consider the following motivational examples.

  • Open-world data follows a long-tailed distribution. Real-world data tends to be long-tailed, and real-world tasks often emphasize the rarely-seen data. A model trained on such long-tailed data can perform poorly on rare or underrepresented data. For example, a visual recognition model can misclassify underrepresented minorities and make unethical predictions (ref. case1, case2).
  • The open world contains unknown examples. Largely due to the long-tailed nature of the data distribution, visual perception models are invariably confronted with unknown examples in the open world. Failing to detect the unknowns can cause serious issues. For example, a Tesla Model 3 did not identify an unknown overturned truck and crashed into it (ref. case).
  • The open world requires learning with evolving data and labels. The world of interest changes over time, e.g., driving scenes (in different cities and under different weather) and search queries ("apple" meant different things 20 years ago). That is, the data distribution and semantics are continually shifting and evolving. How do we address distribution shifts and concept drifts?

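To make the long-tail point concrete, here is a small simulation (an illustrative sketch, not workshop code): sampling class labels from a Zipf-like distribution shows how a few head classes dominate the data while tail classes receive only a handful of examples each.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_samples = 100, 100_000

# Zipf-like class frequencies: p(k) proportional to 1/k.
freq = 1.0 / np.arange(1, n_classes + 1)
p = freq / freq.sum()

# Draw labels from the long-tailed distribution and count per class.
labels = rng.choice(n_classes, size=n_samples, p=p)
counts = np.bincount(labels, minlength=n_classes)

# Head classes get tens of thousands of samples; tail classes only a few hundred.
print("head:", counts[:3], "tail:", counts[-3:])
```

Under this distribution the most frequent class receives roughly 100x the samples of the rarest one, which is why naive training tends to ignore the tail.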

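The unknown-examples bullet above can also be made concrete with a minimal open-set baseline: thresholding the maximum softmax probability (MSP), so that low-confidence inputs are rejected as "unknown" instead of being forced into a known class. This is a hedged sketch; the logits and threshold below are illustrative assumptions, not outputs of a real model.

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_open_set(logits, threshold=0.5):
    """Return the predicted class index, or -1 for 'unknown' (MSP baseline)."""
    probs = softmax(np.asarray(logits, dtype=float))
    top = int(probs.argmax())
    # Reject the input as unknown if even the best class is low-confidence.
    return top if probs[top] >= threshold else -1

# Confident logits -> accepted as a known class:
print(classify_open_set([4.0, 0.5, 0.1]))  # prints 0
# Flat, low-confidence logits -> rejected as unknown:
print(classify_open_set([0.6, 0.5, 0.4]))  # prints -1
```

MSP is only a baseline; much of the open-set literature covered by this workshop is about doing better than a plain confidence threshold.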
Invited Speakers

Shu Kong
Carnegie Mellon University

Deepak Pathak
Carnegie Mellon University

Kristen Grauman
University of Texas at Austin

Walter J. Scheirer
University of Notre Dame

Judy Hoffman
Georgia Tech

Tinne Tuytelaars
K.U. Leuven

Thomas G. Dietterich
Oregon State University

Carl Vondrick
Columbia University


Please contact Shu Kong with any questions: shuk [at] andrew [dot] cmu [dot] edu

Organizers

Shu Kong
Carnegie Mellon University

Yu-Xiong Wang
University of Illinois at Urbana-Champaign

Andrew Owens
University of Michigan

Deepak Pathak
Carnegie Mellon University

Carl Vondrick
Columbia University

Abhinav Shrivastava
University of Maryland

Deva Ramanan
Carnegie Mellon University

Terrance Boult
University of Colorado Colorado Springs

Challenge Organizers

Tanmay Gupta
Allen Institute for AI (AI2)

Derek Hoiem
University of Illinois at Urbana-Champaign

Aniruddha (Ani) Kembhavi
Allen Institute for AI (AI2)

Amita Kamath
Allen Institute for AI (AI2)

Yuqun Wu
University of Illinois at Urbana-Champaign

Ryan Marten
University of Illinois at Urbana-Champaign

Zhiqiu Lin
Carnegie Mellon University

Jia Shi
Carnegie Mellon University

Pulkit Kumar
University of Maryland


Saketh Rambhatla
University of Maryland

Shihao Shen
Carnegie Mellon University

Siqi Zeng
Carnegie Mellon University

Important Dates

Please go to each challenge's website for details!
  • Submission deadline for Challenge-1: CLEAR: June 8, 2022 at 11:59pm PST.
  • Submission deadline for Challenge-2: GRIT: June 1, 2022 at 11:59pm PST.
  • Submission deadline for Challenge-3: ObjCLsDisc: June 10, 2022 at 11:59pm PST.
  • Workshop date: June 19, 2022

Oral presentations will be selected from challenge participants, e.g., winners and teams with innovative ideas.


We are organizing three challenge competitions this year.
Please refer to each challenge's website and contact the challenge organizers with any questions.

Program Schedule

This part will be updated shortly!

YouTube livestream URL: url-place-holder.placeholder
Time (Central Time, UTC-5)

  • 08:45 - 09:00: Opening remarks. Shu Kong (CMU), "Open-World Visual Perception"
  • 09:00 - 09:30: Invited talk #1 (virtual). Tinne Tuytelaars (K.U. Leuven), title TBD
  • 09:30 - 10:00: Invited talk #2. Carl Vondrick (Columbia University), title TBD
  • 10:00 - 10:30: Invited talk #3. Judy Hoffman (Georgia Tech), title TBD
  • 10:30 - 10:45: Break / time buffer
  • 10:45 - 11:15: Invited talk #4 (virtual). Tomaso Poggio (MIT), "Thoughts on Real World Learning Theory"
  • 11:15 - 11:35: Challenge-1 (CLEAR) hosts' remarks, title TBD
  • 11:35 - 12:05: Invited teams for CLEAR (four teams, TBD)
  • 12:05 - 13:15: Lunch break
  • 13:15 - 13:45: Invited talk #5. Thomas Dietterich (Oregon State University), title TBD
  • 13:45 - 14:15: Invited talk #6. Kristen Grauman (University of Texas at Austin), title TBD
  • 14:15 - 14:45: Invited talk #7. Walter J. Scheirer (University of Notre Dame), title TBD
  • 14:45 - 15:15: Invited talk #8. Deepak Pathak (CMU), title TBD
  • 15:15 - 15:30: Break / time buffer
  • 15:30 - 15:50: Challenge-2 (GRIT) hosts' remarks, title TBD
  • 15:50 - 16:20: Invited teams for GRIT (four teams, TBD)
  • 16:20 - 16:40: Challenge-3 (ObjCLsDisc) hosts' remarks, title TBD
  • 16:40 - 17:10: Invited teams for ObjCLsDisc
  • 17:10 - 17:15: Closing remarks