Visual Perception and Learning in an Open World

The 2nd workshop on Open World Vision

June 19, 2022, held in conjunction with CVPR 2022 in New Orleans, Louisiana


Hybrid workshop

Location: Rooms 206-207, New Orleans Ernest N. Morial Convention Center

NOTE: we will not use the Zoom link provided by CVPR; we will livestream the workshop on YouTube: https://youtu.be/q4ypT-33puU


Overview

Visual perception is indispensable for numerous applications, spanning transportation, healthcare, security, commerce, entertainment, and interdisciplinary research. Today, visual perception algorithms are often developed in a closed-world paradigm, which assumes that the data distribution and categorical labels are fixed a priori. This assumption is unrealistic in the open world, which is dynamic, vast, and unpredictable. Algorithms developed in a closed world turn out to be brittle once exposed to the complexity of the open world, where they fail to adapt or generalize robustly to new scenarios. We invite researchers to the workshop on Visual Perception and Learning in an Open World (VPLOW), which features invited speakers and three challenge competitions covering a variety of topics. We hope the workshop stimulates fruitful discussions on open-world research.

You might also be interested in our previous workshops.


Topics

Topics of interest include, but are not limited to:

  • Open-world data: long-tailed distribution, open-set, unknowns, streaming data, biased data, unlabeled data, anomaly, multi-modality, etc.
  • Learning/problems: X-shot learning, Y-supervised learning, lifelong/continual learning, domain adaptation/generalization, open-world learning, etc.
  • Social impact: safety, fairness, real-world applications, interdisciplinary research, etc.
  • Misc: datasets, benchmarks, interpretability, robustness, generalization, etc.

Examples

Consider the following motivating examples.

  • Open-world data follows a long-tailed distribution. Real-world data tends to be long-tailed, and real-world tasks often emphasize the rarely-seen data. A model trained on such long-tailed data can perform poorly on rare or underrepresented classes. For example, a visual recognition model can misclassify underrepresented minorities and make unethical predictions (ref. case1, case2). A minimal reweighting sketch follows this list.
  • The open world contains unknown examples. Largely due to the long-tailed nature of the data distribution, visual perception models are invariably confronted with unknown examples in the open world. Failing to detect the unknowns can cause serious issues. For example, a Tesla Model 3 failed to identify an overturned truck and crashed into it (ref. case). See the detector sketch after this list.
  • The open world requires learning with evolving data and labels. The world of interest changes over time, e.g., driving scenes (in different cities and under different weather) and search queries ("apple" meant different things 20 years ago). That is, the data distribution and semantics are continually shifting and evolving. How can models address such distribution shifts and concept drifts?
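
The long-tail issue above can be made concrete with one standard mitigation: weighting the training loss by inverse class frequency so that tail classes are not drowned out by head classes. The sketch below is illustrative only; the toy label counts are hypothetical, and in practice the returned weights would feed a weighted loss such as weighted cross-entropy.

```python
from collections import Counter
import numpy as np

def inverse_frequency_weights(labels):
    """Per-class weights proportional to 1 / class frequency,
    normalized so the weights average to 1 across classes.

    Rare (tail) classes receive large weights, so a weighted loss
    no longer lets head classes dominate training."""
    counts = Counter(labels)
    classes = sorted(counts)
    freq = np.array([counts[c] for c in classes], dtype=float)
    w = 1.0 / freq
    return dict(zip(classes, w * len(classes) / w.sum()))

# Hypothetical long-tailed label set: class 0 is the head, class 2 the tail.
labels = [0] * 900 + [1] * 90 + [2] * 10
print(inverse_frequency_weights(labels))
# Class 2 (10 samples) gets 90x the weight of class 0 (900 samples).
```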
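
The unknown-example issue can likewise be illustrated with the simplest open-set baseline: thresholding a classifier's maximum softmax probability (MSP) and flagging low-confidence inputs as unknown. This is a minimal sketch under assumed inputs (placeholder logits and threshold), not a method endorsed by any of the challenges.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_open_set(logits, threshold=0.5):
    """Return the argmax class, or -1 ("unknown") when the maximum
    softmax probability falls below `threshold`.

    A closed-world classifier always outputs one of its K known
    classes; this wrapper is the simplest open-set variant."""
    probs = softmax(logits)
    conf = probs.max(axis=-1)
    preds = probs.argmax(axis=-1)
    return np.where(conf >= threshold, preds, -1)

# Toy logits for 3 known classes.
logits = np.array([
    [4.0, 0.5, 0.1],  # confident -> predicted class 0
    [0.9, 1.0, 1.1],  # near-uniform -> flagged unknown (-1)
])
print(predict_open_set(logits))  # [ 0 -1]
```

A fixed threshold is of course crude; stronger detectors calibrate confidence or model the distribution of known-class features, which is exactly the kind of open question the workshop targets.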

Speakers


Thomas G. Dietterich
Oregon State University

Kristen Grauman
University of Texas at Austin

Judy Hoffman
Georgia Tech

Deepak Pathak
Carnegie Mellon University

Tomaso Poggio
MIT

Walter J. Scheirer
University of Notre Dame

Tinne Tuytelaars
K.U. Leuven

Carl Vondrick
Columbia University



Organizers

Please contact Shu Kong with any questions: shuk [at] andrew [dot] cmu [dot] edu


Shu Kong
Carnegie Mellon University

Yu-Xiong Wang
University of Illinois at Urbana-Champaign

Andrew Owens
University of Michigan

Carl Vondrick
Columbia University

Abhinav Shrivastava
University of Maryland

Deva Ramanan
Carnegie Mellon University

Terrance Boult
University of Colorado Colorado Springs

Challenge Organizers


Tanmay Gupta
Allen Institute for AI (AI2)

Derek Hoiem
University of Illinois at Urbana-Champaign

Aniruddha (Ani) Kembhavi
Allen Institute for AI (AI2)

Amita Kamath
Allen Institute for AI (AI2)

Yuqun Wu
University of Illinois at Urbana-Champaign

Ryan Marten
University of Illinois at Urbana-Champaign
Zhiqiu Lin
Carnegie Mellon University

Jia Shi
Carnegie Mellon University

Pulkit Kumar
University of Maryland

Anubav
University of Maryland

Saketh Rambhatla
University of Maryland

Shihao Shen
Carnegie Mellon University

Siqi Zeng
Carnegie Mellon University


Important Dates

Please go to each challenge's website for details and any updated dates!
  • Submission deadline for Challenge-1: CLEAR: June 8, 2022 at 11:59pm PST.
  • Submission deadline for Challenge-2: GRIT: June 10, 2022 at 11:59pm PST.
  • Submission deadline for Challenge-3: ObjClsDisc: June 15, 2022 at 11:59pm PST.
  • Workshop date: June 19, 2022

Oral presentations will be selected from challenge participants, e.g., winners and teams with innovative ideas.


Competitions

We are organizing three challenge competitions this year.
Please refer to each challenge's website and contact the challenge organizers with any questions.


Program Schedule

YouTube livestream URL: https://youtu.be/q4ypT-33puU

Time (Central Time, UTC-5) | Event | Presenter | Title
08:45 - 09:00 | Opening remarks (virtual) | Shu Kong (CMU) | Visual Perception and Learning in an Open World
09:00 - 09:30 | Invited talk #1 (virtual) | Tinne Tuytelaars (K.U. Leuven) | Some thoughts on continual learning
09:30 - 10:00 | Invited talk #2 | Carl Vondrick (Columbia University) | Inverting the Neural Networks
10:00 - 10:30 | Invited talk #3 | Judy Hoffman (Georgia Tech) | The Complexity of Bias in Vision Systems
10:30 - 10:45 | Break / time buffer | |
10:45 - 11:15 | Invited talk #4 (virtual) | Tomaso Poggio (MIT) | Thoughts on Real World Learning Theory
11:15 - 11:35 | Host remarks, Challenge-1 CLEAR | Zhiqiu Lin (CMU) | CLEAR: Challenge of Continual Learning on Real-World Imagery
11:35 - 11:42 | CLEAR presentation #1 | Ge Liu (SJTU) | Improving Model Generalization by Contrasting Features across Domains
11:42 - 11:49 | CLEAR presentation #2 | Xiaojun Tang, Pan Zhong, Tingting Wang, Yuzhou Peng (BOE Technology Group) | Adaptive Loss for Better Model Generalization in Real World
11:49 - 11:57 | CLEAR presentation #3 | Jiawei Dong, Mengwen Du, Shuo Wang (AI Prime) | Comprehensive Studies on Sampling, Architecture and Augmentation Strategies
11:57 - 12:05 | CLEAR presentation #4 | Xinkai Gao, Bo Ke, Sunan He, Ruizhi Qiao (Tencent YouTu Lab) | Bucket-Aware Sampling Strategy for Efficient Replay
12:05 - 13:15 | Lunch break | |
13:15 - 13:45 | Invited talk #5 (virtual) | Thomas Dietterich (Oregon State University) | Further Studies of the Familiarity Hypothesis for Deep Novelty Detection
13:45 - 14:15 | Invited talk #6 | Kristen Grauman (University of Texas at Austin) | From naming to using: agents in the open world
14:15 - 14:45 | Invited talk #7 | Walter J. Scheirer (University of Notre Dame) | A Unifying Framework for Formal Theories of Novelty in Visual Perception
14:45 - 15:15 | Invited talk #8 | Deepak Pathak (CMU) | Open-World Vision for Robot Learning in the Wild
15:15 - 15:30 | Break / time buffer | |
15:30 - 15:35 | Host remarks, Challenge-2 GRIT | Derek Hoiem (UIUC) | Overview of GPV and GRIT
15:35 - 15:55 | Host remarks, Challenge-2 GRIT | Tanmay Gupta, Ryan Marten (AI2, UIUC) | The GRIT Benchmark
15:55 - 16:00 | GRIT presentation | Amita Kamath (Stanford, AI2) | GPV-1
16:00 - 16:10 | GRIT presentation | Chris Clark (UW, AI2) | GPV-2
16:10 - 16:20 | GRIT presentation | Jiasen Lu (AI2) | Unified-IO
16:20 - 16:40 | Host remarks, Challenge-3 ObjClsDisc | Abhinav Shrivastava (UMD) | Reviving Object Discovery: Where from & Where to? A Teaser Challenge on Object Class Discovery
16:40 - 17:10 | Invited teams, Challenge-3 ObjClsDisc | Yuhong Yang (Tsinghua University & OPPO Research Institute), Yiqi Zou (Sun Yat-sen University), Archana Swaminathan (UMD) | Teams MIG, Amadeus, and UMD
17:10 - 17:15 | Closing remarks | |


Sponsors