First International Workshop on Computer Vision for Autonomous Driving

In conjunction with ICCV 2013, Sydney, Australia December 2, 2013

 


 



US lawmakers have recently passed legislation that allows fully autonomous vehicles to share public roads. With their potential to revolutionize the transport experience, and to improve road safety and traffic efficiency, there is a strong push by vehicle manufacturers and government agencies to bring autonomous vehicles to the broad market. Recent demonstrations at the DARPA Grand Challenges and by industry leaders have established that the core technical barrier to achieving autonomous vehicles is road scene understanding. However, although vehicle infrastructure, signage, and the rules of the road have been designed to be fully interpretable by visual inspection, the use of computer vision in current autonomous vehicles is minimal. There is a perception that a wide gap exists between what the automotive industry needs to successfully deploy camera-based autonomous vehicles and what is currently possible using computer vision techniques.

The goal of this workshop is to bring together leaders from both academia and industry to determine the true extent of this gap, to identify the most relevant computer vision problems to solve, and to learn from others about proposed avenues and solutions. Within the scope of the workshop are core computer vision tasks such as dynamic 3D reconstruction, pedestrian and vehicle detection, and predictive scene understanding, all of which are required capabilities for an autonomous vehicle. In particular, the workshop will cover (but not limit itself to) the following questions:

  • Are current methods and representations adequate to hand over the wheel to computer vision algorithms?
  • Are current benchmarks and datasets sufficient to support the on-going research?
  • What are the mission-critical problems that need to be addressed first?

Call for Papers (pdf) 

SUBMISSION


Papers should describe original and unpublished work on the above or closely related topics. Each paper will receive double-blind reviews, moderated by the workshop chairs. Authors should take the following into account:

  • All papers must be written in English and submitted in PDF format.
  • Papers must be submitted online through the CMT submission system.
  • The maximum paper length is 8 pages. The workshop paper format guidelines are the same as for the Main Conference papers.
  • Submissions will be rejected without review if they exceed 8 pages, violate the double-blind policy, or violate the dual-submission policy.
  • All accepted papers will be allocated up to 8 pages in the proceedings and will be charged a flat fee of US$200; there is no reduced fee for using fewer than 8 pages. This fee helps cover the cost of the publishing process and is paid after paper acceptance, at the time of registration for the conference.
  • Authors will have the opportunity to submit supporting material (space permitting).

The author kit provides LaTeX2e and Word templates for submissions, and an example paper to demonstrate the format. Please refer to this example for detailed formatting instructions.

A paper ID will be allocated to you during submission. Please replace the asterisks in the example paper with your paper's own ID before uploading your file.
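
For LaTeX submissions, the paper ID is set near the top of the template. The lines below are a minimal sketch of what this typically looks like, assuming an ICCV-style author kit; the exact style-file and macro names (e.g., \iccvPaperID) may differ in the kit version you download.

    % Near the top of the example paper (names assumed from an ICCV-style author kit):
    \usepackage{iccv}        % style file shipped with the author kit

    % \iccvfinalcopy         % leave commented out for the double-blind review version
    \def\iccvPaperID{****}   % replace the asterisks with the ID assigned during CMT submission

With the ID entered, the style file typically prints it in the anonymized header of the review version, so reviewers and chairs can match the PDF to its CMT record.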

Important Dates

Submissions deadline: September 10, 2013
Author notification: October 1, 2013
Camera-ready: October 10, 2013
Workshop: December 2, 2013, Room 105
 

COMMITTEES


General Chairs

Bart Nabbe, Tandent Vision Science, USA
Yaser Sheikh, Carnegie Mellon, USA

 

Program Chairs

Uwe Franke, Daimler AG, Germany
Martial Hebert, Carnegie Mellon, USA
Fernando De la Torre, Carnegie Mellon, USA
Raquel Urtasun, Toyota Technological Institute, USA
 

Program Committee

Ijaz Akhter, MPI Tübingen, Germany
Mykhaylo Andriluka, TU Darmstadt, Germany
Alper Ayvaci, Honda Research Institute, USA
Hernan Badino, NREC, USA
Alexander Barth, Daimler AG, USA
Paulo Borges, CSIRO Brisbane, Australia
Goksel Dedeoglu, Texas Instruments, USA
Frank Dellaert, Georgia Tech., USA
Andras Ferencz, Mobileye, USA
Andreas Geiger, KIT, Germany
Abdelaziz Khiat, Nissan, Japan
Sanjeev Koppal, Texas Instruments, USA
Dirk Langer, Volkswagen, USA
Philip Lenz, KIT, Germany
Dan Levi, General Motors, USA
Jesse Levinson, Stanford University, USA
Simon Lucey, CSIRO Brisbane, Australia
Srinivasa Narasimhan, Carnegie Mellon, USA
Michael Samples, Toyota, USA
Bernt Schiele, Max Planck Institut Informatik, Germany
Jianbo Shi, UPenn, USA
Christoph Stiller, KIT, Germany
Wende Zhang, GM, USA

INVITED TALKS

Uwe Franke, Daimler AG, Germany
Srinivasa Narasimhan, Carnegie Mellon University
Raquel Urtasun, University of Toronto

Making Bertha See
Uwe Franke, Daimler AG, Germany

Bio: Uwe Franke received the Ph.D. degree in electrical engineering from the Technical University of Aachen, Germany, in 1988. Since 1989 he has been with Daimler Research and Development, working continuously on the development of vision-based driver assistance systems. Since 2000 he has been head of Daimler's Image Understanding Group and is a well-known expert in real-time stereo vision and image understanding. His recent work is on the optimal fusion of stereo and motion, called 6D-Vision. The stereo technology developed by his group is the basis for the stereo camera system of the new Mercedes S- and E-Class vehicles introduced in 2013. Besides fully autonomous emergency braking, these cars offer autonomous driving in traffic jams.

Programmable Headlights: Smart and Safe Lighting Solutions for the Road Ahead
Srinivasa Narasimhan, Carnegie Mellon University, USA

Bio: Srinivasa Narasimhan is an Associate Professor in the Robotics Institute at Carnegie Mellon University. His group focuses on novel techniques for imaging, illumination, and light transport to enable applications in vision, graphics, robotics, and medical imaging. His work has received several awards: the Ford URP Award (2013), Best Paper Runner-up Prize (ACM I3D 2013), Best Paper Honorable Mention Award (IEEE ICCP 2012), Best Paper Award (IEEE PROCAMS 2009), the Okawa Research Grant (2009), the NSF CAREER Award (2007), the Adobe Best Paper Award (IEEE Workshop on Physics-Based Methods in Computer Vision, ICCV 2007), and the IEEE Best Paper Honorable Mention Award (IEEE CVPR 2000). He is a co-inventor of smart headlights, which appeared on several top-10 lists of promising technologies, including those of Car and Driver and Edmunds. He is also a co-inventor of the Aqualux 3D display, Assorted Pixels, motion-aware cameras, and a low-power outdoor 'Kinect'. He co-chaired the International Symposium on Volumetric Scattering in Vision and Graphics in 2007, the IEEE Workshop on Projector-Camera Systems (PROCAMS) in 2010, and the IEEE International Conference on Computational Photography (ICCP) in 2011; he is co-editing a special journal issue on Computational Photography in 2013 and serves on the editorial board of the International Journal of Computer Vision.

Visual Scene Understanding for Autonomous Systems
Raquel Urtasun, University of Toronto

Bio: Raquel Urtasun is an Assistant Professor at the University of Toronto. Previously she was an Assistant Professor at TTI-Chicago, a philanthropically endowed academic institute located on the campus of the University of Chicago. She was a visiting professor at ETH Zurich during the spring semester of 2010. Before that, she was a postdoctoral research scientist at UC Berkeley and ICSI, and a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. She completed her PhD at the Computer Vision Laboratory at EPFL, Switzerland, in 2006, working with Pascal Fua and with David Fleet of the University of Toronto. She has been an area chair of multiple learning and vision conferences (e.g., NIPS, UAI, ICML, ICCV, CVPR, ECCV) and has served on the program committees of numerous international computer vision and machine learning conferences. Her major interests are statistical machine learning and computer vision, with a particular focus on non-parametric Bayesian statistics, latent variable models, structured prediction, and their application to semantic scene understanding.

 

BEST PAPER AWARD


A best paper award will be recommended by the program committee during peer review and selected by the workshop chairs. The winner will receive a recognition certificate and a US$500 check sponsored by Tandent Vision Science.

 

PROGRAM


09:10-09:20 Opening notes from Workshop Organizers
09:20-10:00 Invited Talk: Making Bertha See, Uwe Franke (Daimler AG)
10:00-10:30 Coffee Break
10:30-11:00 Visual Odometry by Multi-frame Feature Integration, Akihiro Yamamoto, Hernan Badino, Takeo Kanade
11:00-11:30 Integrated Pedestrian and Direction Classification using a Random Decision Forest, Junli Tao, Reinhard Klette
11:30-12:10 Invited Talk: Programmable Headlights: Smart and Safe Lighting Solutions for the Road Ahead, Srinivasa Narasimhan
12:10-12:40 Priors for Stereo Vision under Adverse Weather Conditions, Stefan Gehrig, Maxim Reznitskii, Nicolai Schneider, Uwe Franke
12:40-14:30 Lunch
14:30-15:00 Spatio-Temporal Good Features to Track, Christoph Feichtenhofer, Axel Pinz
15:00-15:40 Invited Talk: Visual Scene Understanding for Autonomous Systems, Raquel Urtasun
15:40-16:10 Coffee Break
16:10-17:10 Panel Discussion: Are computer vision algorithms ready to take the wheel? Leaders of industry and academia will discuss avenues of research to pursue.
17:10-17:15 Best paper award (Sponsored by Tandent) and raffle (Sponsored by Texas Instruments)