- Dariu Gavrila (TU Delft) Title: Intelligent Vehicles that (Fore) See Abstract: Sensors have by now become very good at measuring 3D structure for environment perception in intelligent vehicles; witness the high-resolution, accurate (stereo) cameras, radars and lidars that are about to enter production vehicles in the coming years. Visual object detection has also made big strides. Whereas "bounding-box" detectors were originally designed individually for particular object classes (e.g. traffic signs, cars, pedestrians), the last 2-3 years have seen the emergence of powerful holistic approaches based on scene labeling and deep learning. The time has now come to focus on the next frontier: anticipating the behavior of dynamic objects in traffic. The potential benefits are large, such as earlier and more effective system reactions in dangerous situations, or more comfortable and energy-efficient control of the ego-vehicle in automated mode. To reap these benefits, however, it is necessary to identify and extract intent-relevant features and to apply more sophisticated dynamical (behavior) models specific to the detected object class. Ideally, these features and models are learned automatically from large amounts of data and incorporate scene context. In this talk, I provide an overview of our previous work on visual pedestrian intent recognition and path prediction. The systems described rely to varying degrees on additional features and context. My talk concludes with a brief research outlook and a presentation of our new Intelligent Vehicles group at TU Delft.
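As a point of reference for the path-prediction theme of this talk, the simplest dynamical model one could use is constant-velocity extrapolation of a pedestrian track. The sketch below is purely illustrative (the function name, numbers, and interface are not from the talk); the abstract's point is precisely that real systems need richer, learned behavior models than this baseline.

```python
# Hypothetical constant-velocity baseline for pedestrian path prediction.
# Real intent-recognition systems use learned, class-specific dynamical
# models with scene context; this is only the naive point of comparison.

def predict_path(positions, dt, horizon_steps):
    """Extrapolate future (x, y) positions from the last two observations.

    positions: list of (x, y) tuples observed at a fixed interval dt (seconds).
    Returns horizon_steps future positions at the same interval.
    """
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    # Estimate velocity from the two most recent observations.
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    # Propagate the last position forward assuming the velocity stays constant.
    return [(x1 + vx * dt * k, y1 + vy * dt * k)
            for k in range(1, horizon_steps + 1)]

if __name__ == "__main__":
    track = [(0.0, 0.0), (0.5, 0.1)]  # two positions observed 0.1 s apart
    print(predict_path(track, 0.1, 3))
```

Any maneuver (stopping at a curb, turning) breaks this assumption within a few steps, which is why the talk argues for intent-relevant features and behavior models.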
- Petros Kapsalas (Panasonic) Title: Geo-Spatial Localization and Mapping in ADAS, State of the Art & Challenges
- Branislav Kisacanin (NVIDIA) Title: Deep Learning for Autonomous Driving: SW and HW for Development and Deployment Abstract: In this invited talk we will present the latest software and hardware tools for the development and deployment of computationally demanding deep learning networks (DLNs) for autonomous driving. From the DGX-1 supercomputer for training DLNs, to Drive PX 2, a scalable AI car computer platform, to the recently announced Xavier, an AI supercomputer on a chip for future autonomous vehicles, tools are now available for autonomous driving DLN development and deployment. To illustrate how these tools are already being used, we will show some of the most recent results of NVIDIA's own end-to-end DLN for autonomous driving.
- Joakim Lin-Sörstedt (Volvo) Title: Sensor fusion for self-driving cars Abstract: Volvo Cars will play a leading role in the world's first large-scale autonomous driving pilot project, in which 100 self-driving Volvo cars will use public roads in everyday driving conditions around the Swedish city of Gothenburg. A self-driving car makes use of multiple sensors, such as radars, cameras and lasers, to gather information about the surrounding environment. This information is combined in a sensor fusion system to estimate both the global position of the ego vehicle and the distances to, and properties of, surrounding objects. In this presentation, the focus is on how different robustness aspects influence the design of such a sensor fusion system. For instance, these systems must be able to handle different types of sensor errors, hardware faults, and a dynamic and unpredictable traffic environment.
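To make the idea of combining measurements from multiple sensors concrete, here is a minimal, hypothetical sketch (not from the talk) of the inverse-variance weighted update that underlies Kalman-style sensor fusion: two independent noisy estimates of the same quantity, say range from a radar and from a camera, are merged into one estimate whose variance is lower than either input's.

```python
# Minimal 1-D illustration of fusing two independent noisy measurements
# of the same quantity (e.g., range to a lead vehicle from radar and
# camera) by inverse-variance weighting. All sensor values and variances
# below are hypothetical.

def fuse(z1, var1, z2, var2):
    """Fuse two independent measurements z1, z2 with variances var1, var2.

    Each measurement is weighted by the inverse of its variance, so the
    more accurate sensor dominates; the fused variance is always smaller
    than either input variance.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

if __name__ == "__main__":
    # Radar: 25.0 m with low range variance; camera: 26.0 m, noisier.
    d, var = fuse(25.0, 0.1, 26.0, 0.9)
    print(d, var)  # estimate is pulled toward the more accurate radar
```

A production fusion system additionally handles time alignment, data association, sensor faults, and outliers, which is exactly the robustness question the talk addresses.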
| Time | Session |
| --- | --- |
| 8:50 – 9:00 | Start of workshop |
| 9:00 – 9:45 | Dariu Gavrila, TU Delft |
| 9:45 – 10:30 | Petros Kapsalas, Panasonic |
| 10:30 – 12:00 | Poster session / break |
| 12:00 – 14:00 | Lunch break |
| 14:00 – 14:45 | Branislav Kisacanin, NVIDIA |
| 14:45 – 15:30 | Joakim Lin-Sörstedt, Volvo Cars |
| 15:30 – 16:45 | Poster session / break |
Topics of Interest
Analyzing road scenes using cameras could have a crucial impact in many domains, such as autonomous driving, advanced driver assistance systems (ADAS), personal navigation, mapping of large-scale environments, and road maintenance. For instance, vehicle infrastructure, signage, and rules of the road have been designed to be interpreted fully by visual inspection. As the field of computer vision becomes increasingly mature, practical solutions to many of these tasks are now within reach.

Nonetheless, there still seems to exist a wide gap between what is needed by the automotive industry and what is currently possible using computer vision techniques. The goal of this workshop is to allow researchers in the fields of road scene understanding and autonomous driving to present their progress and discuss novel ideas that will shape the future of this area. In particular, we would like this workshop to bridge the large gap between the community that develops novel theoretical approaches for road scene understanding and the community that builds working real-life systems performing in real-world conditions. To this end, we encourage submissions of original and unpublished work in the area of vision-based road scene understanding. The topics of interest include (but are not limited to):
- Prediction and modeling of road scenes and scenarios
- Semantic labeling, object detection and recognition in road scenes
- Dynamic 3D reconstruction, SLAM and ego-motion estimation
- Visual feature extraction, classification and tracking
- Design and development of robust and real-time architectures
- Use of emerging sensors (e.g., multispectral, RGB-D, LIDAR and LADAR)
- Fusion of RGB imagery with other sensing modalities
- Interdisciplinary contributions across computer vision, optics, robotics and other related fields.
We encourage researchers to submit not only theoretical contributions, but also work more focused on applications. Each paper will receive three double-blind reviews, moderated by the workshop chairs.
- Submission Deadline: July 12th (Extended!).
- Notification of Acceptance: July 21.
- Camera-ready Deadline: July 25.
- Workshop: October 9.
- Jose Alvarez (NICTA, Australia)
- Mathieu Salzmann (EPFL, Switzerland)
- Lars Petersson (NICTA, Australia)
- Fredrik Kahl (Chalmers University of Technology, Sweden)
- Bart Nabbe (Faraday Future, USA)
Papers should describe original and unpublished work on the above or closely related topics. Each paper will receive double-blind reviews, moderated by the workshop chairs. Authors should take into account the following:
- All papers must be written in English and submitted in PDF format.
- Papers must be submitted online through the CMT submission system. The submission site is: https://cmt.research.microsoft.com/CVRSUAD2016.
- The maximum paper length is 12 pages; shorter submissions are also welcome. The workshop paper format guidelines are the same as for the main conference papers.
- Submissions will be rejected without review if they contain more than 12 pages (excluding references), violate the double-blind policy, or violate the dual-submission policy. The author kit provides a LaTeX2e template for submissions and an example paper demonstrating the format. Please refer to this example for detailed formatting instructions.
- A paper ID will be allocated to you during submission. Please replace the asterisks in the example paper with your paper's own ID before uploading your file. More detailed instructions can be found at the main conference website.