Introductory Overview Lecture
The Deep Learning Revolution


JSM 2018 Tutorial

2:00pm - 3:40pm, Sunday July 29, 2018


Schedule


2:00 - 2:25
Part I: Introduction to Deep Learning (Chris)

2:25 - 2:50
Part II: Optimization, Regularization (Russ)

2:50 - 3:15
Part III: Unsupervised Learning, Deep Generative Models (Russ)

3:15 - 3:40
Part IV: Extended Neural Net Architectures and Applications (Chris)



Abstract

Deep Learning---broadly speaking, a class of methods based on many-layer neural networks---has seen an explosion of interest within Machine Learning in recent years. It has proven to be an extremely useful tool in applications spanning computer vision, natural language processing, robotics and control, and many other areas. Even apart from these settings, many would argue that Deep Learning is the best "black-box, off-the-shelf" prediction method available. Should Statisticians now be using Deep Learning for everything? Is this "black-box" really so easy to use, and moreover, can it be opened? Is there room for Statisticians to contribute to the understanding of and/or further development of Deep Learning "models"?

This Introductory Overview Lecture provides an overview of some of the most popular and powerful Deep Learning methods, details their application in various data settings, and addresses the questions raised above. Talks will be given by Chris Manning and Ruslan Salakhutdinov, two of the foremost researchers in Deep Learning today. The session will be split into four parts, given by Chris and Ruslan in alternating fashion.