Tuesday, Nov 10, 2020. 12:00 noon - 01:00 PM ET


Oriol Vinyals -- Model-free vs Model-based Reinforcement Learning

Abstract: In this talk, we will review model-free and model-based RL, two paradigms that have enabled major breakthroughs in AI research, including the ability to defeat professionals at the games of Go, Poker, StarCraft, and DOTA, as well as advances in other fields such as Robotics. Using the AlphaGo and AlphaStar agents as examples, I'll present one approach from each paradigm, and will conclude the talk by presenting some exciting new research directions that may unlock the power of model-based RL in a wider variety of environments, including those that are stochastic or partially observable, or that have complex observation and action spaces.

Bio: Oriol Vinyals is a Principal Scientist at Google DeepMind and a team lead of the Deep Learning group. His work focuses on Deep Learning and Artificial Intelligence. Prior to joining DeepMind, Oriol was part of the Google Brain team. He holds a Ph.D. in EECS from the University of California, Berkeley, and is a recipient of the 2016 MIT TR35 innovator award. His research has been featured multiple times in the New York Times, Financial Times, WIRED, the BBC, and other outlets, and his articles have been cited over 90,000 times. Some of his contributions, such as seq2seq, knowledge distillation, and TensorFlow, are used in Google Translate, Text-To-Speech, and Speech Recognition, serving billions of queries every day. He was the lead researcher of the AlphaStar project, which created an agent that defeated a top professional at the game of StarCraft, achieved Grandmaster level, and was featured on the cover of Nature. At DeepMind he continues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, deep learning, and reinforcement learning.