One of the holy grails of AR/VR is a telepresence system that feels indistinguishable from face-to-face interaction. A key enabling technology is the ability to create digital doubles: representations of humans that are indistinguishable from the real thing in how they look and move. Facebook Reality Labs in Pittsburgh has been working on automating both the creation of digital doubles and their animation during social interactions in VR. In this talk, I will give an overview of some of the technology behind our system and outline directions for future work.
Jason Saragih is a Director Research Scientist at Facebook Reality Labs (FRL). He works at the intersection of graphics, computer vision, and machine learning, specializing in human modeling. He received his Bachelor's in Mechatronics and his PhD in Computer Science from the Australian National University in 2004 and 2008, respectively. Prior to joining FRL in 2015, Jason developed computer vision systems for the mobile AR industry. He has also worked as a postdoc at CMU and as a research scientist at CSIRO, where he developed face tracking and modeling technologies.
The AI Seminar is generously sponsored by Apple.