Statistical 3D human body models have helped us to better understand human shape and motion and have already enabled exciting new applications. However, if we want to learn detailed, personalized, and clothed models of human shape, motion, and dynamics, we need new approaches that learn from ubiquitous data such as plain RGB images and video. I will discuss recent advances in personalized body shape and clothing estimation from monocular video, from a few frames, and even from a single image. We have developed effective methods to learn detailed avatars without the need for expensive scanning equipment. These methods are easy to use and enable personalized avatar creation, for example for VR and AR applications. I will conclude my talk by outlining the next challenges in human shape reconstruction.
Thiemo Alldieck obtained his Master’s degree in Vision, Graphics and Interactive Systems from Aalborg University (Denmark) in 2015. Since 2016, he has been a Ph.D. student at the Computer Graphics Lab at TU Braunschweig (Germany). Currently, he is a research intern at Facebook Reality Labs, Pittsburgh. In 2018, he was a visiting Ph.D. student in the Real Virtual Humans and Graphics, Vision & Video groups at the Max Planck Institute for Informatics, Saarbrücken (Germany), and he continues to cooperate closely with these groups on the topic of monocular human shape reconstruction. His work has been published at various computer vision conferences, including CVPR, ICCV, and 3DV.
The VASC seminar is supported in part by Facebook Reality Labs, Pittsburgh.