The Panoptic Studio: A Massively Multiview System for
Social Motion Capture (in ICCV 2015)

Hanbyul Joo, Hao Liu, Lei Tan, Lin Gui, Bart Nabbe
Iain Matthews, Takeo Kanade, Shohei Nobuhara, and Yaser Sheikh

Carnegie Mellon University

Notes

  • Our new Panoptic Studio dataset is now publicly available: Panoptic Studio dataset website.
  • An extended version of the method is available on arXiv (currently submitted to a journal).
  • This page describes our work published at ICCV 2015.

Abstract

We present an approach to capture the 3D structure and motion of a group of people engaged in a social interaction. The core challenges in capturing social interactions are: (1) occlusion is functional and frequent; (2) subtle motion needs to be measured over a space large enough to host a social group; and (3) human appearance and configuration variation is immense. The Panoptic Studio is a system organized around the thesis that social interactions should be measured through the perceptual integration of a large variety of view points. We present a modularized system designed around this principle, consisting of integrated structural, hardware, and software innovations. The system takes, as input, 480 synchronized video streams of multiple people engaged in social activities, and produces, as output, the labeled time-varying 3D structure of anatomical landmarks on individuals in the space. The algorithmic contributions include a hierarchical approach for generating skeletal trajectory proposals, and an optimization framework for skeletal reconstruction with trajectory reassociation.
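
The algorithmic details are given in the paper; as a generic illustration of the underlying multiview idea, the sketch below fuses observations of a single anatomical landmark from many calibrated views into one 3D point via direct linear transform (DLT) triangulation. This is not the paper's implementation: the function triangulate_dlt, the synthetic camera fan, and the noise-free observations are assumptions made purely for the example.

    import numpy as np

    def triangulate_dlt(projection_matrices, points_2d):
        """Fuse 2D observations of one landmark from many calibrated views.

        projection_matrices: list of 3x4 camera matrices P_i.
        points_2d: list of matching (x, y) image observations.
        Returns the 3D point minimizing the algebraic (DLT) error.
        """
        rows = []
        for P, (x, y) in zip(projection_matrices, points_2d):
            # Each view adds two linear constraints on the homogeneous point X:
            #   x * (P[2] @ X) = P[0] @ X   and   y * (P[2] @ X) = P[1] @ X
            rows.append(x * P[2] - P[0])
            rows.append(y * P[2] - P[1])
        A = np.stack(rows)
        # The solution is the right singular vector of A with the smallest singular value.
        X = np.linalg.svd(A)[2][-1]
        return X[:3] / X[3]  # de-homogenize

    if __name__ == "__main__":
        # Synthetic check: ten cameras fanned around a scene (illustrative geometry only).
        X_true = np.array([0.3, -0.2, 2.5])
        Ps, obs = [], []
        for k in range(10):
            ang = 0.2 * (k - 5)
            c, s = np.cos(ang), np.sin(ang)
            R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
            t = np.array([0.1 * k, 0.0, 0.0])
            P = np.hstack([R, t[:, None]])
            x = P @ np.append(X_true, 1.0)
            Ps.append(P)
            obs.append(x[:2] / x[2])
        print(triangulate_dlt(Ps, obs))  # ~ [0.3, -0.2, 2.5]

With hundreds of views, as in the studio, the same least-squares fusion becomes far more robust to occlusion, since any individual landmark only needs to be visible in a subset of the cameras.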

Publication

Panoptic Studio: A Massively Multiview System for Social Motion Capture
Hanbyul Joo, Hao Liu, Lei Tan, Lin Gui, Bart Nabbe, Iain Matthews, Takeo Kanade, Shohei Nobuhara, Yaser Sheikh
In ICCV 2015. (Oral Presentation)
[Paper (PDF)] [Supplementary Material] [Slides (PDF)] [BibTeX]

Oral Talk


Videos

Social Motion Capture (Joo et al., ICCV 2015)



Acknowledgements

This research was supported by the National Science Foundation under Grants No. 1353120 and 1029679, and in part by ONR grant 11628301.