Statistics-based Motion Synthesis for Social Conversations

Yanzhe Yang, Carnegie Mellon University
Jimei Yang, Adobe Research
Jessica Hodgins, Carnegie Mellon University

Proceedings of the 2020 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA 2020)

Abstract

Plausible conversations among characters are required to generate the ambiance of social settings such as a restaurant, hotel lobby, or cocktail party. In this paper, we propose a motion synthesis technique that can rapidly generate animated motion for characters in these settings. Our system synthesizes gestures and other body motions of dyadic conversations that synchronize with novel input audio clips. Human conversations feature many different forms of coordination and synchronization. For example, speakers use hand gestures to emphasize important points, and listeners often nod in agreement or acknowledgment. To achieve the desired degree of realism, our method first constructs a motion graph that preserves the statistics of a database of recorded conversations performed by a pair of actors. This graph is then used to search for a motion sequence that respects three forms of audio-motion coordination in human conversations: coordination with phoneme clauses, listener responses, and the partner's hesitation pauses. We assess the quality of the generated animations through a user study that compares them to the originally recorded motion, and we evaluate the effects of each type of audio-motion coordination via ablation studies.
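To give a flavor of the graph-search step described above, the following is a minimal, hypothetical sketch: a toy motion graph whose edge weights stand in for the transition statistics of the recorded database, walked greedily so that transitions match simplified audio event labels ("stress" for an emphasized phoneme clause, "partner_speech" for a listener response). All names, labels, and the greedy selection rule are illustrative assumptions, not the paper's actual algorithm.

```python
import random

# Toy motion graph: each node is a motion clip label; edges carry the
# transition counts observed in a (hypothetical) recorded database.
TRANSITIONS = {
    "idle":    {"gesture": 3, "idle": 5, "nod": 2},
    "gesture": {"idle": 4, "gesture": 2},
    "nod":     {"idle": 6},
}

def synthesize(audio_events, start="idle", seed=0):
    """Walk the graph, preferring transitions that match each audio event.

    audio_events: list of labels such as "stress" (speaker emphasis ->
    gesture), "partner_speech" (listener response -> nod), or None
    (no constraint). These labels are illustrative only.
    """
    rng = random.Random(seed)
    preferred = {"stress": "gesture", "partner_speech": "nod"}
    path = [start]
    for event in audio_events:
        options = TRANSITIONS[path[-1]]
        want = preferred.get(event)
        if want in options:
            # A coordination constraint is satisfiable: take that edge.
            path.append(want)
        else:
            # Otherwise sample a transition by its observed count, so the
            # output preserves the database's transition statistics.
            clips, counts = zip(*options.items())
            path.append(rng.choices(clips, weights=counts)[0])
    return path

result = synthesize(["stress", "partner_speech", None])
```

In the actual system, nodes hold motion-capture segments rather than labels, and the search optimizes all three coordination terms jointly over the input audio rather than greedily.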

Paper

SCA Paper (preprint, pdf, 9 MB)

Talk (20 min)

Talk Abstract (3 min)

BibTeX

Data

Dyadic Conversation (255 MB)

Video

Supplementary Material (151 MB)