SCS Student Seminar Series


The Next Talk

Motion Synthesis of Conversations for Background Characters

Friday, April 12th, 2019 from 12-1 pm in GHC 6501.

Yanzhe Yang, CSD

Social scenes are common in many video games and movies. To realistically recreate such scenes in computer graphics, it is critical to animate the conversing human characters. A social scene typically contains foreground characters and background characters. While the foreground characters are the focus of the scene and thus are carefully created by artists, the sole purpose of the background characters is to render the atmosphere and add realism to the environment. With traditional content creation tools, however, artists often need to spend as much time animating the background characters as the foreground characters, even though the exact behavior of the background characters is not critical to the experience -- they only need to behave naturally so that they do not distract the audience. In this talk, I will introduce a system I have developed that helps artists rapidly generate animations for the talking characters in the background. The system automatically generates body motions for two talking characters from an audio recording of a conversation. To produce natural-looking animations, the system must ensure that the characters' body motions are smooth and synchronized with the rhythm of the audio. For example, a speaker often uses hand gestures when stating an important point, and a listener will nod to acknowledge what the speaker is saying.

My talk will start with how we captured a database of real conversations and studied the statistics of the synchrony between body motion and audio signals in that data. I will then focus on the key algorithm of our system, which generates novel motion sequences from an input audio recording based on the captured data, and conclude with results from a user study that demonstrates the effectiveness of our system.
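(The abstract does not describe the algorithm itself. Purely as an illustration of the kind of audio-to-motion synchronization being discussed, here is a minimal sketch that picks a captured motion clip whose movement energy best tracks the loudness envelope of an audio segment. All names -- audio_envelope, motion_energy, pick_clip, clip_library -- are hypothetical and are not taken from the system presented in the talk, which is data-driven in a more sophisticated way.)

    import numpy as np

    def audio_envelope(audio, sr, hop=0.1):
        """Coarse loudness envelope: RMS over fixed-length windows of `hop` seconds."""
        win = int(sr * hop)
        n = len(audio) // win
        frames = audio[: n * win].reshape(n, win)
        return np.sqrt((frames ** 2).mean(axis=1))

    def motion_energy(clip):
        """Per-frame movement proxy: mean joint speed of a (frames, joints, 3) clip."""
        vel = np.diff(clip, axis=0)
        return np.linalg.norm(vel, axis=2).mean(axis=1)

    def pick_clip(envelope_segment, clip_library):
        """Choose the captured clip whose motion-energy curve best correlates with
        the audio loudness over this segment (a crude notion of being 'in sync')."""
        best, best_score = None, -np.inf
        for clip in clip_library:
            e = motion_energy(clip)
            # Resample the clip's energy curve to the segment length before comparing.
            e = np.interp(np.linspace(0, 1, len(envelope_segment)),
                          np.linspace(0, 1, len(e)), e)
            score = np.corrcoef(envelope_segment, e)[0, 1]
            if score > best_score:
                best, best_score = clip, score
        return best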

Based on joint work with Jimei Yang and Jessica Hodgins.

In Partial Fulfillment of the Speaking Requirement.


Spring 2019 Schedule
Mon, Jan 14 GHC 6501 Expired
Fri, Jan 18 GHC 6501 Expired
Mon, Jan 21 GHC 6501 Expired
Fri, Jan 25 GHC 6501 Expired
Mon, Jan 28 GHC 6501 Expired
Fri, Feb 1 GHC 6501 Expired
Mon, Feb 4 GHC 6501 Expired
Fri, Feb 8 GHC 6501 Ziqiang Feng Edge-based Discovery of Training Data for Machine Learning
Mon, Feb 11 GHC 6501 Expired
Fri, Feb 15 GHC 6501 Expired
Mon, Feb 18 GHC 6501 Expired
Fri, Feb 22 GHC 6501 Expired
Mon, Feb 25 GHC 6501 AVAILABLE
Fri, Mar 1 GHC 6501 AVAILABLE
Mon, Mar 4 GHC 6501 AVAILABLE
Fri, Mar 8 GHC 6501 AVAILABLE
Mon, Mar 11 GHC 6501 AVAILABLE
Fri, Mar 15 GHC 6501 AVAILABLE
Mon, Mar 18 GHC 6501 AVAILABLE
Fri, Mar 22 GHC 6501 AVAILABLE
Mon, Mar 25 GHC 6501 AVAILABLE
Fri, Mar 29 GHC 6501 Daehyeok Kim Booked, reserved for other purposes, or not available
Mon, Apr 1 GHC 6501 AVAILABLE
Fri, Apr 5 GHC 6501 AVAILABLE
Mon, Apr 8 GHC 6501 AVAILABLE
Fri, Apr 12 GHC 6501 Yanzhe Yang Motion Synthesis of Conversations for Background Characters
Mon, Apr 15 GHC 6501 AVAILABLE
Fri, Apr 19 GHC 6501 AVAILABLE
Mon, Apr 22 GHC 6501 AVAILABLE
Fri, Apr 26 GHC 6501 AVAILABLE
Mon, Apr 29 GHC 6501 AVAILABLE
Fri, May 3 GHC 6501 AVAILABLE
Mon, May 6 GHC 6501 AVAILABLE
Fri, May 10 GHC 6501 Talib Aghayev Booked, reserved for other purposes, or not available


General Info

The Student Seminar Series is an informal research seminar by and for SCS graduate students from noon to 1 pm on Mondays and Fridays. Lunch is provided by the Computer Science Department (personal thanks to Debbie Cavlovich!). At each meeting, a different student speaker will give an informal, 40-minute talk about his/her research, followed by questions/suggestions/brainstorming. We try to attract people with a diverse set of interests, and encourage speakers to present at a very general, accessible level.

So why are we doing this, and why take part? In the best-case scenario, the seminar leads to interesting cross-disciplinary work among people in different fields, and attendees pick up new ideas about their own research. In the worst-case scenario, a few people get to practice their public speaking and the rest get together for a free lunch.


Guideline & Speaking Requirement Need-to-Know

Note: Step #1 below applies to all SSS speakers. You can schedule AT MOST THREE talks per semester.

SSS is an ideal forum for SCS students to give presentations that count toward fulfilling their speaking requirements. The specifics, though, vary with each department. For instance, students in CSD will need to be familiar with the notes in Section 8 of the Ph.D. document and follow the instructions outlined on the Speakers Club homepage. Roughly speaking, these are the steps:

  1. Schedule a talk with SSS by sending your name, department name, your talk title, talk abstract (including additional info like "Joint work with..." or "In Partial Fulfillment of the Speaking Requirement"), and a link to your home page to sss@cs at least TWO WEEKS before your scheduled talk.
  2. After you are confirmed with your SSS slot, go to the Speakers Club Calendar and schedule your talk at least THREE WEEKS in advance of the talk date.
  3. On the day of your talk, make sure you print Speakers Club evaluation forms for your evaluators to use.
Students outside of CSD will need to check with their respective departments regarding the procedure. As another example, ISRI students fulfill their speaking requirements by attending a semesterly Software Research Seminar and giving X number of presentations per school year. If you have experience with your department's process that might help others, please feel free to contribute your knowledge by emailing us. Thank you!


SSS Coordinators

Qing Zheng, CSD



Web contact: sss+www@cs