Zhou Yu   俞舟

PhD Student

Language Technologies Institute, Carnegie Mellon University

Address:

GHC 6605, 5000 Forbes Ave, Pittsburgh, PA 15213

zhouyu@cs.cmu.edu


Welcome

I am a PhD student at the Language Technologies Institute in the School of Computer Science, Carnegie Mellon University, working with Prof. Alan W Black and Prof. Alexander I. Rudnicky. In the summers of 2015 and 2016, I interned with Prof. David Suendermann-Oeft at the ETS San Francisco office, working on cloud-based multimodal dialog systems. In fall 2014, I interned with Dan Bohus and Eric Horvitz at Microsoft Research, working on situated multimodal dialog systems.

Prior to CMU, I received a B.S. in Computer Science and a B.A. in English Language with a focus on Linguistics from Zhejiang University in 2011. There I worked with Prof. Xiaofei He and Prof. Deng Cai on Machine Learning and Computer Vision, and with Prof. Yunhua Qu on Machine Translation for my English Language degree.


Research Interests

My research aims to leverage automatically obtainable multimodal information, together with machine learning methods, to make conversations more natural and effective. The dynamics of both the verbal and nonverbal behaviors of the conversational parties contribute to the process and outcome of a conversation. To understand human-human and human-dialog-system interactions and improve the system's underlying model, I design methods that predict conversation partners' attention and engagement in real time from both verbal and nonverbal behaviors, such as gaze and smiles. I then leverage these signals to adapt the system's conversational strategies on the fly to accommodate users.


News

One long paper accepted at IVA 2016.

Two long papers accepted at SIGDIAL 2016. See you all in LA this September.

We just released the code and the collected database for our chatbot, TickTock. TickTock has its own webpage, where you can interact with it through a web service and get first-hand experience with a chatbot. I am also co-organizing a shared task on chatbots; you can participate in the shared task here: webpage


Selected Publications

- Zhou Yu, Xinrui He, Alan W Black and Alexander Rudnicky, User Engagement Modeling in Virtual Agents Under Different Cultural Contexts, to appear in IVA 2016.

- Zhou Yu, Ziyu Xu, Alan W Black and Alexander Rudnicky, Strategy and Policy Learning for Non-Task-Oriented Conversational Systems, to appear in SIGDIAL 2016. [draft]

- Zhou Yu, Leah Nicolich-Henkin, Alan W Black and Alexander Rudnicky, A Wizard-of-Oz Study on A Non-Task-Oriented Dialog Systems that Reacts to User Engagement, to appear in SIGDIAL 2016. [draft]

- Zhou Yu, Ziyu Xu, Alan W Black and Alexander Rudnicky, Chatbot Evaluation and Database Expansion via Crowdsourcing, in Proceedings of the RE-WOCHAT workshop of LREC, 2016. [pdf]

- Sean Andrist, Dan Bohus, Zhou Yu and Eric Horvitz, Are You Messing with Me?: Querying about the Sincerity of Interactions in the Open World, HRI 2016. [pdf]

- Zhou Yu, Vikram Ramanarayanan, Robert Mundkowsky, Patrick Lange, Alan Black, Alexei Ivanov and David Suendermann-Oeft, Multimodal HALEF: An Open-Source Modular Web-Based Multimodal Dialog Framework, IWSDS 2016. [pdf]

- Alexei Ivanov, Patrick Lange, David Suendermann-Oeft, Vikram Ramanarayanan, Yao Qian, Zhou Yu and Jidong Tao, Speed vs. Accuracy: Designing an Optimal ASR System for Spontaneous Non-Native Speech in a Real-Time Application, to appear in IWSDS 2016. [pdf]

- Zhou Yu, Vikram Ramanarayanan, David Suendermann-Oeft, Xinhao Wang, Klaus Zechner, Lei Chen, Jidong Tao and Yao Qian, Using Bidirectional LSTM Recurrent Neural Networks to Learn High-Level Abstractions of Sequential Features for Automated Scoring of Non-Native Spontaneous Speech, to appear in ASRU 2015. [pdf]

- Zhou Yu, Dan Bohus and Eric Horvitz, Incremental Coordination: Attention-Centric Speech Production in a Physically Situated Conversational Agent, SIGDIAL 2015. [pdf]

- Zhou Yu, Alexandros Papangelis, Alexander Rudnicky, TickTock: Engagement Awareness in a non-Goal-Oriented Multimodal Dialogue System, AAAI Spring Symposium on Turn-taking and Coordination in Human-Machine Interaction 2015. [pdf][slides]

- Zhou Yu, Stefan Scherer, David Devault, Jonathan Gratch, Giota Stratou, Louis-Philippe Morency and Justine Cassell, Multimodal Prediction of Psychological Disorder: Learning Verbal and Nonverbal Commonality in Adjacency Pairs, SEMDIAL 2013. [pdf] [slides]

- Zhou Yu, David Gerritsen, Amy Ogan, Alan W Black, Justine Cassell, Automatic Prediction of Friendship via Multi-model Dyadic Features, SIGDIAL, 2013. [pdf]

- Zhou Yu, Deng Cai, Xiaofei He, Error-correcting Output Hashing in Fast Similar Search, best paper at the Second International Conference on Internet Multimedia Computing and Service (ICIMCS), Harbin, China, Dec. 2010. [pdf]


Demo Videos

TickTock: an engagement-aware multimodal dialog system
- below is a demo of a participant chatting with TickTock.

Direction-giving Robot: an attention-aware direction-giving humanoid robot
- below is a demo and some recordings of real users interacting with the robot.

HALEF: a distributed web-based multimodal dialog system
- below is a demo of Zhou applying for a pizza delivery job. Users can access the system through a web browser; it live-streams video from the user's local webcam to the server.


More about Zhou

- CV [pdf]