Zhou Yu 俞舟
Zhou (pronounced like "Jo")
Language Technologies Institute, Carnegie Mellon University
GHC 6605, 5000 Forbes Ave, Pittsburgh, PA 15213
I am a PhD student at the Language Technologies Institute in the School of Computer Science, Carnegie Mellon University, working with Prof. Alan W Black and Prof. Alexander I. Rudnicky. In the summers of 2015 and 2016, I interned with Prof. David Suendermann-Oeft at the ETS San Francisco office on cloud-based multimodal dialog systems. In Fall 2014, I interned with Dan Bohus and Eric Horvitz at Microsoft Research on situated multimodal dialog systems.
Prior to CMU, I received a B.S. in Computer Science and a B.A. in Linguistics from Zhejiang University in 2011. There, I worked with Prof. Xiaofei He and Prof. Deng Cai on machine learning and computer vision, and with Prof. Yunhua Qu on machine translation.
I design algorithms for real-time intelligent interactive systems that coordinate with user actions beyond spoken language, including nonverbal behaviors, to achieve effective and natural communication. In particular, I optimize human-machine communication through studies of multimodal sensing and analysis, speech and natural language processing, machine learning, and human-computer interaction. The central focus of my dissertation research is to bring these areas together to design, implement, and deploy end-to-end real-time interactive intelligent systems that plan globally, taking into account both the interaction history and current user actions, to achieve better user experience and task performance. I also enjoy collaborating with researchers from different backgrounds on interdisciplinary research in areas such as health care, education, and robotics.
I am on the academic job market.
Apr. 27th, I will be giving a talk at University of Cambridge.
Apr. 25th, I will be giving a talk at Heriot-Watt University.
Apr. 24th, I will be giving a talk at University of Edinburgh.
Mar. 30th, I will be giving a talk at Rochester.
Mar. 28th, I will be giving a talk at Rutgers.
Mar. 21st, I will be giving a talk at Facebook FAIR, NYC.
Mar. 6th, I will be giving a talk at UC Davis.
Feb. 24th, I will be giving a talk at UC Merced.
Feb. 9th, I will be giving a talk at University of Pennsylvania.
Jan. 31st, I will be giving a talk at University of Colorado, Boulder.
Jan. 26th, I will attend the Amazon Student Symposium in Seattle and give an oral presentation.
Jan. 19th, I will be giving a talk at University of Washington.
Jan. 18th, I will be giving a talk at Allen Institute for Artificial Intelligence (AI2).
Jan. 13th, I will be giving a talk at Ohio State University (OSU).
Our team, CMU Magnus, was selected to participate in the Amazon Alexa Prize Challenge with a $100,000 stipend and other support from Amazon! Congratulations to us all! webpage
Please try our chatbot: TickTock. Here is the webpage
A human-chatbot conversation database. Here is the webpage
Upcoming Talk: Algorithms and Systems for Social Chatbots, at Duolingo, Nov. 8
-Zhou Yu, Alan W Black and Alexander I. Rudnicky, Learning Conversational Systems that Interleave Task and Non-Task Content, IJCAI 2017 [pdf]
-Zhou Yu, Xinrui He, Alan W Black and Alexander I. Rudnicky, User Engagement Modeling in Virtual Agents Under Different Cultural Contexts, IVA 2016.
-Zhou Yu, Ziyu Xu, Alan W Black and Alexander Rudnicky, Strategy and Policy Learning for Non-Task-Oriented Conversational Systems, SIGDIAL 2016. [pdf]
-Zhou Yu, Leah Nicolich-Henkin, Alan W Black and Alexander Rudnicky, A Wizard-of-Oz Study on A Non-Task-Oriented Dialog Systems that Reacts to User Engagement, SIGDIAL 2016. [pdf]
-Zhou Yu, Ziyu Xu, Alan W Black and Alexander Rudnicky, Chatbot evaluation and database expansion via crowdsourcing, In Proceedings of the RE-WOCHAT workshop of LREC, 2016. [pdf]
-Sean Andrist, Dan Bohus, Zhou Yu, Eric Horvitz, Are You Messing with Me?: Querying about the Sincerity of Interactions in the Open World. HRI 2016. [pdf]
-Zhou Yu, Vikram Ramanarayanan, Robert Mundkowsky, Patrick Lange, Alan Black, Alexei Ivanov, David Suendermann-Oeft, Multimodal HALEF: An Open-Source Modular Web-Based Multimodal Dialog Framework, IWSDS 2016. [pdf]
-Alexei Ivanov, Patrick Lange, David Suendermann-Oeft, Vikram Ramanarayanan, Yao Qian, Zhou Yu and Jidong Tao, Speed vs. Accuracy: Designing an Optimal ASR System for Spontaneous Non-Native Speech in a Real-Time Application, IWSDS 2016. [pdf]
-Zhou Yu, Vikram Ramanarayanan, David Suendermann-Oeft, Xinhao Wang, Klaus Zechner, Lei Chen, Jidong Tao and Yao Qian, Using Bidirectional LSTM Recurrent Neural Networks to Learn High-Level Abstractions of Sequential Features for Automated Scoring of Non-Native Spontaneous Speech, ASRU 2015. [pdf]
-Zhou Yu, Dan Bohus and Eric Horvitz, Incremental Coordination: Attention-Centric Speech Production in a Physically Situated Conversational Agent, SIGDIAL 2015. [pdf]
- Zhou Yu, Alexandros Papangelis, Alexander Rudnicky, TickTock: Engagement Awareness in a Non-Goal-Oriented Multimodal Dialogue System, AAAI Spring Symposium on Turn-taking and Coordination in Human-Machine Interaction 2015. [pdf][slides]
- Zhou Yu, Stefan Scherer, David Devault, Jonathan Gratch, Giota Stratou, Louis-Philippe Morency and Justine Cassell, Multimodal Prediction of Psychological Disorder: Learning Verbal and Nonverbal Commonality in Adjacency Pairs, SEMDIAL 2013. [pdf] [slides]
- Zhou Yu, David Gerritsen, Amy Ogan, Alan W Black, Justine Cassell, Automatic Prediction of Friendship via Multi-modal Dyadic Features, SIGDIAL 2013. [pdf]
- Zhou Yu, Deng Cai, Xiaofei He, Error-correcting Output Hashing in Fast Similarity Search, Best Paper Award, The Second International Conference on Internet Multimedia Computing and Service (ICIMCS), Harbin, China, Dec. 2010. [pdf]
TickTock: a multimodal chatbot with user engagement coordination
- Below is a demo of using automatically generated conversational strategies to improve user engagement.
Direction-giving Robot: a direction-giving humanoid robot with user attention coordination
- Below is a demo, along with examples of real users interacting with the robot.
HALEF: a distributed web-based multimodal dialog system with user engagement coordination
- Below is a demo of an Amazon Mechanical Turk worker interacting with our job interview training application via a web browser. It live-streams video from the user's local webcam to the server.