Natural language generation is a key building block for human-computer communication, including tasks such as dialogue and summarization. Recent advances in neural network-based text generation open up new opportunities in applications such as text style transfer and story generation. However, we do not yet have reliable, domain-independent text generators. In this talk, I will discuss our recent work on exploiting structure for better natural language generation. I will start by introducing a new negotiation dialogue task and methods for separating strategy control from utterance generation. Then, I will talk about text style transfer by exploiting the locality of style markers. I will conclude with some analysis of LSTM language models.
He He is an applied scientist at Amazon AWS in Palo Alto and will be starting as an assistant professor at NYU next fall. Previously, she was a postdoctoral researcher at Stanford University. She earned her Ph.D. in Computer Science at the University of Maryland, College Park. She is interested in natural language processing and machine learning. Her research focuses on building intelligent agents that work in changing environments and interact with people, with current emphasis on dialogue and text generation.
Faculty Host: Yulia Tsvetkov
Refreshments at 4:00 pm in the 5th floor kitchen area.