Language Technologies Institute Colloquium

  • SAM BOWMAN
  • Assistant Professor
  • Center for Data Science and Department of Linguistics
  • New York University

Sentence Understanding with Neural Networks and Natural Language Inference

This talk will discuss two lines of work involving general-purpose neural network sentence encoders: learned functions that map natural language sentences to vectors (or sets of vectors) that are meant to capture their meanings in machine-readable form.
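For readers unfamiliar with the idea, the interface of such an encoder is simply a function from a sentence to a fixed-size vector. The sketch below is only an illustration of that interface; the hash-based stand-in is not a learned model and is not part of the work described in the talk.

    # Minimal sketch of the sentence-encoder interface: sentence -> vector.
    # The body is a deterministic placeholder, NOT an actual trained encoder.
    import hashlib

    def encode(sentence: str, dim: int = 4) -> list[float]:
        # Stand-in for a learned neural network: map text to `dim` floats.
        digest = hashlib.sha256(sentence.encode("utf-8")).digest()
        return [b / 255.0 for b in digest[:dim]]

    print(encode("A man is playing a guitar on stage."))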

The bulk of the talk will focus on SNLI and MultiNLI, two new datasets for the task of natural language inference (also known as recognizing textual entailment), in which a model must read a pair of sentences and judge whether the first sentence entails the second, contradicts it, or neither. These datasets make it possible to evaluate in a uniquely direct way the degree to which sentence encoding-based models understand language, and—with nearly one million examples—also offer a valuable data source for pretraining. The talk will close with some discussion of another open problem in sentence understanding—the role of syntactic parse trees in neural network-based modeling—and will present some results on models that attempt to learn a parser using only the supervision signal supplied by a downstream semantic task, with no access to parsed training data.
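To make the task concrete, the sketch below shows the shape of an NLI example: a premise, a hypothesis, and one of the three labels used in SNLI and MultiNLI. The sentences here are invented for illustration and are not drawn from either dataset.

    # Hypothetical NLI examples in the three-way SNLI/MultiNLI label scheme.
    examples = [
        {"premise": "A man is playing a guitar on stage.",
         "hypothesis": "A man is performing music.",
         "label": "entailment"},
        {"premise": "A man is playing a guitar on stage.",
         "hypothesis": "The stage is empty.",
         "label": "contradiction"},
        {"premise": "A man is playing a guitar on stage.",
         "hypothesis": "The man is a famous musician.",
         "label": "neutral"},
    ]

    for ex in examples:
        print(f'{ex["label"]:>13}: "{ex["premise"]}" -> "{ex["hypothesis"]}"')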

Sam Bowman is a second-year assistant professor at New York University, appointed in the Center for Data Science and the Department of Linguistics. He is the co-director of the Machine Learning for Language group and the CILVR applied machine learning lab. He completed a PhD in Linguistics in 2016 at Stanford University with Chris Manning and Chris Potts, and holds undergraduate and master's degrees in Linguistics from the University of Chicago. Sam has also spent time at Google Brain, TTI-Chicago, and Johns Hopkins University, and received a 2017 Google Faculty Research Award.

Sam's research focuses on the goal of building artificial neural network models for problems in natural language understanding, and includes work on model design, data collection, evaluation, unsupervised learning, and transfer learning.
