Language Technologies Institute Colloquium

  • YEJIN CHOI
  • Associate Professor
  • Paul G. Allen School of Computer Science & Engineering
  • University of Washington

From Naive Physics to Connotation: Learning about the World from Language

Intelligent communication requires reading between the lines, which, in turn, requires rich background knowledge about how the world works. However, learning unspoken commonsense knowledge from language is nontrivial, as people rarely state the obvious, e.g., "my house is bigger than me." In this talk, I will discuss how we can recover such trivial everyday knowledge from language alone, without an embodied agent. A key insight is this: the implicit knowledge people share and assume systematically influences the way people use language, which provides indirect clues for reasoning about the world. For example, if "Jen entered her house", it must be that her house is bigger than her.
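
As a rough illustration of this insight (a minimal sketch, not the method presented in the talk), one can mine reporting patterns like "X entered Y" for relative-size evidence: if X entered Y, then Y is plausibly bigger than X. The verb list, the pattern, and the toy corpus below are all hypothetical.

```python
# Illustrative sketch: harvesting relative-size evidence from containment
# verbs such as "entered". Verbs, regex, and sentences are hypothetical.
import re
from collections import Counter

# Hypothetical containment verbs: "X <verb> Y" suggests size(Y) > size(X).
PATTERN = re.compile(r"(\w+) (entered|exited|left) (?:his |her |the |a )?(\w+)")

def harvest_size_evidence(sentences):
    """Count how often text implies one object is larger than another."""
    evidence = Counter()
    for sentence in sentences:
        match = PATTERN.search(sentence.lower())
        if match:
            smaller, _, larger = match.groups()
            evidence[(larger, smaller)] += 1  # larger contains smaller
    return evidence

corpus = [
    "Jen entered her house.",
    "The cat entered the box.",
    "Sam left the building.",
]
for (big, small), count in harvest_size_evidence(corpus).items():
    print(f"size({big}) > size({small})  [{count} observation(s)]")
```

Aggregated over a large corpus, such counts become statistical evidence for physical knowledge that is almost never stated directly.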

In this talk, I will first present how we can organize various aspects of commonsense — ranging from naive physics knowledge to more pragmatic connotations — by adapting representations of frame semantics. I will then discuss neural network approaches that complement the frame-centric approaches. I will conclude the talk by discussing the challenges in current models and formalisms, pointing to avenues for future research.
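
To make the frame-centric idea concrete, the following is a hypothetical sketch of a frame-style record that could organize both naive physics and connotation around a predicate; the field names and the example instance are assumptions, not the speaker's actual formalism.

```python
# Hypothetical frame-style record, loosely inspired by frame semantics;
# the schema below is an assumption for illustration only.
from dataclasses import dataclass, field

@dataclass
class CommonsenseFrame:
    predicate: str                  # the evoking verb or event
    roles: dict                     # frame elements, e.g., agent, container
    physical_implications: list = field(default_factory=list)
    connotations: list = field(default_factory=list)

frame = CommonsenseFrame(
    predicate="enter",
    roles={"agent": "Jen", "container": "house"},
    physical_implications=["size(container) > size(agent)"],
    connotations=[],  # e.g., sentiment or intent cues, when present
)
print(frame)
```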

Yejin Choi’s primary research interests are in the fields of Natural Language Processing, Machine Learning, and Artificial Intelligence, with broader interests in Computer Vision and Digital Humanities.

Language and X {vision, mind, society...}: Intelligent communication requires the ability to read between the lines and to reason beyond what is said explicitly. Her recent research falls under two broad themes: (i) learning the contextual, grounded meaning of language from the various contexts in which language is used, both physical (e.g., visual) and abstract (e.g., social, cognitive), and (ii) learning background knowledge about how the world works, latent in large-scale multimodal data. More specifically, her research interests include:

  • Language Grounding with Vision: Learning semantic correspondences between language and vision at a very large scale, addressing tasks such as image captioning, multimodal knowledge learning, and reasoning.
  • Procedural Language: Learning to interpret instructional language (e.g., cooking recipes) as action diagrams, and learning to compose a coherent natural language instruction that accomplishes a given goal and agenda.
  • Knowledge and Reasoning: Statistical learning of commonsense knowledge from large-scale multimodal data, for example, learning physical properties (e.g., size) of common objects.
  • Language Generation: Situated language generation, conversation, storytelling, integrating multimodality and stochastic knowledge about actions, events, and affects.
  • Connotation and Intention: Statistical models to infer the communicative goals and the (hidden) intent of the author, e.g., deceptive intent, by learning statistical regularities in how something is said (form & style) in addition to what is said (content).

Instructor: Graham Neubig
