Language Technologies Institute Colloquium

  • VERED SHWARTZ
  • Post-doctoral Researcher
  • Allen Institute for Artificial Intelligence (AI2)
  • Paul G. Allen School of Computer Science & Engineering, University of Washington

What do I know? Pushing the boundaries of existing world knowledge

Natural language understanding often requires non-trivial reasoning that relies on pre-existing knowledge to fill in gaps. Most current NLP models are built upon pre-trained language models (LMs), which serve as a representation layer and often as the sole source of world knowledge. In this talk I will propose a generic framework for NLP tasks that attempts to make the most of the general knowledge captured by pre-trained LMs by actively querying them for additional information. The model employs a "discovery learning" approach that draws on existing knowledge to discover new truths, and it requires no additional supervision. I will conclude by discussing the tension between the dual roles of LMs: as knowledge base approximators vs. as meaning representations. Can these two roles co-exist? We will look at motivating examples, such as drawing general knowledge about people named Alice and Jason as opposed to people named Donald and Hillary.
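To make the "actively querying" idea concrete, below is a minimal sketch, not the speaker's actual method, of eliciting background knowledge from a pre-trained LM and feeding it back in as context before answering a question. It assumes the HuggingFace transformers library and the publicly available gpt2 checkpoint; the prompt templates are purely illustrative.

```python
# Minimal sketch (illustrative only, not the speaker's implementation):
# query a pre-trained LM for background knowledge, then reuse that
# knowledge as context when answering the original question.
# Assumes: HuggingFace `transformers` and the `gpt2` checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

question = "Why might someone bring an umbrella to work?"

# Step 1: ask the LM a clarification question to surface implicit world knowledge.
clarification_prompt = (
    "What is the purpose of an umbrella? The purpose of an umbrella is"
)
background = generator(
    clarification_prompt, max_new_tokens=20, do_sample=False
)[0]["generated_text"]

# Step 2: prepend the elicited knowledge to the original question and answer it.
answer_prompt = f"{background}\n{question} Answer:"
answer = generator(answer_prompt, max_new_tokens=20, do_sample=False)[0]["generated_text"]

print(answer)
```

No additional supervision is involved in this sketch: the same pre-trained LM both generates the clarifying knowledge and consumes it, which is the sense in which the framework "draws on existing knowledge to discover new truths."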

Vered Shwartz is a postdoctoral researcher at the Allen Institute for Artificial Intelligence (AI2) and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Previously, she completed her PhD in Computer Science at Bar-Ilan University under the supervision of Prof. Ido Dagan. Her research interests include lexical semantics, multiword expressions, and commonsense reasoning.
