SCS Faculty Candidate Talk
- Gates&Hillman Centers
- ASA Conference Room 6115
- GABOR ANGELI
- Ph.D. Candidate
- Department of Computer Science
- Stanford University
Learning Open Domain Knowledge From Text
This talk describes a new technique for learning open-domain knowledge from unstructured web-scale text corpora. I first show how to capture common-sense facts: given a candidate statement about the world and a large corpus of known facts, is the statement likely to be true? We appeal to a probabilistic relaxation of natural logic, a logic that uses the syntax of natural language as its logical formalism, to define a search problem from the query statement to its appropriate support in the knowledge base over valid (or approximately valid) logical inference steps. We show a 4x improvement in retrieval recall compared to lemmatized lookup, while maintaining above 90% precision.
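The search described above can be sketched as a breadth-first search from the query toward a known fact, where each edge is a single natural-logic mutation. The hypernym table, knowledge base, and mutation operator below are toy assumptions for illustration only; the actual system searches over many more mutation types with learned costs.

```python
from collections import deque

# Toy hypernym table standing in for a real lexical resource (e.g., WordNet).
HYPERNYMS = {"cat": "feline", "feline": "animal", "dog": "canine", "canine": "animal"}

# Toy knowledge base of known facts, as (subject, relation, object) triples.
KNOWLEDGE_BASE = {("animal", "eats", "food")}

def mutations(fact):
    """Single-step natural-logic mutations: here, only the upward-monotone
    move of replacing a word with its hypernym."""
    for i, word in enumerate(fact):
        if word in HYPERNYMS:
            yield fact[:i] + (HYPERNYMS[word],) + fact[i + 1:]

def supported(query, max_steps=5):
    """Breadth-first search from the query statement toward a supporting
    fact in the knowledge base, over valid inference steps."""
    frontier, seen = deque([(query, 0)]), {query}
    while frontier:
        fact, depth = frontier.popleft()
        if fact in KNOWLEDGE_BASE:
            return True
        if depth < max_steps:
            for nxt in mutations(fact):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False
```

For example, `supported(("cat", "eats", "food"))` succeeds by mutating "cat" to "feline" and then to "animal", reaching a known fact in two steps.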
I extend this approach to handle longer, more complex premises by segmenting each utterance into a set of atomic statements entailed by it through natural logic. We evaluate this segmentation by using it as the central component of an Open Information Extraction system, and show that it achieves a 3% absolute improvement in F1 over prior work on a competitive knowledge base population task.
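A crude caricature of the segmentation step: split a long utterance into shorter clause-like statements. The pattern below is purely illustrative; the actual system operates over dependency parses with learned clause-splitting decisions, not surface punctuation.

```python
import re

def segment(utterance):
    """Very rough stand-in for clause segmentation: split a sentence on a
    few clause-level connectives and strip trailing punctuation. A real
    segmenter would work on a dependency parse, not surface strings."""
    parts = re.split(r",\s*(?:and|but|who|which)\s+|;\s*", utterance)
    return [p.strip(" .") for p in parts if p.strip(" .")]
```

For instance, `segment("Obama was born in Hawaii, and he served as president.")` yields two shorter statements that are easier to match against a knowledge base.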
Finally, I address how to elegantly handle situations where no supporting premise can be found for a query. To address this, we create an analogue of an evaluation function in game-playing search: a shallow lexical classifier is folded into the search procedure to serve as a heuristic assessing how likely we would have been to find a premise. Results on answering 4th-grade science questions show that this method improves over the classifier in isolation, a strong IR baseline, and prior work.
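The evaluation-function analogue might be sketched as follows: if the search reaches an exact supporting fact, the query scores 1.0; otherwise the search returns the best heuristic score seen among visited states, just as a game-tree search cut off early falls back on an evaluation function. The token-overlap heuristic, hypernym table, and knowledge base here are illustrative assumptions, not the talk's actual classifier.

```python
# Toy hypernym table and knowledge base; illustrative assumptions only.
HYPERNYMS = {"cat": "feline", "feline": "animal"}
KB = {("animal", "eats", "food")}

def overlap_score(fact, kb_fact):
    """Shallow lexical heuristic: Jaccard overlap of the two token sets,
    standing in for the lexical classifier described in the talk."""
    a, b = set(fact), set(kb_fact)
    return len(a & b) / len(a | b)

def mutations(fact):
    """Single-step hypernym replacements, as in the search sketch above."""
    for i, word in enumerate(fact):
        if word in HYPERNYMS:
            yield fact[:i] + (HYPERNYMS[word],) + fact[i + 1:]

def search_with_evaluation(query, max_steps=3):
    """Return 1.0 on exact support; otherwise the best heuristic score
    over all states visited during the (bounded) search."""
    frontier, seen, best = [(query, 0)], {query}, 0.0
    while frontier:
        fact, depth = frontier.pop()
        if fact in KB:
            return 1.0
        best = max(best, max(overlap_score(fact, k) for k in KB))
        if depth < max_steps:
            for nxt in mutations(fact):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return best
```

A query like `("rock", "eats", "food")` has no inference path to the knowledge base, so the search returns a partial-credit score from the lexical heuristic instead of failing outright.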
Gabor Angeli is a Ph.D. candidate at Stanford graduating this year, working in the Stanford Natural Language Processing Group under Professor Chris Manning. He graduated with honors from UC Berkeley in 2010. He has worked on a wide range of natural language processing tasks, including natural language generation and semantic parsing; his recent work is primarily on relation extraction and natural language inference. He has published eight papers at the top three NLP conferences.
Faculty Host: Eduard Hovy