Can we solve language understanding tasks without relying on task-specific annotated data? This question is especially important in scenarios where inputs span many domains and creating annotated data is expensive.
I discuss two language understanding problems (Question Answering and Entity Typing) that have traditionally relied on direct supervision. For these problems, I present two recent works in which exploiting properties of the underlying representations and indirect signals helps us move beyond traditional paradigms. As a result, we observe better generalization across domains.
Daniel Khashabi is a recent PhD graduate from the University of Pennsylvania, where he was advised by Prof. Dan Roth. His interests lie at the intersection of artificial intelligence and natural language processing, with the ultimate goal of improving natural language “understanding” and broadening its applications. He has published in prestigious conferences such as AAAI and NAACL, and he is a co-organizer of the Student Research Workshop at ACL 2019.
Faculty Host: Scott Fahlman
Language Technologies Institute