Natural language has inherent structure: words compose with one another into hierarchical structures that convey meaning, and such compositional structure is ubiquitous at all levels of language. Despite the recent, enormous success of deep neural networks in NLP, capturing this discrete, combinatorial structure remains challenging. In this talk, I will present two directions toward an integration of deep learning and language structure. First, we will see how language structure can serve as a rich source of prior knowledge to improve language modeling and representation learning. Second, we will explore how advances in model parameterization and inference, in particular deep learning, can be used as a computational tool to discover linguistic structure from raw text.
Yoon Kim is a fifth-year PhD student at Harvard University, advised by Alexander Rush. His research is at the intersection of natural language processing and machine learning, and he is the recipient of a Google AI PhD Fellowship.
Faculty Host: Graham Neubig
Language Technologies Institute