Machine Learning Thesis Proposal

  • Gates Hillman Centers, Reddy Conference Room 4405
  • Mariya Toneva
  • Ph.D. Student
  • Machine Learning Department
  • Carnegie Mellon University

Bridging Language in Machines with Language in the Brain

The advent of neural networks for natural language processing (NLP) has resulted in models that capture complex language-relevant information, generating excitement both for NLP applications and for investigating neuroscientific questions about how the brain processes language. However, these neural NLP models are often trained without explicit language rules and, despite some progress in interpreting their representations, they remain largely a black box with respect to what they learn about language. In this thesis, we propose to make progress towards bridging the understanding of how machines and brains process language by 1) examining how previously established knowledge about one can help study the other, and 2) building a brain-aligned NLP model with benefits to both NLP and neurolinguistics.

(Machines -> Brains) Recent NLP models can provide contextual representations of words, inspiring neuroscientists to use these rich representations to study how the brain processes language in context. However, different representations derived from a neural NLP model relate to each other in complex ways. For example, a representation from a middle layer of an NLP model, derived from a sequence of words, contains some information about the token-level representations of the individual words in that sequence. These relationships between different NLP representations may confound the interpretation of the relationships between the individual NLP representations and the brain. We present a method of disentangling the NLP representations of context from the representations of single words. We show that this leads to new discoveries about language processing in the brain and new insights into how two common brain recording methods -- fMRI and MEG -- capture contextual processing.
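
To make the disentangling concrete, here is a minimal sketch of one way to separate context from word-level information, assuming a residual approach: regress the contextual representations on context-independent token embeddings and keep what the token embeddings cannot explain. The file names, shapes, and choice of ridge regression are illustrative assumptions, not the exact pipeline used in the thesis.

    import numpy as np
    from sklearn.linear_model import Ridge

    # full_reps: (n_words, d_model) middle-layer representation of each word,
    # computed by the NLP model from the word and its preceding context.
    # word_reps: (n_words, d_word) context-independent token embeddings.
    full_reps = np.load("layer_contextual_reps.npy")  # hypothetical files
    word_reps = np.load("token_embeddings.npy")

    # Fit a linear map from token embeddings to contextual representations;
    # whatever this map explains is attributable to the individual words.
    word_to_context = Ridge(alpha=1.0).fit(word_reps, full_reps)

    # The residual is the part of the contextual representation that cannot
    # be predicted from the current word alone -- a context-only signal.
    context_only = full_reps - word_to_context.predict(word_reps)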

(Brains -> Machines) We next present a method that uses prior neurolinguistic evidence to evaluate the presence of specific brain-relevant information in the NLP model representations. The method presents the same text, word by word, both to a person in a neuroimaging device and to an NLP model, and measures how well the network-derived representations align with the brain recordings in relevant brain regions. We further propose to measure the brain-alignment of an NLP model after conducting an intervention, as a way to evaluate how effective the intervention was. An example of such an intervention is fine-tuning a model on the Winograd Schema Challenge data set to increase its common sense reasoning capabilities. Our proposed method of evaluation does not depend on an NLP test set, which may exhibit the same limitations as the intervention's training set. Taken together, our methods form a principled tool that can leverage decades' worth of neurolinguistic research to interpret and evaluate NLP models.
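
In practice, brain-alignment of this kind is typically measured with a linear encoding model. The sketch below, under assumed variable names and shapes, fits a ridge regression from word-aligned NLP representations to voxel responses in a region of interest and scores it by held-out correlation; comparing scores before and after an intervention (such as the Winograd fine-tuning above) gives the proposed evaluation. This is a sketch of the general technique, not the thesis's exact implementation.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    def brain_alignment(nlp_reps, brain_data, n_splits=4):
        """nlp_reps: (n_timepoints, d) model features aligned to the stimulus;
        brain_data: (n_timepoints, n_voxels) recordings from a relevant ROI.
        Returns the mean held-out Pearson correlation per voxel."""
        fold_scores = []
        for train, test in KFold(n_splits=n_splits).split(nlp_reps):
            model = RidgeCV(alphas=np.logspace(-1, 3, 10))
            model.fit(nlp_reps[train], brain_data[train])
            pred = model.predict(nlp_reps[test])
            # correlate predicted and observed time courses, voxel by voxel
            p = (pred - pred.mean(0)) / pred.std(0)
            b = (brain_data[test] - brain_data[test].mean(0)) / brain_data[test].std(0)
            fold_scores.append((p * b).mean(0))
        return np.mean(fold_scores, axis=0)

    # Evaluating an intervention then reduces to a difference in alignment:
    # brain_alignment(reps_after_finetune, roi) - brain_alignment(reps_before, roi)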

(Brains <-> Machines) Lastly, we show preliminary evidence that NLP models with improved brain-alignment may benefit both NLP and neurolinguistics. We propose to investigate the benefit to NLP further by evaluating whether the brain-aligned NLP model's representations adapt more easily to new tasks, and the benefit to neurolinguistics by evaluating whether the brain-aligned model produces sequence-level representations that are more easily decodable from the brain.
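
As an illustration of the decodability test, the sketch below uses the standard pairwise (2 vs. 2) identification procedure from the neurolinguistics literature: a linear decoder maps brain recordings back to sequence-level model representations, and accuracy is the fraction of held-out pairs for which the decoded vectors are closer to their own representations than to the swapped ones. The variable names and the choice of ridge regression are assumptions for the sketch.

    import numpy as np
    from sklearn.linear_model import Ridge

    def pairwise_decoding_accuracy(brain_data, seq_reps, train, test):
        """Train a linear decoder from brain recordings to sequence-level
        representations, then score 2-vs-2 identification on held-out data."""
        decoder = Ridge(alpha=1.0).fit(brain_data[train], seq_reps[train])
        decoded = decoder.predict(brain_data[test])
        true = seq_reps[test]
        correct, total = 0, 0
        for i in range(len(true)):
            for j in range(i + 1, len(true)):
                # the correct pairing should be closer than the swapped pairing
                match = (np.linalg.norm(decoded[i] - true[i])
                         + np.linalg.norm(decoded[j] - true[j]))
                swap = (np.linalg.norm(decoded[i] - true[j])
                        + np.linalg.norm(decoded[j] - true[i]))
                correct += match < swap
                total += 1
        return correct / total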

Thesis Committee: 
Leila Wehbe (Co-chair)
Tom Mitchell (Co-chair)
Michael Tarr
Chris Dyer (DeepMind)
Tal Linzen (Johns Hopkins University)
