This is an exciting time to be studying language in the brain. Newly proposed NLP methods that can represent the meaning of sequences of words allow us to relate representations of the meaning of text to the brain activity recorded while participants read that text. What can this tell us about the brain? What can it tell us about those NLP models? Is there a benefit to combining both into a common model? In this talk I will set up the background behind this approach and discuss recent progress on these three questions.
Leila Wehbe is an assistant professor in the Machine Learning Department and the Neuroscience Institute at Carnegie Mellon University. Previously, she was a postdoctoral researcher at the Helen Wills Neuroscience Institute at UC Berkeley, working with Jack Gallant. She obtained her PhD from the Machine Learning Department at Carnegie Mellon University, where she worked with Tom Mitchell. She studies how language is represented in the brain when subjects engage in naturalistic language tasks, combining functional neuroimaging with natural language processing and machine learning.