Investigating High-Level Representations in the Brain

I use functional Magnetic Resonance Imaging (fMRI) and Magnetoencephalography (MEG) to investigate how the brain represents the meaning of words, sentences and stories.

fMRI and MEG record brain activity as very high-dimensional, noisy images that are expensive to acquire. The number of data points in a typical experiment is therefore many orders of magnitude smaller than the number of data dimensions. Furthermore, there is considerable subject-to-subject variability in brain anatomy, which makes combining data from multiple subjects a hard problem. Part of my work is finding machine learning solutions to these challenges of brain imaging data.
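
To make the scale of the problem concrete, here is a minimal sketch of one standard way to cope with this regime: a heavily regularized linear model fit with scikit-learn. The data are simulated placeholders (the sample and dimension counts, the informative voxels, and the decoded stimulus property are illustrative assumptions, not actual recordings or my actual pipeline).

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data at a (down-scaled) realistic ratio:
# a few hundred samples, tens of thousands of dimensions per brain image.
n_samples, n_voxels = 300, 20_000
X = rng.standard_normal((n_samples, n_voxels))   # stand-in for noisy brain images
w = np.zeros(n_voxels)
w[:100] = rng.standard_normal(100)               # only a small set of voxels is informative
y = X @ w + rng.standard_normal(n_samples)       # stand-in for a stimulus property to decode

# Strong L2 regularization keeps the fit stable even though dimensions >> samples.
model = RidgeCV(alphas=np.logspace(0, 4, 9))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean held-out R^2: {scores.mean():.2f}")
```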

Another part of my work is shaped by the complexity of language and the absence of a comprehensive model of meaning composition: we do not know how the meanings of successive words combine to form the meaning of a sentence. Investigating the brain representation of a sentence is therefore doubly difficult, because we are simultaneously looking for the neural signature of sentence meaning and trying to approximate the composition function. However, with appropriate experimental designs and computational models, we can study both problems: we can use existing models of language to study the brain representation of meaning, or we can use brain data to evaluate different hypotheses about meaning composition, as sketched below.
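
The second strategy can be sketched as an encoding-model comparison: predict brain activity from a candidate sentence representation, and score candidates by how well they generalize to held-out sentences. The word vectors, simulated brain responses, and the two toy composition rules below are placeholder assumptions for illustration, not the actual stimuli, models, or analysis I use.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sentences, n_words, dim, n_voxels = 240, 8, 300, 2_000

# Placeholder word embeddings for each sentence (n_sentences x n_words x dim).
word_vectors = rng.standard_normal((n_sentences, n_words, dim))

# Two toy composition hypotheses for the meaning of a sentence.
features = {
    "average_of_words": word_vectors.mean(axis=1),  # hypothesis A: average the word vectors
    "last_word_only": word_vectors[:, -1, :],       # hypothesis B: keep only the final word
}

# Simulated brain responses generated from hypothesis A, so the comparison has a known answer.
brain = features["average_of_words"] @ rng.standard_normal((dim, n_voxels))
brain += 2.0 * rng.standard_normal((n_sentences, n_voxels))  # measurement noise

def voxelwise_correlation(pred, true):
    """Pearson correlation between predicted and observed activity, per voxel."""
    pred = (pred - pred.mean(0)) / pred.std(0)
    true = (true - true.mean(0)) / true.std(0)
    return (pred * true).mean(0)

def encoding_score(X, Y, alpha=10.0, n_splits=5):
    """Mean held-out voxelwise correlation of a ridge encoding model."""
    scores = []
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        pred = Ridge(alpha=alpha).fit(X[train], Y[train]).predict(X[test])
        scores.append(voxelwise_correlation(pred, Y[test]).mean())
    return float(np.mean(scores))

for name, X in features.items():
    print(f"{name}: held-out correlation = {encoding_score(X, brain):.3f}")
```

Under these assumptions, the composition rule that actually generated the simulated responses predicts held-out activity better; with real data, the same comparison adjudicates between competing composition hypotheses.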