Current Research Group

- Maggie Henderson (postdoc, co-advised by Mike Tarr)

- Aria Wang (PNC/MLD PhD student, co-advised by Mike Tarr)

- Jennifer Williams (CBD PhD student)

- Ruogu Lin (CBD PhD student)

- Andrew Luo (PNC/MLD PhD student, co-advised by Mike Tarr)

- Tara Pirnia (PNC/MLD PhD student, co-advised by Bonnie Nozari)

- Joel Ye (PNC PhD student, co-advised by Rob Gaunt)

- Yuchen Zhou (CNBC/Psych PhD student, co-advised by Mike Tarr)

Alumni

- Mariya Toneva (MLD PhD student, co-advised by Tom Mitchell) - Assistant Professor at the Max Planck Institute for Software Systems

- Anand Bollu (CS Master's student) - Applied Intuition

- Srinivas Ravishankar (MLD Master's student) - IBM Research

- Aniketh Reddy (MLD Master's student) - UC Berkeley PhD student

- Nidhi Jain (CS Master's student)

Aligning representations from artificial networks and real brains

Success in AI is often defined as achieving human-level performance on tasks such as text or scene understanding. To perform these tasks like the human brain does, is it useful for neural networks to have representations that are similar to brain representations?

In these projects, we use brain activity recordings to interpret neural network representations, to attempt to find heuristics to improve them, and even to change the weights learned by networks to make them more brain-like. These results point to an exciting research direction.

The spatial representation of language sub-processes

In this project, we use functional Magnetic Resonance Imaging (fMRI) to record the brain activity of subjects while they read an unmodified chapter of a popular book. We model the measured brain activity as a function of the content of the text being read. Our model is able to extrapolate to predict brain activity for novel passages of text - beyond those on which it has been trained. Not only can our model decode from brain activity which passage of text was being read, but it can also report which type of information about the text (syntax, semantic properties, narrative events, etc.) is modulating the activity of every brain region. Using this model, we found that the different regions that are usually associated with language appear to be processing different types of linguistic information. We were able to build detailed reading representation maps, in which each voxel is labeled by the type of information the model suggests it is processing.
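The core of such an encoding model can be sketched as a regularized linear regression from stimulus features to voxel activity, evaluated by extrapolation to held-out passages. This is a minimal illustration with synthetic data standing in for real text annotations and fMRI recordings; the dimensions, noise level, and use of scikit-learn's `Ridge` are assumptions, not the actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Toy stand-ins for real data (assumptions, not the actual dataset):
# X: per-time-point features of the text being read (syntax, semantics, ...)
# Y: fMRI activity of a set of voxels at the same time points
n_train, n_test, n_features, n_voxels = 200, 50, 30, 10
W_true = rng.normal(size=(n_features, n_voxels))
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_voxels))
Y_test = X_test @ W_true + 0.5 * rng.normal(size=(n_test, n_voxels))

# Fit one regularized linear model per voxel (Ridge fits all voxels at once).
model = Ridge(alpha=1.0)
model.fit(X_train, Y_train)

# Extrapolation test: predict activity for passages the model never saw.
Y_pred = model.predict(X_test)
score = r2_score(Y_test, Y_pred)
print(f"held-out R^2: {score:.2f}")
```

The learned weights play a second role: the magnitude of each feature's weight in a given voxel indicates which type of information that voxel appears to encode, which is what makes the labeled reading maps possible.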

This approach has several advantages. We are able not only to detect where language processing increases brain activity, but also to reveal what type of information is encoded in each of the regions that are classically reported as responsive to language. From just one experiment, we can reproduce multiple findings; had we followed the classical method, each of our results would have required its own experiment. This approach could make neuroimaging much more flexible. If a researcher develops a new reading theory after running an experiment, they can annotate the stimulus text accordingly and test the theory against the previously recorded data, without having to collect new experimental data.

The time-line of meaning construction

To study the sub-word dynamics of story reading, we turned to Magnetoencephalography (MEG), which records brain activity at a time resolution of one millisecond. We recorded MEG activity while subjects performed the same naturalistic task of reading a complex chapter from a popular novel. We were interested in identifying the different stages of continuous meaning construction as subjects read a text. We noticed the similarity between the human brain and neural network language models, which can "read" a text word by word and predict the next word in a sentence. Both the models and the brain must maintain a representation of the previous context, represent the features of the incoming word, and integrate the two before moving on to the next word.
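The three per-word computations in the analogy above can be made concrete with a toy recurrent update. This is only a sketch: the simplified Elman-style cell, the dimensions, and the random "word embeddings" are illustrative assumptions, not the full language models actually used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy recurrent "reader" (an assumption: real experiments use full neural
# network language models, not this minimal cell).
d_embed, d_context = 8, 16
W_in = rng.normal(scale=0.1, size=(d_context, d_embed))    # encode incoming word
W_rec = rng.normal(scale=0.1, size=(d_context, d_context)) # carry previous context

def read_word(context, word_embedding):
    """Integrate the incoming word's features with the previous context."""
    return np.tanh(W_rec @ context + W_in @ word_embedding)

# "Read" a sentence word by word, keeping one context vector per word.
sentence = [rng.normal(size=d_embed) for _ in range(5)]
context = np.zeros(d_context)
contexts = []
for word in sentence:
    context = read_word(context, word)  # context after integrating this word
    contexts.append(context)

print(len(contexts), contexts[0].shape)
```

Each per-word context vector can then be related to the MEG signal recorded around that word's presentation, which is what lets the model's internal stages be searched for in the brain data.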

We used a neural network language model to detect these different processes in brain data. Our novel results include a suggested time-line of how the brain updates its representation of context. They also demonstrate the incremental perception of every new word, starting early in the visual cortex, moving next to the temporal lobes, and finally reaching the frontal regions. Furthermore, the results suggest that the integration process occurs in the temporal lobes after the new word has been perceived.