Mariya Toneva


C.V. Starr Postdoctoral Fellow
Neuroscience Institute
Princeton University

Tenure-track faculty (W2), starting September 2022
Max Planck Institute for Software Systems

Email: mtoneva [at] mpi-sws [dot] org
Curriculum Vitae

Bio


My research is at the intersection of Machine Learning, Natural Language Processing, and Neuroscience, with a focus on building computational models of language processing in the brain that can also improve natural language processing systems. See this 5-minute video for a brief summary of my research highlights and future directions.

In September 2022, I'm starting a tenure-track faculty position (W2, equivalent to Assistant Professor in the U.S.) at the Max Planck Institute for Software Systems. I'm looking for PhD students to join my group in September 2022!

I'm currently a postdoctoral fellow at the Princeton Neuroscience Institute, where I study the role of episodic memory in language comprehension both in the brain and in machines together with Ken Norman and Uri Hasson. My postdoctoral research is supported by the C.V. Starr fellowship. Prior to Princeton, I received my Ph.D. from Carnegie Mellon University in a joint program between Machine Learning and Neural Computation. Three years of my Ph.D. research were supported by the NSF graduate fellowship, awarded for interdisciplinary research in machine learning and computational neuroscience. Before beginning my graduate studies at CMU, I received a B.S. in both Computer Science and Cognitive Science at Yale University.

I also enjoy taking care of (helpful) bacteria and yeast and turning them into Bulgarian yogurt, kombucha, and sourdough bread from time to time.

News


Publications

Memory for long narratives
M. Toneva, V. Vo, J. Turek, S. Jain, S. Michelmann, M. Capotă, A. Huth, U. Hasson, and K. Norman
CEMS 2022

Language is the primary way in which we communicate, and yet it is not clear how we draw on previous experiences to understand language. In this work, we aim to investigate the role of episodic memory in language comprehension, by building models of this process and by collecting new benchmark datasets. As an initial step, we sought to characterize how well people remember information from long narratives, by asking participants to recall chapters of a recently-read novel when cued with a passage from the start of the chapter. We evaluated the precision of this recall by comparing its semantic representations--constructed using a language model--to those of the corresponding chapters. Analyses of the data are ongoing. In preliminary analyses, we find that a number of events were recalled with high precision across participants, and we do not find an effect of event position within a chapter on the precision of recall.
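
As a rough illustration of the scoring step described above (not the study's actual pipeline), the sketch below compares a recall and its chapter in the embedding space of an off-the-shelf sentence encoder; the specific model name and the use of cosine similarity are assumptions.

```python
# Minimal sketch: score recall precision as the similarity between language-model
# embeddings of a participant's recall and of the chapter it describes.
# The model "all-MiniLM-L6-v2" and cosine similarity are assumptions, not the study's choices.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def recall_precision(recall_text: str, chapter_text: str) -> float:
    """Cosine similarity between embeddings of a recall and the corresponding chapter."""
    emb = model.encode([recall_text, chapter_text])
    r, c = emb[0], emb[1]
    return float(np.dot(r, c) / (np.linalg.norm(r) * np.linalg.norm(c)))

# Toy usage:
print(recall_precision("The detective finds a hidden letter in the attic.",
                       "In chapter three, the detective searches the attic and discovers a letter."))
```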


The Courtois Neuromod project: a deep, multi-domain fMRI dataset to build individual brain models
J. Boyle*, B. Pinsard*, V. Borghesani, M. Saint-Laurent, F. Lespinasse, F. Paugam, P. Sainath, S. Rastegarnia, A. Boré, J. Chen, A. Cyr, E. Dessureault, E. DuPre, Y. Harel, M. Toneva, S. Belleville, S. Brambati, J. Cohen-Adad, A. Fuente, M. Hebart, K. Jerbi, P. Rainville, L. Wehbe, and P. Bellec
OHBM 2022 Oral presentation

Several large individual fMRI datasets have emerged to train artificial intelligence (AI) models on specific cognitive processes, including natural images (NSD, BOLD5000) and movie viewing (Dr Who). However, a key feature of the brain is the capacity to integrate and switch between specialized processes and cognitive contexts. The Courtois Project on Neuronal Modelling (CNeuroMod) is creating a rich neuroimaging dataset to probe numerous cognitive domains simultaneously, in the same subjects, with carefully controlled and/or naturalistic stimuli, in order to build integrative AI models. CNeuroMod will eventually feature hundreds of hours of neuroimaging data per subject, and is already the largest individual fMRI dataset currently available.
CNeuroMod features fMRI recordings from 6 English-speaking participants (3 women). 4 subjects are scanned acutely (80h+ / year) and 2 are scanned intensively (40h+ / year). The 4 acutely scanned participants have reached, or are close to, 100h of MRI data. Information on previously reported datasets (hcptrt, movie10, shinobi, mario, triplet, friends and things) is available at https://docs.cneuromod.ca. Here, we highlight 8 new datasets to (1) validate our set-up, (2) map functional areas, and (3) expand the set of naturalistic stimuli covered. First, the effectiveness of our auditory protection protocol and the reactivity of our custom-built controller will be assessed with, respectively, an audition task (auditory threshold inside and outside the MRI) and a gamepad task (comparing motor responses using a custom vs. commercial controller). Mapping of visual areas will be possible thanks to a classical retinotopy task (retino) and a functional localizer (fLoc) isolating category-selective cortical regions. The localizers dataset consists of multiple sessions of language tasks spanning sensory modalities (auditory, visual) and languages (French and English). The potter dataset consists of reading chapter 9 of a Harry Potter book to investigate language processing. Finally, multfs is a study of working memory using different tasks, stimuli, and features, and emotions is passive watching of annotated, emotionally evocative short videos. Preprocessed imaging data, behavioural responses, and physiological recordings are formatted in BIDS and available through a registered access system and the DataLad version control tool. Data requests can be made via our website: https://www.cneuromod.ca/.
The CNeuroMod project has assembled an unprecedented resource to study individual functional brain activity for a wide range of controlled and naturalistic stimuli. For each type of stimulus included in CNeuroMod, the relevant subset of data is one of the largest datasets available to the community. Taken together, CNeuroMod opens a unique avenue to create AI models of integrative processes in the brain. We anticipate that this wealth of longitudinal data will help researchers discover novel insights into the way human brains process complex, naturalistic stimuli.


Same cause; different effects in the brain
M. Toneva*, J. Williams*, A. Bollu, C. Dann, and L. Wehbe
CLeaR 2022 [pdf] [code]

To study information processing in the brain, neuroscientists manipulate experimental stimuli while recording participant brain activity. They can then use encoding models to find out which brain "zone" (e.g. which region of interest, volume pixel or electrophysiology sensor) is predicted from the stimulus properties. Given the assumptions underlying this setup, when the stimulus properties are predictive of the activity in a zone, these properties are understood to cause activity in that zone. In recent years, researchers have begun using neural networks to construct representations that capture the diverse properties of complex stimuli, such as natural language or natural images. Encoding models built using these high-dimensional representations are often able to accurately predict the activity in large swathes of cortex, suggesting that the activity in all these brain zones is caused by the stimulus properties captured in the neural network representation. It is then natural to ask: "Is the activity in these different brain zones caused by the stimulus properties in the same way?" In neuroscientific terms, this corresponds to asking if these different zones process the stimulus properties in the same way. Here, we propose a new framework that enables researchers to ask if the properties of a stimulus affect two brain zones in the same way. We use simulated data and two real fMRI datasets with complex naturalistic stimuli to show that our framework enables us to make such inferences. Our inferences are strikingly consistent between the two datasets, indicating that the proposed framework is a promising new tool for neuroscientists to understand how information is processed in the brain.
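
For readers unfamiliar with encoding models, the sketch below shows the basic setup the abstract builds on: predicting each brain "zone" from stimulus features with ridge regression and scoring held-out predictions. The synthetic data, feature dimensions, and ridge penalty are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of an encoding model: predict each brain "zone" from stimulus
# features and score prediction accuracy on held-out data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 100))           # stimulus features (e.g., neural-network representations)
W = rng.standard_normal((100, 2)) * 0.5
Y = X @ W + rng.standard_normal((500, 2))     # synthetic activity of two brain zones

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
for zone in range(Y.shape[1]):
    model = Ridge(alpha=10.0).fit(X_tr, Y_tr[:, zone])
    pred = model.predict(X_te)
    r = np.corrcoef(pred, Y_te[:, zone])[0, 1]
    print(f"zone {zone}: held-out prediction correlation = {r:.2f}")
```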


A roadmap to reverse engineering real-world generalization by combining naturalistic paradigms, deep sampling, and predictive computational models
P. Herholz, E. Fortier, M. Toneva, N. Farrugia, L. Wehbe, and V. Borghesani
arXiv 2022 [pdf]

Real-world generalization, e.g., deciding to approach a never-seen-before animal, relies on contextual information as well as previous experiences. Such a seemingly easy behavioral choice requires the interplay of multiple neural mechanisms, from integrative encoding to category-based inference, weighted differently according to the circumstances. Here, we argue that a comprehensive theory of the neuro-cognitive substrates of real-world generalization will greatly benefit from empirical research with three key elements. First, the ecological validity provided by multimodal, naturalistic paradigms. Second, the model stability afforded by deep sampling. Finally, the statistical rigor granted by predictive modeling and computational controls.


Single-trial MEG data can be denoised through cross-subject predictive modeling
S. Ravishankar, M. Toneva, and L. Wehbe
Frontiers in Computational Neuroscience 2021 [pdf]

A pervasive challenge in brain imaging is the presence of noise that hinders investigation of underlying neural processes, with Magnetoencephalography (MEG) in particular having very low Signal-to-Noise Ratio (SNR). The established strategy to increase MEG's SNR involves averaging multiple repetitions of data corresponding to the same stimulus. However, stimulus repetition can be undesirable, because underlying neural activity has been shown to change across trials, and repeating stimuli limits the breadth of the stimulus space experienced by subjects. In particular, the rising popularity of naturalistic studies with a single viewing of a movie or story necessitates the discovery of new approaches to increase SNR. We introduce a simple framework to reduce noise in single-trial MEG data by leveraging correlations in neural responses across subjects as they experience the same stimulus. We demonstrate its use in a naturalistic reading comprehension task with 8 subjects, with MEG data collected while they read the same story a single time. We find that our procedure results in data with reduced noise and allows for better discovery of neural phenomena. As proof-of-concept, we show that the N400m's correlation with word surprisal, an established finding in the literature, is far more clearly observed in the denoised data than the original data. The denoised data also shows higher decoding and encoding accuracy than the original data, indicating that the neural signals associated with reading are either preserved or enhanced after the denoising procedure.
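
A minimal sketch of the cross-subject idea, assuming synthetic data and a simple ridge mapping (the paper's exact estimator and preprocessing may differ): predict one subject's sensor time series from the other subjects' responses to the same stimulus and treat the prediction as the denoised estimate.

```python
# Sketch: denoise subject i by predicting their sensors from all other subjects'
# sensors recorded for the same stimulus. Shapes and the ridge penalty are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_subjects, n_time, n_sensors = 8, 1000, 20
shared = rng.standard_normal((n_time, n_sensors))                           # stimulus-driven signal
data = [shared + rng.standard_normal((n_time, n_sensors)) for _ in range(n_subjects)]

fit_idx, test_idx = slice(0, 500), slice(500, 1000)

def denoise_subject(i, data):
    """Predict subject i's held-out sensor data from the other subjects' sensors."""
    X = np.hstack([d for j, d in enumerate(data) if j != i])
    model = Ridge(alpha=1.0).fit(X[fit_idx], data[i][fit_idx])
    return model.predict(X[test_idx])

denoised = denoise_subject(0, data)
print("correlation with the shared signal (sensor 0), raw vs. denoised:",
      round(np.corrcoef(data[0][test_idx, 0], shared[test_idx, 0])[0, 1], 2),
      round(np.corrcoef(denoised[:, 0], shared[test_idx, 0])[0, 1], 2))
```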


Does injecting linguistic structure into language models lead to better alignment with brain recordings?
M. Abdou, A. V. González, M. Toneva, D. Hershcovich, and A. Søgaard
arXiv 2021 [pdf]

Neuroscientists evaluate deep neural networks for natural language processing as possible candidate models for how language is processed in the brain. These models are often trained without explicit linguistic supervision, but have been shown to learn some linguistic structure in the absence of such supervision (Manning et al., 2020), potentially questioning the relevance of symbolic linguistic theories in modeling such cognitive processes (Warstadt and Bowman, 2020). We evaluate across two fMRI datasets whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms. Using structure from dependency or minimal recursion semantic annotations, we find alignments improve significantly for one of the datasets. For the other dataset, we see more mixed results. We present an extensive analysis of these results. Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain, expanding the range of possible scientific inferences a neuroscientist could make, and opens up new opportunities for cross-pollination between computational neuroscience and linguistics.


Combining computational controls with natural text reveals new aspects of meaning composition
M. Toneva, T. Mitchell, and L. Wehbe
bioRxiv 2020 [pdf]

To study a core component of human intelligence---our ability to combine the meaning of words---neuroscientists look for neural correlates of meaning composition, such as brain activity proportional to the difficulty of understanding a sentence. However, little is known about the product of meaning composition---the combined meaning of words beyond their individual meaning. We term this product "supra-word meaning" and devise a computational representation for it by using recent neural network algorithms and a new technique to disentangle composed- from individual-word meaning. Using functional magnetic resonance imaging, we reveal that hubs that are thought to process lexical-level meaning also maintain supra-word meaning, suggesting a common substrate for lexical and combinatorial semantics. Surprisingly, we cannot detect supra-word meaning in magnetoencephalography, which suggests that composed meaning is maintained through a different neural mechanism than synchronized firing. This sensitivity difference has implications for past neuroimaging results and future wearable neurotechnology.
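
One simple way to disentangle composed from individual-word meaning, in the spirit of the approach above, is to regress a contextual representation on the embeddings of its individual words and keep the residual. The sketch below illustrates this with synthetic vectors; the paper's actual construction may differ.

```python
# Sketch: residualize a context representation against individual-word features.
# The synthetic features and the simple linear residualization are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_phrases, dim = 400, 50
word_features = rng.standard_normal((n_phrases, dim))                         # e.g., averaged word embeddings
context_features = 0.7 * word_features + rng.standard_normal((n_phrases, dim))  # e.g., contextual embeddings

reg = LinearRegression().fit(word_features, context_features)
supra_word = context_features - reg.predict(word_features)   # residual: candidate "supra-word" component
print("variance explained by individual words:", round(reg.score(word_features, context_features), 2))
```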


Modeling task effects on meaning representation in the brain via zero-shot MEG prediction
M. Toneva*, O. Stretcu*, B. Poczos, L. Wehbe, and T. Mitchell
NeurIPS 2020 [pdf] [code] [video]

How meaning is represented in the brain is still one of the big open questions in neuroscience. Does a word (e.g., bird) always have the same representation, or does the task under which the word is processed alter its representation (answering “can you eat it?” versus “can it fly?”)? The brain activity of subjects who read the same word while performing different semantic tasks has been shown to differ across tasks. However, it is still not understood how the task itself contributes to this difference. In the current work, we study Magnetoencephalography (MEG) brain recordings of participants tasked with answering questions about concrete nouns. We investigate the effect of the task (i.e. the question being asked) on the processing of the concrete noun by predicting the millisecond-resolution MEG recordings as a function of both the semantics of the noun and the task. Using this approach, we test several hypotheses about the task-stimulus interactions by comparing the zero-shot predictions made by these hypotheses for novel tasks and nouns not seen during training. We find that incorporating the task semantics significantly improves the prediction of MEG recordings, across participants. The improvement occurs 475–550 ms after the participants first see the word, which corresponds to what is considered to be the ending time of semantic processing for a word. These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimuli.
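
The hypothesis comparison can be illustrated with a toy version of the setup: predict responses from noun semantics alone versus noun plus task semantics and compare held-out accuracy. Everything below is synthetic, and the zero-shot split over unseen nouns and tasks is simplified here to unseen noun-task pairs.

```python
# Sketch: compare encoding models with and without task semantics on held-out noun-task pairs.
# All data, dimensions, and the ridge penalty are assumptions.
import random
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
random.seed(0)
n_nouns, n_tasks, noun_dim, task_dim, n_sensors = 40, 10, 30, 8, 50
noun_vecs = rng.standard_normal((n_nouns, noun_dim))
task_vecs = rng.standard_normal((n_tasks, task_dim))
W = rng.standard_normal((noun_dim + task_dim, n_sensors))

def features(pair, with_task):
    n, t = pair
    return np.concatenate([noun_vecs[n], task_vecs[t]]) if with_task else noun_vecs[n]

def meg(pair):  # synthetic "recording" that depends on both the noun and the task
    return features(pair, with_task=True) @ W + 0.5 * rng.standard_normal(n_sensors)

pairs = [(n, t) for n in range(n_nouns) for t in range(n_tasks)]
random.shuffle(pairs)
train, test = pairs[:300], pairs[300:]

for with_task in (False, True):
    Xtr = np.array([features(p, with_task) for p in train])
    Ytr = np.array([meg(p) for p in train])
    Xte = np.array([features(p, with_task) for p in test])
    Yte = np.array([meg(p) for p in test])
    pred = Ridge(alpha=1.0).fit(Xtr, Ytr).predict(Xte)
    r = np.mean([np.corrcoef(pred[:, s], Yte[:, s])[0, 1] for s in range(n_sensors)])
    print(f"with_task={with_task}: mean held-out correlation = {r:.2f}")
```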


Investigating different alignment methods between natural and artificial neural networks for language processing
A. Bollu, M. Toneva, and L. Wehbe
SNL 2020

Aligning the internal representational spaces of state-of-the-art natural language processing (NLP) models with those of the brain has revealed a great deal of overlap in what both systems capture about their language input. Prior work investigated this alignment using linear encoding models that predict each fMRI voxel as an independent linear combination of the NLP representations[1]. However, a linear mapping may fail to align nonlinearly encoded information within the NLP and fMRI representations, and is not well equipped to benefit from information shared among groups of voxels. Here, we investigate the effect of varying encoding model complexity on alignment performance. We align fMRI recordings of 8 participants reading naturalistic text word-by-word with intermediate representations from BERT[2], a state-of-the-art NLP model, that correspond to the stimulus text[3]. We investigate three encoding models that predict the fMRI voxels as a function of the BERT representations: LinearAnalytical - linear model where weights were estimated using a closed-form solution; LinearGD - linear model trained using gradient descent (GD); MLPGD - multilayer perceptron (MLP) with one hidden layer trained using Batch GD. Two key features separate MLPGD from the linear models: (1) a nonlinear activation layer and (2) predicting all voxels jointly using a shared hidden layer. We include LinearGD to identify whether any performance differences can be attributed to the training method. We evaluate alignment performance by computing the mean Pearson correlations[4] between predicted and true voxel activities within regions of interest (ROIs) known to be consistently activated during language processing[5,6]. We additionally evaluate each encoding model against a noise ceiling[7], computed based on pairwise correlations between participants’ fMRI recordings. We use paired t-tests to test for significant differences between model performance across subjects and pycortex[8] to visualize voxel correlations on a 3D brain surface. We find no significant difference between LinearAnalytical and LinearGD (p>0.05 for all ROIs). LinearGD performs on par with the noise ceiling in 5 ROIs (p>0.2), and worse in the dorsomedial prefrontal cortex (dmPFC, p=0.009), inferior frontal gyrus pars orbitalis (IFGorb, p=0.05) and posterior cingulate (pCingulate, p=0.06), revealing room for improvement in those regions. Differences between LinearGD and MLPGD evaluated based on the whole ROIs are not significant (p>0.05), but qualitative analysis reveals smaller clusters within the dmPFC, IFGorb and pCingulate where MLPGD outperforms LinearGD. We further observe that the encoding models sometimes outperform the estimated noise ceiling, especially within the posterior temporal lobe, angular gyrus and middle frontal gyrus. Interestingly, our qualitative analysis of voxel correlations reveals clusters within the dmPFC, IFGorb and pCingulate that are better predicted by the MLP architecture. One interpretation of this finding is that these clusters may process different information from the rest of the region -- information that only a nonlinear alignment can reveal -- but further investigation is necessary. We also find that the noise ceiling computation provides suboptimal estimates. A better noise ceiling may provide stronger evidence for our observations and highlight other areas where the encoding model can be improved upon as a guide to future research. References: https://tinyurl.com/y2v23rd2
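
The core model comparison can be sketched with scikit-learn: a linear encoding model versus an MLP that predicts all voxels jointly through a shared hidden layer. The synthetic data and hyperparameters below are assumptions, not the settings used in the abstract.

```python
# Sketch: linear vs. MLP encoding models predicting all "voxels" from stimulus features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((600, 128))                                            # BERT-style stimulus features
Y = np.tanh(X @ rng.standard_normal((128, 40))) + 0.5 * rng.standard_normal((600, 40))  # 40 synthetic voxels
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

def mean_voxel_corr(pred, true):
    """Mean Pearson correlation between predicted and true voxel time courses."""
    return np.mean([np.corrcoef(pred[:, v], true[:, v])[0, 1] for v in range(true.shape[1])])

linear = Ridge(alpha=10.0).fit(X_tr, Y_tr)                                     # per-voxel linear mapping
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X_tr, Y_tr)  # shared hidden layer
print("linear:", round(mean_voxel_corr(linear.predict(X_te), Y_te), 2))
print("MLP:   ", round(mean_voxel_corr(mlp.predict(X_te), Y_te), 2))
```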


Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)
M. Toneva and L. Wehbe
NeurIPS 2019 [pdf] [code]

Neural network models for NLP are typically implemented without the explicit encoding of language rules and yet they are able to break one performance record after another. This has generated a lot of research interest in interpreting the representations learned by these networks. We propose here a novel interpretation approach that relies on the only processing system we have that does understand language: the human brain. We use brain imaging recordings of subjects reading complex natural text to interpret word and sequence embeddings from 4 recent NLP models - ELMo, USE, BERT and Transformer-XL. We study how their representations differ across layer depth, context length, and attention type. Our results reveal differences in the context-related representations across these models. Further, in the transformer models, we find an interaction between layer depth and context length, and between layer depth and attention type. We finally hypothesize that altering BERT to better align with brain recordings would enable it to also better understand language. Probing the altered BERT using syntactic NLP tasks reveals that the model with increased brain-alignment outperforms the original model. Cognitive neuroscientists have already begun using NLP networks to study the brain, and this work closes the loop to allow the interaction between NLP and cognitive neuroscience to be a true cross-pollination.
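
The first step of this kind of analysis, extracting layer-wise representations of the stimulus text from a pretrained transformer, can be sketched as follows; the specific checkpoint and mean-pooling over tokens are assumptions rather than the paper's exact choices.

```python
# Sketch: extract one feature vector per layer for a text stimulus, to be related
# to brain recordings with an encoding model in a later step.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

text = "The quick brown fox jumps over the lazy dog"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).hidden_states   # tuple: embedding layer + one tensor per transformer layer

layer_features = [h.mean(dim=1).squeeze(0) for h in hidden_states]  # mean over tokens, per layer
print(len(layer_features), layer_features[0].shape)
```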


Inducing brain-relevant bias in natural language processing models
D. Schwartz, M. Toneva, and L. Wehbe
NeurIPS 2019 [pdf] [code]

Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain. However, these models have not been specifically designed to capture the way the brain represents language meaning. We hypothesize that fine-tuning these models to predict recordings of brain activity of people reading text will lead to representations that encode more brain-activity-relevant language information. We demonstrate that a version of BERT, a recently introduced and powerful language model, can improve the prediction of brain activity after fine-tuning. We show that the relationship between language and brain activity learned by BERT during this fine-tuning transfers across multiple participants. We also show that fine-tuned representations learned from both magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are better for predicting fMRI than the representations learned from fMRI alone, indicating that the learned representations capture brain-activity-relevant information that is not simply an artifact of the modality. While changes to language representations help the model predict brain activity, they also do not harm the model's ability to perform downstream NLP tasks. Our findings are notable for research on language understanding in the brain.
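
A minimal sketch of the fine-tuning idea, assuming a mean-pooled linear regression head and an MSE loss (the paper's exact head and training setup may differ): train a pretrained transformer end-to-end to predict a brain-activity vector for each text segment.

```python
# Sketch: fine-tune a transformer to predict brain activity from text.
# The head, pooling, loss, and toy target below are assumptions.
import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

class BrainPredictor(nn.Module):
    def __init__(self, n_voxels: int):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("bert-base-uncased")
        self.head = nn.Linear(self.encoder.config.hidden_size, n_voxels)

    def forward(self, **inputs):
        pooled = self.encoder(**inputs).last_hidden_state.mean(dim=1)  # mean-pool token states
        return self.head(pooled)

model = BrainPredictor(n_voxels=100)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = nn.MSELoss()

# One toy training step with a fabricated target vector:
inputs = tokenizer(["Harry walked down the corridor"], return_tensors="pt")
target = torch.randn(1, 100)
loss = loss_fn(model(**inputs), target)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```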


Investigating task effects on brain activity during stimulus presentation in MEG
M. Toneva*, O. Stretcu*, B. Poczos, and T. Mitchell
OHBM 2019

Recorded brain activity of subjects who perceive the same stimulus (e.g. a word) while performing different semantic tasks (e.g. identifying whether the word belongs to a particular category) has been shown to differ across tasks. However, it is not well understood how precisely the task contributes to this brain activity. In the current work, we propose multiple hypotheses of how possible interactions between the task and stimulus semantics can be related to the observed brain activity. We test these hypotheses by designing machine learning models to represent each hypothesis, training them to predict the recorded brain activity, and comparing their performance. We investigate a magnetoencephalography (MEG) dataset, where subjects were tasked to answer 20 yes/no questions (e.g. 'Is it manmade?') about concrete nouns and their line drawings. Each question-stimulus pair is presented only once. Here we consider each question as a different task. We show that incorporating task semantics improves the prediction of single-trial MEG data by an average of 10% across subjects.


An empirical study of example forgetting during deep neural network learning
M. Toneva*, A. Sordoni*, R. Tachet des Combes*, A. Trischler, Y. Bengio, and G. Gordon
ICLR 2019 [pdf] [code] [open review]

Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a "forgetting event" to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.
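
The definition of a forgetting event translates directly into code: track, for every training example, transitions from correctly to incorrectly classified between passes over the data. The sketch below uses a tiny linear classifier on synthetic data rather than the benchmarks studied in the paper.

```python
# Sketch: count "forgetting events" (correct -> incorrect transitions) per training example.
# The toy model and synthetic data are assumptions; the paper studies deep networks on benchmarks.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))
y = (X[:, 0] + 0.5 * rng.standard_normal(1000) > 0).astype(int)

clf = SGDClassifier(loss="log_loss", random_state=0)
prev_correct = np.zeros(len(y), dtype=bool)
forgetting_counts = np.zeros(len(y), dtype=int)

for epoch in range(20):
    clf.partial_fit(X, y, classes=[0, 1])              # one pass over the data
    correct = clf.predict(X) == y
    forgetting_counts += (prev_correct & ~correct)     # forgetting event: was correct, now incorrect
    prev_correct = correct

print("never-forgotten examples:", int((forgetting_counts == 0).sum()))
print("most-forgotten example count:", int(forgetting_counts.max()))
```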


Word length processing in left lateraloccipital through region-to-region connectivity: an MEG study
M. Toneva and T. Mitchell
OHBM 2018

A previous MEG study found that many features of stimuli can be decoded around the same time but at different places in the brain, posing the question of how information processing is coordinated between brain regions. Previous approaches to this question are of two types. The first uses a classifier or regression to uncover the relative timings of feature decodability in different brain regions. While addressing when and where information is processed, this approach does not specify how information is coordinated. The second type estimates when the connectivity between regions changes. While this approach assumes that information is coordinated by communication, it does not directly relate to the information content. We aim to more directly relate processing of information content to connectivity. We examine whether, during presentation of a word stimulus, the length of the word relates to how strongly the region that best encodes word length - left lateraloccipital cortex (LOC) - connects to other regions, at different times. For this purpose, we analyze MEG data from an experiment in which 9 subjects were presented 60 concrete nouns along with their line drawings. Our results suggest that the region that is best at encoding a perceptual stimulus feature - word length - has a connectivity network in which the connection strengths vary with the value of the feature. Furthermore, we observe this relationship prior to the peak information decodability in any region. One hypothesis is that information necessary for the processing of the feature is communicated to the seed region by varying connection strengths. Further analysis for a stimulus feature with a later decodability peak, such as a semantic feature, would add to the current results.


MEG representational similarity analysis implicates hierarchical integration in sentence processing
N. Rafidi*, D. Schwartz*, M. Toneva*, S. Jat, and T. Mitchell
OHBM 2018

Multiple hypotheses exist for how the brain constructs sentence meaning. Most fall into two groups based on their assumptions about the processing order of the words within the sentence. The first considers a sequential processing order, while the second uses hierarchical syntactic rules. We test which hypothesis best explains MEG data recorded during reading of sentences with active and passive voice. Under the sequential hypothesis, the voice of a sentence should change its neural signature because word order changes. Under the hierarchical hypothesis, active and passive sentences corresponding to the same proposition should exhibit similar neural signatures. We test how well three language models explain MEG data collected during noun-verb-noun sentence reading. The models we test are bag of words (BoW), sequential word order, and hierarchical. All three models correlate with the MEG data for some timepoints, after verb presentation and briefly post sentence. However, the hierarchical model correlates significantly for more timepoints and is often the best correlated model even if that correlation is not significant. Our analysis shows that a hierarchical model of meaning correlates with neural activity for a longer duration than models which use a bag of words meaning representation or sequential meaning construction. Additionally, just after verb presentation the hierarchical model is the model best correlated with the MEG data. Our method enables the study of language processing hypotheses in the brain at a fine time scale and can be applied to a wide variety of language models.
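
The comparison rests on representational similarity analysis: build a dissimilarity matrix from the neural pattern at each timepoint and correlate it with each model's dissimilarity matrix. The sketch below uses synthetic data and Spearman correlation, which are assumptions rather than the study's exact choices.

```python
# Sketch of timepoint-wise representational similarity analysis (RSA).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sentences, n_sensors, n_times, model_dim = 30, 100, 50, 20
model_repr = rng.standard_normal((n_sentences, model_dim))     # e.g., a hierarchical model of sentence meaning
meg = rng.standard_normal((n_sentences, n_sensors, n_times))   # synthetic MEG patterns

model_rdm = pdist(model_repr, metric="correlation")            # model dissimilarity matrix (condensed)
for t in (0, 25, 49):
    meg_rdm = pdist(meg[:, :, t], metric="correlation")
    rho, p = spearmanr(model_rdm, meg_rdm)
    print(f"t={t}: Spearman rho between model and MEG RDMs = {rho:.2f} (p={p:.2f})")
```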


Applying artificial vision models to human scene understanding
E. M. Aminoff, M. Toneva, A. Shrivastava, X. Chen, I. Misra, A. Gupta, and M. J. Tarr
Frontiers in Computational Neuroscience 2015 [pdf]

How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective—the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)—have typically focused on single visual dimensions (e.g., size), rather than the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behavioral measured judgments of visually-perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN—the models that best accounted for the patterns obtained from PPA and TOS—were different from the GIST model that best accounted for the pattern obtained from RSC; (3) The best performing models outperformed behaviorally-measured judgments of scene similarity in accounting for neural data. One computer vision method—NEIL (“Never-Ending-Image-Learner”), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes—showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network.


Scene-space encoding within the functional scene-selective network
E. M. Aminoff, M. Toneva, A. Gupta, and M. J. Tarr
VSS 2015

High-level visual neuroscience has often focused on how different visual categories are encoded in the brain. For example, we know how the brain responds when viewing scenes as compared to faces or other objects - three regions are consistently engaged: the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area/transverse occipital sulcus (TOS). Here we explore the fine-grained responses of these three regions when viewing 100 different scenes. We asked: 1) Can neural signals differentiate the 100 exemplars? 2) Are the PPA, RSC, and TOS strongly activated by the same exemplars and, more generally, are the "scene-spaces" representing how scenes are encoded in these regions similar? In an fMRI study of 100 scenes we found that the scenes eliciting the greatest BOLD signal were largely the same across the PPA, RSC, and TOS. Remarkably, the orderings, from strongest to weakest, of scenes were highly correlated across all three regions (r = .82), but were only moderately correlated with non-scene selective brain regions (r = .30). The high similarity across scene-selective regions suggests that a reliable and distinguishable feature space encodes visual scenes. To better understand the potential feature space, we compared the neural scene-space to scene-spaces defined by either several different computer vision models or behavioral measures of scene similarity. Computer vision models that rely on more complex, mid- to high-level visual features best accounted for the pattern of BOLD signal in scene-selective regions and, interestingly, the better-performing models exceeded the performance of our behavioral measures. These results suggest a division of labor where the representations within the PPA and TOS focus on visual statistical regularities within scenes, whereas the representations within the RSC focus on a more high-level representation of scene category. Moreover, the data suggest the PPA mediates between the processing of the TOS and RSC.


Towards a model for mid-level feature representation of scenes
M. Toneva, E. M. Aminoff, A. Gupta, and M. Tarr
WIML workshop at NeurIPS 2014 Oral presentation

Never Ending Image Learner (NEIL) is a semi-supervised learning algorithm that continuously pulls images from the web and learns relationships among them. NEIL has classified over 400,000 images into 917 scene categories using 84 dimensions - termed “attributes”. These attributes roughly correspond to mid-level visual features whose differential combinations define a large scene space. As such, NEIL’s small set of attributes offers a candidate model for the psychological and neural representation of scenes. To investigate this, we tested for significant similarities between the structure of scene space defined by NEIL and the structure of scene space defined by patterns of human BOLD responses as measured by fMRI. The specific scenes in our study were selected by reducing the number of attributes to the 39 that best accounted for variance in NEIL’s scene-attribute co-classification scores. Fifty scene categories were then selected such that each category scored highly on a different set of at most 3 of the 39 attributes. We then selected the two most representative images of the corresponding high-scoring attributes from each scene category, resulting in a total of 100 stimuli used. Canonical correlation analysis (CCA) was used to test the relationship between measured BOLD patterns within the functionally-defined parahippocampal region and NEIL’s representation of each stimulus as a vector containing stimulus-attribute co-classification scores on the 39 attributes. CCA revealed significant similarity between the local structures of the fMRI data and the NEIL representations for all participants. In contrast, neither the entire set of 84 attributes nor 39 randomly-chosen attributes produced significant results using this CCA method. Overall, our results indicate that subsets of the attributes learned by NEIL are effective in accounting for variation in the neural encoding of scenes – as such they represent a first pass compositional model of mid-level features for scene representation.
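
A rough sketch of the CCA step, with synthetic stand-ins for the BOLD patterns and the 39 NEIL attribute scores (the study's preprocessing and significance testing are omitted):

```python
# Sketch: canonical correlation analysis between ROI voxel patterns and per-stimulus attribute scores.
# Data shapes and the single train/test split are assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_stimuli, n_voxels, n_attributes = 100, 50, 39
latent = rng.standard_normal((n_stimuli, 5))                    # structure shared by both spaces
bold = latent @ rng.standard_normal((5, n_voxels)) + rng.standard_normal((n_stimuli, n_voxels))
attributes = latent @ rng.standard_normal((5, n_attributes)) + rng.standard_normal((n_stimuli, n_attributes))

cca = CCA(n_components=2).fit(bold[:80], attributes[:80])
U, V = cca.transform(bold[80:], attributes[80:])
for k in range(2):
    print(f"held-out canonical correlation {k}: {np.corrcoef(U[:, k], V[:, k])[0, 1]:.2f}")
```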


An exploration of social grouping: effects of behavioral mimicry, appearance, and eye gaze
A. Nawroj, M. Toneva, H. Admoni, and B. Scassellati
CogSci 2014 Oral presentation [pdf]

People naturally and easily establish social groupings based on appearance, behavior, and other nonverbal signals. However, psychologists have yet to understand how these varied signals interact. For example, which factor has the strongest effect on establishing social groups? What happens when two of the factors conflict? Part of the difficulty of answering these questions is that people are unique and stochastic stimuli. To address this problem, we use robots as a visually simple and precisely controllable platform for examining the relative influence of social grouping features. We examine how behavioral mimicry, similarity of appearance, and direction of gaze influence people's perception of which group a robot belongs to. Experimental data shows that behavioral mimicry has the most dominant influence on social grouping, though this influence is modulated by appearance. Non-mutual gaze was found to be a weak modulator of the perception of grouping. These results provide insight into the phenomenon of social grouping, and suggest areas for future exploration.


The physical presence of a robot tutor increases cognitive learning gains
D. Leyzberg, S. Spaulding, M. Toneva, and B. Scassellati
CogSci 2012 [pdf]

We present the results of a 100 participant study on the role of a robot's physical presence in a robot tutoring task. Participants were asked to solve a set of puzzles while being provided occasional gameplay advice by a robot tutor. Each participant was assigned one of five conditions: (1) no advice, (2) robot providing randomized advice, (3) voice of the robot providing personalized advice, (4) video representation of the robot providing personalized advice, or (5) physically-present robot providing personalized advice. We assess the tutor's effectiveness by the time it takes participants to complete the puzzles. Participants in the robot providing personalized advice group solved most puzzles faster on average and improved their same-puzzle solving time significantly more than participants in any other group. Our study is the first to assess the effect of the physical presence of a robot in an automated tutoring interaction. We conclude that physical embodiment can produce measurable learning gains.


Robot gaze does not reflexively cue human attention
H. Admoni, C. Bank, J. Tan, M. Toneva, and B. Scassellati
CogSci 2011 [pdf]

Joint visual attention is a critical aspect of typical human interactions. Psychophysics experiments indicate that people exhibit strong reflexive attention shifts in the direction of another person's gaze, but not in the direction of non-social cues such as arrows. In this experiment, we ask whether robot gaze elicits the same reflexive cueing effect as human gaze. We consider two robots, Zeno and Keepon, to establish whether differences in cueing depend on level of robot anthropomorphism. Using psychophysics methods for measuring attention by analyzing time to identification of a visual probe, we compare attention shifts elicited by five directional stimuli: a photograph of a human face, a line drawing of a human face, Zeno's gaze, Keepon's gaze and an arrow. Results indicate that all stimuli convey directional information, but that robots fail to elicit attentional cueing effects that are evoked by non-robot stimuli, regardless of robot anthropomorphism.

Teaching


Machine Learning for Neuroscience Instructor. Part of the 2016 Multimodal Neuroimaging Training Program.

  • Lecture 1 [slides][audio did not work, so no video for this one]
    • classification:
      • naive Bayes
      • SVM
      • kNN
    • linear regression
  • Lecture 2 [slides]
    • model selection:
      • overfitting
      • cross validation
      • feature selection
      • regularization
    • significance testing:
      • permutation test
      • multiple comparisons corrections
  • Lecture 3 [slides]
    • dimensionality reduction:
      • PCA and ICA
      • CCA
      • Laplacian eigenmaps
    • clustering
      • k-means
      • spectral clustering
      • divisive and agglomerative clustering
  • Lecture 4 [slides]
    • latent variable models (Hidden Markov Models)
    • reinforcement learning
    • deep learning
      • common architectures and their uses: RNN, LSTM, DBN, CNN
      • AlphaGo algorithm details

Convex Optimization Teaching Assistant. Course taught by Ryan Tibshirani and Javier Peña. Recognized with the ML TA award.

Mathematical Neuroscience Teaching Assistant. Course taught by Brent Doiron.