My research is at the intersection of Machine Learning, Natural Language Processing, and Neuroscience, with a focus on building computational models of language processing in the brain that can also improve natural language processing systems. In September 2022, I'm starting a tenure-track faculty position (W2, equivalent to Assistant Professor in the U.S.) at the Max Planck Institute for Software Systems. I'm looking for students to join my group in September 2022!
I'm currently a postdoctoral fellow at the Princeton Neuroscience Institute, where I study the role of episodic memory in language comprehension both in the brain and in machines. I'm fortunate to be advised by Ken Norman and Uri Hasson, and my postdoctoral research is supported by the C.V. Starr fellowship. Prior to Princeton, I received my Ph.D. from Carnegie Mellon University in a joint program between Machine Learning and Neural Computation. Three years of my Ph.D. research were supported by the NSF graduate fellowship, awarded for interdisciplinary research in machine learning and computational neuroscience. Before beginning my graduate studies at CMU, I received a B.S. in both Computer Science and Cognitive Science at Yale University.
- Jan 2022: New paper that proposes a causal inference framework for effects of complex stimuli in the brain accepted to CLeaR 2022!
- Nov 2021: New paper on utilizing brain recordings from multiple subjects for better encoding/decoding now out in Frontiers in Computational Neuroscience!
- Oct 2021: Organized a symposium at SNL 2021 on "What can NLP systems teach us about language in the brain?"
- Sep 2021: Awarded C.V. Starr Fellowship for postdoctoral research in computational neuroscience at Princeton University.
- July 2021: Ph.D. Thesis recognized with an Honorable Mention by the Society for the Neurobiology of Language Dissertation Award.
- July 2021: Defended my Ph.D. Thesis "Bridging Language in Machines with Language in the Brain"!
- May 2021: Co-organized an ICLR 2021 workshop on "How Can Findings About The Brain Improve AI Systems?"
- Mar 2021: Invited to speak at the Courtois Neuromod Month about NLP systems as model organisms for human language processing. [video]
- Feb 2021: New preprint on injecting linguistic structure into deep NLP models to enable more precise scientific inferences about alignments with brain recordings. Enjoyed this collaboration with CoAStaL!
- Dec 2020: Presented our work on modeling the task effects on meaning representation at NeurIPS 2020. [poster]
- Oct 2020: Gave a talk at Neuromatch 3.0 about our work on task effects on meaning representation. [video]
- Oct 2020: Invited to speak about nonlinear models for scientific discovery about language in the brain at the CCN workshop "Is it that simple? The use of linear models in cognitive neuroscience". [talk slides] [talk recording] [panel recording]
- Oct 2020: Shared a new preprint showing that combining computational controls with natural text reveals new aspects of meaning composition.
- Sep 2020: New work on modeling task effects on meaning representation in the brain accepted to NeurIPS 2020!
- Dec 2019: Thrilled to see our NeurIPS work featured by ZDNet in their NeurIPS 2019 research highlights!
- Sep 2019: Two works on language in the brain & machines accepted at NeurIPS 2019 (Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain) and Inducing brain-relevant bias in natural language processing models)!
Publications
Same Cause; Different Effect in the Brain
M. Toneva*, J. Williams*, A. Bollu, C. Dann, and L. Wehbe (Accepted for Publication)
CLeaR 2022 [abs] [pdf coming soon]
To study information processing in the brain, neuroscientists manipulate experimental stimuli while recording participant brain activity. They can then use encoding models to find out which brain "zone" (e.g. which region of interest, volume pixel or electrophysiology sensor) is predicted from the stimulus properties. Given the assumptions underlying this setup, when the stimulus properties are predictive of the activity in a zone, these properties are understood to cause activity in that zone. In recent years, researchers have begun using neural networks to construct representations that capture the diverse properties of complex stimuli, such as natural language or natural images. Encoding models built using these high-dimensional representations are often able to accurately predict the activity in large swathes of cortex, suggesting that the activity in all these brain zones is caused by the stimulus properties captured in the neural network representation. It is then natural to ask: "Is the activity in these different brain zones caused by the stimulus properties in the same way?" In neuroscientific terms, this corresponds to asking if these different zones process the stimulus properties in the same way. Here, we propose a new framework that enables researchers to ask if the properties of a stimulus affect two brain zones in the same way. We use simulated data and two real fMRI datasets with complex naturalistic stimuli to show that our framework enables us to make such inferences. Our inferences are strikingly consistent between the two datasets, indicating that the proposed framework is a promising new tool for neuroscientists to understand how information is processed in the brain.
Single-trial MEG data can be denoised through cross-subject predictive modeling
S. Ravishankar, M. Toneva, and L. Wehbe
Frontiers in Computational Neuroscience 2021 [abs] [pdf]
A pervasive challenge in brain imaging is the presence of noise that hinders investigation of underlying neural processes, with Magnetoencephalography (MEG) in particular having very low Signal-to-Noise Ratio (SNR). The established strategy to increase MEG's SNR involves averaging multiple repetitions of data corresponding to the same stimulus. However, stimulus repetition can be undesirable, because underlying neural activity has been shown to change across trials, and repeating stimuli limits the breadth of the stimulus space experienced by subjects. In particular, the rising popularity of naturalistic studies with a single viewing of a movie or story necessitates the discovery of new approaches to increase SNR. We introduce a simple framework to reduce noise in single-trial MEG data by leveraging correlations in neural responses across subjects as they experience the same stimulus. We demonstrate its use in a naturalistic reading comprehension task with 8 subjects, with MEG data collected while they read the same story a single time. We find that our procedure results in data with reduced noise and allows for better discovery of neural phenomena. As proof-of-concept, we show that the N400m's correlation with word surprisal, an established finding in the literature, is far more clearly observed in the denoised data than the original data. The denoised data also shows higher decoding and encoding accuracy than the original data, indicating that the neural signals associated with reading are either preserved or enhanced after the denoising procedure.
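The core idea — replacing each subject's noisy recording with a prediction built only from the other subjects' recordings of the same stimulus — can be sketched roughly as follows. This is a simplified numpy version with an in-sample, closed-form ridge fit; the paper's actual procedure and regularization choices may differ.

```python
import numpy as np

def denoise_cross_subject(data, alpha=1.0):
    """data: (n_subjects, n_samples, n_sensors), all subjects experiencing
    the same stimulus once. Each subject's recording is replaced by a
    ridge-regression prediction from all other subjects' sensors, which
    keeps mostly the stimulus-driven signal shared across subjects."""
    n_subj = data.shape[0]
    denoised = np.empty_like(data, dtype=float)
    for s in range(n_subj):
        # Other subjects' sensors, stacked side by side, predict subject s.
        X = np.concatenate([data[o] for o in range(n_subj) if o != s], axis=1)
        # Closed-form ridge: W = (X^T X + alpha*I)^{-1} X^T Y
        W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ data[s])
        denoised[s] = X @ W
    return denoised
```

Because subject s's own noise never enters the predictors, the prediction can only carry variance that is correlated across subjects; a faithful implementation would additionally fit and evaluate on disjoint time segments to avoid in-sample optimism.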
Does injecting linguistic structure into language models lead to better alignment with brain recordings?
M. Abdou, A. V. González, M. Toneva, D. Hershcovich, A. Søgaard
arXiv 2021 [abs] [pdf]
Neuroscientists evaluate deep neural networks for natural language processing as possible candidate models for how language is processed in the brain. These models are often trained without explicit linguistic supervision, but have been shown to learn some linguistic structure in the absence of such supervision (Manning et al., 2020), potentially questioning the relevance of symbolic linguistic theories in modeling such cognitive processes (Warstadt and Bowman, 2020). We evaluate across two fMRI datasets whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms. Using structure from dependency or minimal recursion semantic annotations, we find alignments improve significantly for one of the datasets. For the other dataset, the results are more mixed. We present an extensive analysis of these results. Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain, expanding the range of possible scientific inferences a neuroscientist could make, and opens up new opportunities for cross-pollination between computational neuroscience and linguistics.
Combining computational controls with natural text reveals new aspects of meaning composition
M. Toneva, T. Mitchell, and L. Wehbe
bioRxiv 2020 [abs] [pdf]
To study a core component of human intelligence---our ability to combine the meaning of words---neuroscientists look for neural correlates of meaning composition, such as brain activity proportional to the difficulty of understanding a sentence. However, little is known about the product of meaning composition---the combined meaning of words beyond their individual meaning. We term this product "supra-word meaning" and devise a computational representation for it by using recent neural network algorithms and a new technique to disentangle composed- from individual-word meaning. Using functional magnetic resonance imaging, we reveal that hubs that are thought to process lexical-level meaning also maintain supra-word meaning, suggesting a common substrate for lexical and combinatorial semantics. Surprisingly, we cannot detect supra-word meaning in magnetoencephalography, which suggests that composed meaning is maintained through a different neural mechanism than synchronized firing. This sensitivity difference has implications for past neuroimaging results and future wearable neurotechnology.
Modeling task effects on meaning representation in the brain via zero-shot MEG prediction
M. Toneva*, O. Stretcu*, B. Poczos, L. Wehbe, and T. Mitchell
NeurIPS 2020 [abs] [pdf] [code] [video]
How meaning is represented in the brain is still one of the big open questions in neuroscience. Does a word (e.g., bird) always have the same representation, or does the task under which the word is processed alter its representation (answering "can you eat it?" versus "can it fly?")? The brain activity of subjects who read the same word while performing different semantic tasks has been shown to differ across tasks. However, it is still not understood how the task itself contributes to this difference. In the current work, we study Magnetoencephalography (MEG) brain recordings of participants tasked with answering questions about concrete nouns. We investigate the effect of the task (i.e. the question being asked) on the processing of the concrete noun by predicting the millisecond-resolution MEG recordings as a function of both the semantics of the noun and the task. Using this approach, we test several hypotheses about the task-stimulus interactions by comparing the zero-shot predictions made by these hypotheses for novel tasks and nouns not seen during training. We find that incorporating the task semantics significantly improves the prediction of MEG recordings, across participants. The improvement occurs 475–550 ms after the participants first see the word, which corresponds to what is considered to be the ending time of semantic processing for a word. These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimuli.
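One simple way to let a single encoding model express task-stimulus interactions, and thereby make zero-shot predictions for unseen task-noun pairs, is to build each trial's features from the task embedding, the noun embedding, and their outer product. This is a hypothetical feature scheme for illustration, not the exact parameterization tested in the paper.

```python
import numpy as np

def task_stimulus_features(task_vec, noun_vec, interaction=True):
    """Feature vector for one (task, noun) trial: the two embeddings
    concatenated, optionally with their outer product so a linear model
    can learn task-dependent modulation of noun semantics. Because the
    features are built compositionally from the two embeddings, the
    fitted model can predict responses to task-noun pairs never seen
    together in training."""
    task_vec = np.asarray(task_vec, dtype=float)
    noun_vec = np.asarray(noun_vec, dtype=float)
    feats = [task_vec, noun_vec]
    if interaction:
        # Interaction terms: every task dimension times every noun dimension.
        feats.append(np.outer(task_vec, noun_vec).ravel())
    return np.concatenate(feats)
```

A linear regression from these features to the MEG sensors, fitted on some task-noun pairs and evaluated on held-out pairs, is the flavor of comparison the abstract describes.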
Investigating different alignment methods between natural and artificial neural networks for language processing
A. Bollu, M. Toneva, and L. Wehbe
SNL 2020 [abs]
Aligning the internal representational spaces of state-of-the-art natural language processing (NLP) models with those of the brain has revealed a great deal of overlap in what both systems capture about their language input. Prior work investigated this alignment using linear encoding models that predict each fMRI voxel as an independent linear combination of the NLP representations. However, a linear mapping may fail to align nonlinearly encoded information within the NLP and fMRI representations, and is not well equipped to benefit from information shared among groups of voxels. Here, we investigate the effect of varying encoding model complexity on alignment performance. We align fMRI recordings of 8 participants reading naturalistic text word-by-word with intermediate representations from BERT, a state-of-the-art NLP model, that correspond to the stimulus text. We investigate three encoding models that predict the fMRI voxels as a function of the BERT representations: LinearAnalytical - linear model where weights were estimated using a closed-form solution; LinearGD - linear model trained using gradient descent (GD); MLPGD - multilayer perceptron (MLP) with one hidden layer trained using Batch GD. Two key features separate MLPGD from the linear models: (1) a nonlinear activation layer and (2) predicting all voxels jointly using a shared hidden layer. We include LinearGD to identify whether any performance differences can be attributed to the training method. We evaluate alignment performance by computing the mean Pearson correlations between predicted and true voxel activities within regions of interest (ROIs) known to be consistently activated during language processing[5,6]. We additionally evaluate each encoding model against a noise ceiling, computed based on pairwise correlations between participants’ fMRI recordings. 
We use paired t-tests to test for significant differences between model performance across subjects and pycortex to visualize voxel correlations on a 3D brain surface. We find no significant difference between LinearAnalytical and LinearGD (p>0.05 for all ROIs). LinearGD performs on par with the noise ceiling in 5 ROIs (p>0.2), and worse in the dorsomedial prefrontal cortex (dmPFC, p=0.009), inferior frontal gyrus pars orbitalis (IFGorb, p=0.05) and posterior cingulate (pCingulate, p=0.06), revealing room for improvement in those regions. Differences between LinearGD and MLPGD evaluated based on the whole ROIs are not significant (p>0.05), but qualitative analysis reveals smaller clusters within the dmPFC, IFGorb and pCingulate where MLPGD outperforms LinearGD. We further observe that the encoding models sometimes outperform the estimated noise ceiling, especially within the posterior temporal lobe, angular gyrus and middle frontal gyrus. Interestingly, our qualitative analysis of voxel correlations reveals clusters within the dmPFC, IFGorb and pCingulate that are better predicted by the MLP architecture. One interpretation of this finding is that these clusters may process different information from the rest of the region -- information that only a nonlinear alignment can reveal -- but further investigation is necessary. We also find that the noise ceiling computation provides suboptimal estimates. A better noise ceiling may provide stronger evidence for our observations and highlight other areas where the encoding model can be improved upon as a guide to future research. References: https://tinyurl.com/y2v23rd2
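A minimal sketch of the pairwise noise-ceiling idea used above — mean inter-subject correlation per voxel — is below; the abstract's exact estimator may differ.

```python
import numpy as np
from itertools import combinations

def pairwise_noise_ceiling(data):
    """data: (n_subjects, n_samples, n_voxels) fMRI responses to the same
    stimulus. Returns, per voxel, the mean pairwise Pearson correlation
    between subjects' time courses: a rough upper bound on how well any
    stimulus-driven encoding model should predict that voxel, since only
    the signal shared across subjects is predictable from the stimulus."""
    # Z-score each subject's voxel time courses, then a pair's Pearson r
    # is the mean over samples of the product of z-scores.
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    pair_corrs = [(z[a] * z[b]).mean(axis=0)
                  for a, b in combinations(range(data.shape[0]), 2)]
    return np.mean(pair_corrs, axis=0)
```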
Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)
M. Toneva and L. Wehbe
NeurIPS 2019 [abs] [pdf] [code]
Neural network models for NLP are typically implemented without the explicit encoding of language rules and yet they are able to break one performance record after another. This has generated a lot of research interest in interpreting the representations learned by these networks. We propose here a novel interpretation approach that relies on the only processing system we have that does understand language: the human brain. We use brain imaging recordings of subjects reading complex natural text to interpret word and sequence embeddings from 4 recent NLP models - ELMo, USE, BERT and Transformer-XL. We study how their representations differ across layer depth, context length, and attention type. Our results reveal differences in the context-related representations across these models. Further, in the transformer models, we find an interaction between layer depth and context length, and between layer depth and attention type. We finally hypothesize that altering BERT to better align with brain recordings would enable it to also better understand language. Probing the altered BERT using syntactic NLP tasks reveals that the model with increased brain-alignment outperforms the original model. Cognitive neuroscientists have already begun using NLP networks to study the brain, and this work closes the loop to allow the interaction between NLP and cognitive neuroscience to be a true cross-pollination.
Inducing brain-relevant bias in natural language processing models
D. Schwartz, M. Toneva, and L. Wehbe
NeurIPS 2019 [abs] [pdf] [code]
Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain. However, these models have not been specifically designed to capture the way the brain represents language meaning. We hypothesize that fine-tuning these models to predict recordings of brain activity of people reading text will lead to representations that encode more brain-activity-relevant language information. We demonstrate that a version of BERT, a recently introduced and powerful language model, can improve the prediction of brain activity after fine-tuning. We show that the relationship between language and brain activity learned by BERT during this fine-tuning transfers across multiple participants. We also show that fine-tuned representations learned from both magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are better for predicting fMRI than the representations learned from fMRI alone, indicating that the learned representations capture brain-activity-relevant information that is not simply an artifact of the modality. While changes to language representations help the model predict brain activity, they also do not harm the model's ability to perform downstream NLP tasks. Our findings are notable for research on language understanding in the brain.
Investigating task effects on brain activity during stimulus presentation in MEG
M. Toneva*, O. Stretcu*, B. Poczos, and T. Mitchell
OHBM 2019 [abs]
Recorded brain activity of subjects who perceive the same stimulus (e.g. a word) while performing different semantic tasks (e.g. identifying whether the word belongs to a particular category) has been shown to differ across tasks. However, it is not well understood how precisely the task contributes to this brain activity. In the current work, we propose multiple hypotheses of how possible interactions between the task and stimulus semantics can be related to the observed brain activity. We test these hypotheses by designing machine learning models to represent each hypothesis, training them to predict the recorded brain activity, and comparing their performance. We investigate a magnetoencephalography (MEG) dataset, where subjects were tasked to answer 20 yes/no questions (e.g. 'Is it manmade?') about concrete nouns and their line drawings. Each question-stimulus pair is presented only once. Here we consider each question as a different task. We show that incorporating task semantics improves the prediction of single-trial MEG data by an average of 10% across subjects.
An empirical study of example forgetting during deep neural network learning
M. Toneva*, A. Sordoni*, R. Tachet des Combes*, A. Trischler, Y. Bengio, and G. Gordon
ICLR 2019 [abs] [pdf] [code] [open review]
Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a "forgetting event" to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.
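The central definition is easy to operationalize: given one example's per-epoch correctness history, count the correct-to-incorrect transitions. A minimal sketch of the paper's forgetting-event statistic:

```python
def count_forgetting_events(acc_history):
    """acc_history: sequence of 0/1 flags, one per training epoch, where 1
    means the example was classified correctly at that epoch. A forgetting
    event is any 1 -> 0 transition; examples that are learned and never
    forgotten have zero events and are candidates for removal from the
    training set."""
    return sum(1 for prev, cur in zip(acc_history, acc_history[1:])
               if prev == 1 and cur == 0)
```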
Word length processing in left lateraloccipital through region-to-region connectivity: an MEG study
M. Toneva and T. Mitchell
OHBM 2018 [abs]
A previous MEG study found that many features of stimuli can be decoded around the same time but at different places in the brain, posing the question of how information processing is coordinated between brain regions. Previous approaches to this question are of two types. The first uses a classifier or regression to uncover the relative timings of feature decodability in different brain regions. While addressing when and where information is processed, this approach does not specify how information is coordinated. The second type estimates when the connectivity between regions changes. While this approach assumes that information is coordinated by communication, it does not directly relate to the information content. We aim to more directly relate processing of information content to connectivity. We examine whether, during presentation of a word stimulus, the length of the word relates to how strongly the region that best encodes word length - left lateraloccipital cortex (LOC) - connects to other regions, at different times. For this purpose, we analyze MEG data from an experiment in which 9 subjects were presented 60 concrete nouns along with their line drawings. Our results suggest that the region that is best at encoding a perceptual stimulus feature - word length - has a connectivity network in which the connection strengths vary with the value of the feature. Furthermore, we observe this relationship prior to the peak information decodability in any region. One hypothesis is that information necessary for the processing of the feature is communicated to the seed region by varying connection strengths. Further analysis for a stimulus feature with a later decodability peak, such as a semantic feature, would add to the current results.
MEG representational similarity analysis implicates hierarchical integration in sentence processing
N. Rafidi*, D. Schwartz*, M. Toneva*, S. Jat, and T. Mitchell
OHBM 2018 [abs]
Multiple hypotheses exist for how the brain constructs sentence meaning. Most fall into two groups based on their assumptions about the processing order of the words within the sentence. The first considers a sequential processing order, while the second uses hierarchical syntactic rules. We test which hypothesis best explains MEG data recorded during reading of sentences with active and passive voice. Under the sequential hypothesis, the voice of a sentence should change its neural signature because word order changes. Under the hierarchical hypothesis, active and passive sentences corresponding to the same proposition should exhibit similar neural signatures. We test how well three language models explain MEG data collected during noun-verb-noun sentence reading. The models we test are bag of words (BoW), sequential word order, and hierarchical. All three models correlate with the MEG data for some timepoints, after verb presentation and briefly post sentence. However, the hierarchical model correlates significantly for more timepoints and is often the best correlated model even if that correlation is not significant. Our analysis shows that a hierarchical model of meaning correlates with neural activity for a longer duration than models which use a bag of words meaning representation or sequential meaning construction. Additionally, just after verb presentation the hierarchical model is the model best correlated with the MEG data. Our method enables the study of language processing hypotheses in the brain at a fine time scale and can be applied to a wide variety of language models.
Applying artificial vision models to human scene understanding
E. M. Aminoff, M. Toneva, A. Shrivastava, X. Chen, I. Misra, A. Gupta, and M. J. Tarr
Frontiers in Computational Neuroscience 2015 [abs] [pdf]
How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective—the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)—have typically focused on single visual dimensions (e.g., size), rather than the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally-measured judgments of visually-perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN—the models that best accounted for the patterns obtained from PPA and TOS—were different from the GIST model that best accounted for the pattern obtained from RSC; (3) The best performing models outperformed behaviorally-measured judgments of scene similarity in accounting for neural data. One computer vision method—NEIL ("Never-Ending-Image-Learner"), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes—showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network.
Scene-space encoding within the functional scene-selective network
E. M. Aminoff, M. Toneva, A. Gupta, and M. J. Tarr
VSS 2015 [abs]
High-level visual neuroscience has often focused on how different visual categories are encoded in the brain. For example, we know how the brain responds when viewing scenes as compared to faces or other objects - three regions are consistently engaged: the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area/transverse occipital sulcus (TOS). Here we explore the fine-grained responses of these three regions when viewing 100 different scenes. We asked: 1) Can neural signals differentiate the 100 exemplars? 2) Are the PPA, RSC, and TOS strongly activated by the same exemplars and, more generally, are the "scene-spaces" representing how scenes are encoded in these regions similar? In an fMRI study of 100 scenes we found that the scenes eliciting the greatest BOLD signal were largely the same across the PPA, RSC, and TOS. Remarkably, the orderings, from strongest to weakest, of scenes were highly correlated across all three regions (r = .82), but were only moderately correlated with non-scene selective brain regions (r = .30). The high similarity across scene-selective regions suggests that a reliable and distinguishable feature space encodes visual scenes. To better understand the potential feature space, we compared the neural scene-space to scene-spaces defined by either several different computer vision models or behavioral measures of scene similarity. Computer vision models that rely on more complex, mid- to high-level visual features best accounted for the pattern of BOLD signal in scene-selective regions and, interestingly, the better-performing models exceeded the performance of our behavioral measures. These results suggest a division of labor where the representations within the PPA and TOS focus on visual statistical regularities within scenes, whereas the representations within the RSC focus on a more high-level representation of scene category. Moreover, the data suggest the PPA mediates between the processing of the TOS and RSC.
Towards a model for mid-level feature representation of scenes
M. Toneva, E. M. Aminoff, A. Gupta, and M. Tarr
WIML workshop at NeurIPS 2014 Oral presentation [abs]
Never Ending Image Learner (NEIL) is a semi-supervised learning algorithm that continuously pulls images from the web and learns relationships among them. NEIL has classified over 400,000 images into 917 scene categories using 84 dimensions - termed "attributes". These attributes roughly correspond to mid-level visual features whose differential combinations define a large scene space. As such, NEIL's small set of attributes offers a candidate model for the psychological and neural representation of scenes. To investigate this, we tested for significant similarities between the structure of scene space defined by NEIL and the structure of scene space defined by patterns of human BOLD responses as measured by fMRI. The specific scenes in our study were selected by reducing the number of attributes to the 39 that best accounted for variance in NEIL's scene-attribute co-classification scores. Fifty scene categories were then selected such that each category scored highly on a different set of at most 3 of the 39 attributes. We then selected the two most representative images of the corresponding high-scoring attributes from each scene category, resulting in a total of 100 stimuli. Canonical correlation analysis (CCA) was used to test the relationship between measured BOLD patterns within the functionally-defined parahippocampal region and NEIL's representation of each stimulus as a vector containing stimulus-attribute co-classification scores on the 39 attributes. CCA revealed significant similarity between the local structures of the fMRI data and the NEIL representations for all participants. In contrast, neither the entire set of 84 attributes nor 39 randomly-chosen attributes produced significant results using this CCA method. Overall, our results indicate that subsets of the attributes learned by NEIL are effective in accounting for variation in the neural encoding of scenes - as such they represent a first pass compositional model of mid-level features for scene representation.
An exploration of social grouping: effects of behavioral mimicry, appearance, and eye gaze
A. Nawroj, M. Toneva, H. Admoni, and B. Scassellati
CogSci 2014 Oral presentation [abs] [pdf]
People naturally and easily establish social groupings based on appearance, behavior, and other nonverbal signals. However, psychologists have yet to understand how these varied signals interact. For example, which factor has the strongest effect on establishing social groups? What happens when two of the factors conflict? Part of the difficulty of answering these questions is that people are unique and stochastic stimuli. To address this problem, we use robots as a visually simple and precisely controllable platform for examining the relative influence of social grouping features. We examine how behavioral mimicry, similarity of appearance, and direction of gaze influence people's perception of which group a robot belongs to. Experimental data shows that behavioral mimicry has the most dominant influence on social grouping, though this influence is modulated by appearance. Non-mutual gaze was found to be a weak modulator of the perception of grouping. These results provide insight into the phenomenon of social grouping, and suggest areas for future exploration.
The physical presence of a robot tutor increases cognitive learning gains
D. Leyzberg, S. Spaulding, M. Toneva, and B. Scassellati
CogSci 2012 [abs] [pdf]
We present the results of a 100 participant study on the role of a robot's physical presence in a robot tutoring task. Participants were asked to solve a set of puzzles while being provided occasional gameplay advice by a robot tutor. Each participant was assigned one of five conditions: (1) no advice, (2) robot providing randomized advice, (3) voice of the robot providing personalized advice, (4) video representation of the robot providing personalized advice, or (5) physically-present robot providing personalized advice. We assess the tutor's effectiveness by the time it takes participants to complete the puzzles. Participants in the physically-present robot group solved most puzzles faster on average and improved their same-puzzle solving time significantly more than participants in any other group. Our study is the first to assess the effect of the physical presence of a robot in an automated tutoring interaction. We conclude that physical embodiment can produce measurable learning gains.
Robot gaze does not reflexively cue human attention
H. Admoni, C. Bank, J. Tan, M. Toneva, and B. Scassellati
CogSci 2011 [abs] [pdf]
Joint visual attention is a critical aspect of typical human interactions. Psychophysics experiments indicate that people exhibit strong reflexive attention shifts in the direction of another person's gaze, but not in the direction of non-social cues such as arrows. In this experiment, we ask whether robot gaze elicits the same reflexive cueing effect as human gaze. We consider two robots, Zeno and Keepon, to establish whether differences in cueing depend on level of robot anthropomorphism. Using psychophysics methods for measuring attention by analyzing time to identification of a visual probe, we compare attention shifts elicited by five directional stimuli: a photograph of a human face, a line drawing of a human face, Zeno's gaze, Keepon's gaze and an arrow. Results indicate that all stimuli convey directional information, but that robots fail to elicit attentional cueing effects that are evoked by non-robot stimuli, regardless of robot anthropomorphism.
Machine Learning for Neuroscience: Instructor as part of the 2016 Multimodal Neuroimaging Training Program
Lecture 1 [slides] (no video for this one: the audio recording did not work)
- naive Bayes
- linear regression
Lecture 2 [slides]
- model selection:
- cross validation
- feature selection
- significance testing:
- permutation test
- multiple comparisons corrections
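The significance-testing topics in Lecture 2 can be illustrated with a basic two-sample permutation test (an illustrative sketch, not material taken from the slides):

```python
import numpy as np

def permutation_test(x, y, n_perm=5000, seed=0):
    """Two-sided permutation test for a difference in means: repeatedly
    shuffle the pooled samples, resplit them into two groups of the
    original sizes, and count shuffles whose |mean difference| is at least
    as large as the observed one. The +1 terms keep the p-value strictly
    positive, a common finite-sample correction."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:x.size].mean() - pooled[x.size:].mean()) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```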
Lecture 3 [slides]
- dimensionality reduction:
- PCA and ICA
- Laplacian eigenmaps
- spectral clustering
- divisive and agglomerative clustering
Lecture 4 [slides]
- latent variable models (Hidden Markov Models)
- reinforcement learning
- deep learning
- common architectures and their uses: RNN, LSTM, DBN, CNN
- AlphaGo algorithm details
Mathematical Neuroscience: Teaching Assistant. Course taught by Brent Doiron.