Bio

I'm a third-year PhD student at Carnegie Mellon in a joint program between Machine Learning and Neural Computation. I'm most interested in creating statistical models of cognitive processes, such as concept representation and language, and using them to answer scientific questions about the human brain. Toward this goal, I work with my advisor, Tom Mitchell, on machine learning methods for high-dimensional time series modeling. My research is supported by an NSF Graduate Research Fellowship. Before beginning my graduate studies at CMU, I received a B.S. in both Computer Science and Cognitive Science from Yale University.

I also enjoy taking care of (helpful) bacteria and, from time to time, putting them to work making yogurt and sourdough bread.

Publications

Applying artificial vision models to human scene understanding
E. M. Aminoff, M. Toneva, A. Shrivastava, X. Chen, I. Misra, A. Gupta, and M. J. Tarr
Frontiers in Computational Neuroscience, 2015
[abs] [pdf]
Scene-Space Encoding within the Functional Scene-Selective Network
E. M. Aminoff, M. Toneva, A. Gupta, and M. J. Tarr
Journal of Vision, 2015
[abs]
Towards a model for mid-level feature representation of scenes
M. Toneva, E. M. Aminoff, A. Gupta, and M. Tarr
Oral presentation. WiML workshop at NIPS, 2014
[abs]
An Exploration of Social Grouping: Effects of Behavioral Mimicry, Appearance, and Eye Gaze
A. Nawroj, M. Toneva, H. Admoni, and B. Scassellati
Oral presentation. In Proceedings of the 36th Annual Conference of the Cognitive Science Society (CogSci 2014)
[abs] [pdf]
The Physical Presence of a Robot Tutor Increases Cognitive Learning Gains
D. Leyzberg, S. Spaulding, M. Toneva, and B. Scassellati
Poster. In Proceedings of the 34th Annual Conference of the Cognitive Science Society (CogSci 2012)
[abs] [pdf]
Robot gaze does not reflexively cue human attention
H. Admoni, C. Bank, J. Tan, M. Toneva, and B. Scassellati
Poster. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society (CogSci 2011)
[abs] [pdf]

Teaching

During the summer of 2016, I was fortunate to teach a lecture series on machine learning for neuroscientific applications as part of the 2016 Multimodal Neuroimaging Training Program (MNTP). MNTP is a summer program aimed at graduate students trained in neuroscience who would like to gain more experience with a neuroscientific modality other than their own.

My goal in putting together the curriculum for this module was to give an intuitive overview of machine learning concepts that are useful for working with neuroscience data.

Lecture 1: classification (naive Bayes, SVM, kNN) & regression (linear) [audio did not work, so no video for this one] [slides]
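
To give a flavor of what Lecture 1 covers, here is a minimal sketch (not the lecture code itself) that fits the three classifiers plus a linear regression on synthetic data with scikit-learn; the dataset sizes and hyperparameters are arbitrary illustrations.

```python
# Illustrative sketch only: synthetic data standing in for a "trials x features" dataset.
import numpy as np
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

# Synthetic classification data (sizes are made up for illustration).
X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("naive Bayes", GaussianNB()),
                  ("linear SVM", SVC(kernel="linear", C=1.0)),
                  ("kNN (k=5)", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(X_te, y_te):.2f}")

# Linear regression on a continuous target.
Xr, yr = make_regression(n_samples=200, n_features=50, noise=10.0, random_state=0)
reg = LinearRegression().fit(Xr[:150], yr[:150])
print(f"linear regression: test R^2 = {reg.score(Xr[150:], yr[150:]):.2f}")
```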

Lecture 2: model selection (overfitting, cross validation, feature selection, regularization) & significance testing (permutation test, multiple comparison corrections) [slides]
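
A minimal sketch of the Lecture 2 ideas (again, not the lecture code): cross-validating a regularized classifier, testing significance with a permutation test, and applying a Bonferroni correction; the number of comparisons at the end is a made-up placeholder.

```python
# Illustrative sketch only: cross-validation + permutation test on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, permutation_test_score

X, y = make_classification(n_samples=120, n_features=40, n_informative=8,
                           random_state=0)

# L2-regularized logistic regression; C controls the regularization strength.
clf = LogisticRegression(C=1.0, max_iter=1000)

# 5-fold cross-validation guards against overfitting to a single train/test split.
cv_scores = cross_val_score(clf, X, y, cv=5)
print(f"CV accuracy: {cv_scores.mean():.2f} +/- {cv_scores.std():.2f}")

# Permutation test: refit on label-shuffled data to build a null distribution.
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, cv=5, n_permutations=200, random_state=0)
print(f"observed accuracy = {score:.2f}, permutation p-value = {p_value:.3f}")

# With many tests (e.g. one per voxel or sensor), Bonferroni correction simply
# multiplies each p-value by the number of tests.
n_tests = 100  # hypothetical number of comparisons
print(f"Bonferroni-corrected p-value: {min(p_value * n_tests, 1.0):.3f}")
```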

Lecture 3: dimensionality reduction (PCA, ICA, CCA, Laplacian eigenmaps) & clustering (k-means, spectral clustering, divisive clustering, agglomerative clustering) [slides]
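
A minimal sketch of two of the Lecture 3 tools, PCA followed by k-means, on synthetic data; the number of components and clusters is chosen arbitrarily for illustration.

```python
# Illustrative sketch only: dimensionality reduction, then clustering.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic high-dimensional data with 3 underlying groups.
X, true_labels = make_blobs(n_samples=300, n_features=60, centers=3,
                            random_state=0)

# PCA: project onto the directions of largest variance.
pca = PCA(n_components=10)
X_low = pca.fit_transform(X)
print("variance explained by 10 components:",
      round(pca.explained_variance_ratio_.sum(), 2))

# k-means on the reduced representation.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(X_low)
print("cluster sizes:", np.bincount(cluster_labels))
```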

Lecture 4: latent variable models (HMM), reinforcement learning, deep learning (RNN, LSTM, DBN, CNN), AlphaGo algorithm details [slides]
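
A minimal sketch of the HMM forward algorithm from Lecture 4, with hand-picked transition and emission probabilities purely for illustration.

```python
# Illustrative sketch only: HMM forward pass with made-up parameters.
import numpy as np

# Two hidden states, three possible discrete observations.
pi = np.array([0.6, 0.4])               # initial state distribution
A = np.array([[0.7, 0.3],               # A[i, j] = P(next state j | state i)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],          # B[i, k] = P(observation k | state i)
              [0.1, 0.3, 0.6]])

obs = [0, 1, 2, 1]                      # observed sequence

# Forward recursion: alpha[i] = P(observations so far, current state = i)
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]

print("P(observation sequence) =", alpha.sum())
```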