I work on summarization and other conditional text generation tasks. My research interests include better ways to reason over large quantities of knowledge, model large-scale structure in text, and effectively integrate external knowledge into models. Currently, my work focuses on long-document and multi-document summarization.
I’m also broadly interested in meta-analysis of the NLP community, including critically examining the benchmarks, datasets, and modeling choices we take as defaults. Right now, some great collaborators and I are interviewing people about paradigm shifts in NLP; if you’ve published 3+ papers in NLP-related venues and would have time for a 60-minute interview, please reach out!
Before coming to CMU, I received my bachelor’s in math and computer science from the University of Arizona, where I was advised by Steven Bethard.
In my spare time, I write and read speculative fiction and play tabletop games.
| Dec 7, 2022 | I’ll be presenting our Findings paper on style transfer for dialogue summarization in the GEM poster session at EMNLP 2022! |
| Jul 15, 2022 | I co-presented work on bias transfer from pretraining datasets at the Gender Bias in NLP workshop at NAACL 2022. |
| Nov 11, 2021 | I presented my undergraduate thesis work on bias detection at the 2021 Workshop on Noisy User-generated Text! |
[Findings] He Said, She Said: Style Transfer for Shifting the Perspective of Dialogues. In *Findings of the Association for Computational Linguistics: EMNLP 2022*, 2022.
[GeBNLP] Evaluating Gender Bias Transfer from Film Data. In *Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)*, Jul 2022.
[W-NUT] Detection of Puffery on the English Wikipedia. In *Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)*, Nov 2021.