I am a PhD student in the Language Technologies Institute at Carnegie Mellon University, advised by Matt Gormley and Graham Neubig. I am a member of NeuLab and an organizer for Queer in AI. I’m fortunate to be funded by an NSF Graduate Research Fellowship.
I work primarily on conditional generation, particularly summarization; my research interests include reasoning over large quantities of knowledge, modeling large-scale structure in text, and effectively integrating external knowledge into models. Currently, I’m excited about modeling long-range dependencies and long or complex inputs. I’m also broadly interested in meta-analysis of the NLP community, including critically examining the benchmarks, datasets, and modeling choices we take as defaults.
I’m trying to get to know my academic neighbors! If we work on similar things (or very different things that might be connected in interesting ways), I’d love to chat; please email me :) I’m also looking for internships for Summer 2024.
Before coming to CMU, I received my bachelor’s in math and computer science from the University of Arizona, where I was advised by Steven Bethard. Before coming to NLP, I worked in soil microbiology, built large-scale Rube Goldberg machines, and occasionally published short fiction. In my spare time, I write and read speculative fiction, hike, and play tabletop games.
|Oct 24, 2023||Excited to announce some new work going to EMNLP: a qualitative study of the NLP community (main); a system for distilling a model from a single textual instruction (demo); and an analysis paper about Minimum Bayes Risk decoding (Big Picture workshop)! Looking forward to seeing folks in Singapore.|
|Jun 6, 2023||Check out our recent preprints: Unlimiformer, a long-range transformer, and a survey on human feedback for generation! (Update September 2023: Unlimiformer was accepted to NeurIPS, and the survey was accepted to TACL!)|
|Dec 7, 2022||I’ll be presenting our Findings paper on style transfer for dialogue summarization in the GEM poster session at EMNLP 2022!|
|Jul 15, 2022||I co-presented work on bias transfer from pretraining datasets at the Gender Bias in NLP workshop at NAACL 2022!|
|Nov 11, 2021||I presented my undergraduate thesis work on promotional content detection at the 2021 Workshop on Noisy User-generated Text!|
To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing. In Empirical Methods in Natural Language Processing (EMNLP), 2023.

It’s MBR All the Way Down: Modern Generation Techniques Through the Lens of Minimum Bayes Risk. In Proceedings of the First Big Picture Workshop, 2023.

Unlimiformer: Long-Range Transformers with Unlimited Length Input. In Conference on Neural Information Processing Systems (NeurIPS), 2023.

Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation. In Transactions of the Association for Computational Linguistics (TACL), 2023.

Prompt2Model: Generating Deployable Models from Natural Language Instructions. In Empirical Methods in Natural Language Processing (EMNLP): Demo Track, 2023.