I am a second-year master's student in the Language Technologies Institute at Carnegie Mellon University, advised by Matt Gormley. I also work closely with members of NeuLab.
I work on summarization and other conditional text generation tasks. My research interests include reasoning over large quantities of knowledge, modeling large-scale structure in text, and effectively integrating external knowledge into models. Currently, my work focuses on long-document and multi-document summarization.
I’m also broadly interested in meta-analysis of the NLP community, including critically examining the benchmarks, datasets, and modeling choices we take as defaults. Right now, some great collaborators and I are interviewing people about paradigm shifts in NLP; if you’ve published 3+ papers in NLP-related venues and have time for a 60-minute interview, please reach out!
Before coming to CMU, I received my bachelor's in math and computer science from the University of Arizona, where I was advised by Steven Bethard.
In my spare time, I write and read speculative fiction and play tabletop games.
Jun 6, 2023: Check out our recent preprints: Unlimiformer, a long-range transformer, and a survey on human feedback for generation!

Dec 7, 2022: I’ll be presenting our Findings paper on style transfer for dialogue summarization in the GEM poster session at EMNLP 2022!

Jul 15, 2022: I co-presented work on bias transfer from pretraining datasets at the Gender Bias in NLP workshop at NAACL 2022.

Nov 11, 2021: I presented my undergraduate thesis work on bias detection at the 2021 Workshop on Noisy User-generated Text!
Preprint: Unlimiformer: Long-Range Transformers with Unlimited Length Input (2023)

Preprint: Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation (2023)