There are many opportunities for using machine learning on email to help users save time and accomplish their goals, such as spam classification, reply prediction, and message categorization. However, building a separate model for each task is inefficient, since each model must learn its own internal representation of messages. A general-purpose representation could eliminate this redundant computation and serve many downstream tasks. We train encoder-decoder neural networks on self-supervised mail tasks and take the encoder output of these networks as the representation of new mail messages. Simple models for downstream tasks can then be trained on these representations. We illustrate this method on a pilot task of RSVP classification and find that the general-purpose representation performs comparably to a model built specifically for this task.
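The pipeline the abstract describes (self-supervised encoder-decoder training, then a simple classifier on the frozen encoder output) can be sketched roughly as follows. This is a minimal toy illustration, not the talk's actual model: the data, the linear autoencoder, the synthetic RSVP label, and all sizes are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for mail messages: bag-of-words count vectors.
# (All names, sizes, and data here are illustrative, not from the talk.)
n_msgs, vocab, dim = 200, 30, 8
X = rng.poisson(1.0, size=(n_msgs, vocab)).astype(float)

# A linear encoder-decoder trained self-supervised to reconstruct its
# input; the encoder output serves as the message representation.
W_enc = rng.normal(scale=0.1, size=(vocab, dim))
W_dec = rng.normal(scale=0.1, size=(dim, vocab))

def reconstruction_loss(X, W_enc, W_dec):
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

loss_before = reconstruction_loss(X, W_enc, W_dec)
lr = 1e-3
for _ in range(500):
    H = X @ W_enc                        # encoder output
    err = H @ W_dec - X                  # decoder reconstruction error
    grad_dec = H.T @ err / n_msgs        # gradient w.r.t. decoder weights
    grad_enc = X.T @ (err @ W_dec.T) / n_msgs  # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_after = reconstruction_loss(X, W_enc, W_dec)

# Downstream task: a simple logistic-regression classifier trained on
# the frozen representations H (standing in for RSVP classification).
H = X @ W_enc
y = (X[:, 0] > 1).astype(float)          # synthetic label for the demo
w = np.zeros(dim)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(H @ w)))   # sigmoid predictions
    w -= 0.1 * H.T @ (p - y) / n_msgs    # logistic-regression gradient step
acc = np.mean(((1.0 / (1.0 + np.exp(-(H @ w)))) > 0.5) == (y > 0.5))
```

The key design point is that the encoder is trained once without labels; each new downstream task only fits a small model (here, a single weight vector) on top of the shared representation.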
KIRSTIN EARLY is a research scientist at Oath, where she works on dialog management for an intelligent personal assistant to plan conversations so the system can accomplish the user’s goals. She earned her PhD from the Machine Learning Department at Carnegie Mellon University in 2017. Her thesis work spanned the disciplines of machine learning, human-computer interaction, and survey methodology to develop adaptive methods for gathering data from people and systems while keeping costs low. In 2012 Early graduated from Vanderbilt University, where she majored in computer science, classics, and math. @oath
The AI Seminar is generously sponsored by Apple.