Unabridged Learning

Rich Caruana and Joseph O'Sullivan

Lately we've been playing with a half-baked machine learning idea we are calling "unabridged learning". Our goal is to make learning more robust in the face of unexpected input data by learning more complete models from the available input features. In this talk we will try to define unabridged learning, present concrete examples of abridged vs. unabridged learning, and discuss methods for constructing unabridged learners.

Unabridged learning grew out of our research in inductive transfer. We think it's an exciting new idea that looks promising, but we haven't carried it very far yet. At the seminar we'd like to incite discussion as to whether this is an interesting approach, what alternatives should be considered, and which learners already do some form of unabridged learning.

AND, as an added bonus, we promise an Eat'n Park cookie to anyone who can come up with a better name than "unabridged".