Exploiting document structure and feature hierarchy for semi-supervised domain adaptation (by Andrew Arnold, Ramesh Nallapati and William W. Cohen @ ACL and CIKM 2008)

Abstract

In this work we try to bridge a gap often encountered by researchers: they find themselves with few or no labeled examples from their desired target domain, yet still have access to large amounts of labeled data from other related, but distinct, source domains, with seemingly no way to transfer knowledge from one to the other.

Experimentally, we focus on the problem of extracting protein mentions from academic publications in the field of biology, where the source domain data are abstracts labeled with protein mentions and the target domain data are wholly unlabeled captions. We mine the large number of full-text articles freely available on the Internet to supplement the limited amount of annotated data available.

By exploiting the explicit and implicit common structure of the different subsections of these documents, including the unlabeled full text, we are able to generate robust features that are insensitive to changes in the marginal and conditional distributions of classes and data across domains. We supplement these domain-insensitive features with automatically obtained high-confidence positive and negative predictions on the target domain to learn extractors that generalize well from one section of a document to another. Similarly, we develop a novel hierarchical prior structure over the features, motivated by the common structure of feature spaces for this task across natural language data sets. Finally, lacking labeled target testing data, we employ comparative user preference studies to evaluate the relative performance of the proposed methods against existing baselines.
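
The bootstrapping step mentioned above can be sketched concretely. The Python fragment below is a minimal illustration, not the authors' implementation: it assumes a scikit-learn logistic-regression classifier, generic feature vectors, and an arbitrary confidence threshold of 0.95, none of which come from the paper. The idea is simply to train on the labeled source domain, accept only the target-domain predictions the model is most confident about (both positive and negative), and retrain on the enlarged set.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_source, y_source, X_target, threshold=0.95, rounds=5):
        """Grow the training set with high-confidence target-domain predictions."""
        X_train, y_train = X_source.copy(), y_source.copy()
        remaining = X_target.copy()
        clf = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            clf.fit(X_train, y_train)
            if len(remaining) == 0:
                break
            probs = clf.predict_proba(remaining)[:, 1]
            # Keep only predictions the model is very sure about, both
            # positive (prob >= threshold) and negative (prob <= 1 - threshold).
            confident = (probs >= threshold) | (probs <= 1 - threshold)
            if not confident.any():
                break
            pseudo_labels = (probs[confident] >= threshold).astype(int)
            X_train = np.vstack([X_train, remaining[confident]])
            y_train = np.concatenate([y_train, pseudo_labels])
            remaining = remaining[~confident]
        return clf

    # Toy usage with random feature vectors (illustrative only).
    rng = np.random.default_rng(0)
    X_src = rng.normal(size=(100, 10))
    y_src = rng.integers(0, 2, size=100)
    X_tgt = rng.normal(size=(200, 10))
    model = self_train(X_src, y_src, X_tgt)

In the work described here, this idea is applied to token-level protein-mention extraction using the domain-insensitive features derived from document structure, so the base classifier and feature representation would differ from the toy setup above.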

Bio

Venue, Date, and Time

Venue: NSH 1507

Date: Monday, September 29, 2008

Time: 12:00 noon

Slides