SPEAKER: Leonid Kontorovich
TIME: Wednesday 12-1pm, February 21, 2007
PLACE: NSH 1507
TITLE: Kernel Methods for Learning Languages

ABSTRACT: We present a novel paradigm for learning formal languages from positive and negative examples. The technique consists of mapping the strings to an appropriate high-dimensional feature space and constructing a separating hyperplane in that space. We initiate the study of the linear separability of automata and languages by examining the rich class of piecewise-testable languages. For this language family, we give a kernel that (1) renders it linearly separable and (2) is efficiently computable. We also give a universal kernel that renders all the regular languages linearly separable. We are not able to compute this kernel efficiently and conjecture that it is intractable, but we do have an efficient $\epsilon$-approximation.

Joint work with Corinna Cortes and Mehryar Mohri.
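To illustrate the flavor of such kernels (this is a sketch, not necessarily the kernel from the talk): piecewise-testable languages are defined by which subsequences a string contains, so a feature map indexed by subsequences makes them linearly separable. A simple related kernel counts common subsequences of two strings with multiplicity, and a dynamic program evaluates this inner product in O(|x| * |y|) time without materializing the exponentially large feature vector. The function name below is our own choice.

```python
def subsequence_kernel(x: str, y: str) -> int:
    """Illustrative all-subsequences kernel (with multiplicities):
    K(x, y) = sum over all strings u of occ(u, x) * occ(u, y),
    where occ(u, s) counts occurrences of u as a subsequence of s
    (the empty string contributes 1 to each factor)."""
    n, m = len(x), len(y)
    # dp[i][j] = kernel value on the prefixes x[:i], y[:j];
    # an empty prefix matches only the empty subsequence, so base value 1.
    dp = [[1] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # inclusion-exclusion over dropping the last character of x or y
            dp[i][j] = dp[i - 1][j] + dp[i][j - 1] - dp[i - 1][j - 1]
            if x[i - 1] == y[j - 1]:
                # a matching pair of characters extends every common
                # subsequence pair of the shorter prefixes
                dp[i][j] += dp[i - 1][j - 1]
    return dp[n][m]
```

For example, `subsequence_kernel("ab", "ab")` is 4 (the pairs "", "a", "b", "ab"), and `subsequence_kernel("ab", "ba")` is 3 (only "", "a", "b" are common).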