Learning Robust Multiclass Multi-Label Text Classifiers
In this talk, we will propose a multiclass multi-label (MCML) classification approach to text categorization (TC). To take full advantage of both positive and negative training examples, a maximal figure-of-merit (MFoM) learning algorithm is introduced to train high-performance MCML classifiers. In contrast to conventional binary classification, the proposed scheme assigns a uniform score function to each category for each given test sample, so the classical Bayes decision rules can be applied directly. Since all the MCML MFoM classifiers are trained simultaneously, we expect them to be more robust and to outperform binary classifiers that are trained separately. Experimental results on a TC benchmark dataset will be shown at the end of the talk. Notably, the proposed approach also carries over to other domains where learning robust MCML classifiers is desired.
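The per-category score functions and smoothed decision rule can be sketched as follows. This is a minimal illustration, not the talk's actual MFoM training procedure: it assumes linear per-class scores g_j(x) = w_j . x and uses a sigmoid-smoothed one-vs-competitors misclassification measure of the kind common in MFoM-style formulations; all function names and parameters here are hypothetical.

```python
import numpy as np

def class_scores(W, x):
    """Per-class discriminant scores g_j(x) = w_j . x (linear sketch)."""
    return W @ x

def mfom_losses(scores, eta=1.0, beta=0.0):
    """Smoothed per-class 0-1 losses in an MFoM-like style:
    d_j = -g_j + log-mean-exp of competitor scores (positive when the
    class loses to its competitors), then l_j = sigmoid(eta * (d_j + beta))."""
    n = len(scores)
    losses = np.empty(n)
    for j in range(n):
        competitors = np.delete(scores, j)
        d_j = -scores[j] + np.log(np.mean(np.exp(competitors)))
        losses[j] = 1.0 / (1.0 + np.exp(-eta * (d_j + beta)))
    return losses

def predict_labels(scores, threshold=0.0):
    """Multi-label decision: assign every category whose score clears
    a common threshold (a uniform rule applied to all categories)."""
    return [j for j, s in enumerate(scores) if s > threshold]
```

Because every category gets a score from the same kind of function, a single thresholding rule yields a label *set* rather than a single class, which is what makes the multi-label decision uniform across categories.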
Communication complexity as a lower bound for learning in games
by Vincent Conitzer and Tuomas Sandholm
A fast-growing body of research in the AI and machine learning communities addresses learning in games, where there are multiple learners with different interests. This research adds to the more established work on learning in games conducted in economics. In part because of this clash of fields, the requirements placed on learning algorithms in this domain vary widely. The goal of this paper is to demonstrate how communication complexity can be used as a lower bound on the required learning time or cost. Because this lower bound makes no assumptions about the learning algorithm, it is universal: it applies under any set of requirements on the learning algorithm.
We characterize exactly the communication complexity of various solution concepts from game theory, namely Nash equilibrium, iterated dominant strategies (both strict and weak), and backwards induction. This gives the tightest lower bounds on learning in games that can be obtained with this method.
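One of the solution concepts above, iterated elimination of strictly dominated strategies, is easy to make concrete. The sketch below implements pure-strategy iterated strict dominance for a two-player bimatrix game; it is an illustration of the solution concept only, not of the paper's communication-complexity analysis, and it simplifies by ignoring domination by mixed strategies.

```python
import numpy as np

def iterated_strict_dominance(A, B):
    """Iteratively delete strictly dominated pure strategies in a bimatrix
    game. A[i][j] is the row player's payoff, B[i][j] the column player's.
    Returns the surviving row and column strategy indices."""
    rows = list(range(A.shape[0]))
    cols = list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        # Remove a row strictly dominated by another surviving row.
        for i in rows:
            if any(all(A[k, j] > A[i, j] for j in cols)
                   for k in rows if k != i):
                rows.remove(i)
                changed = True
                break
        # Remove a column strictly dominated by another surviving column.
        for j in cols:
            if any(all(B[i, k] > B[i, j] for i in rows)
                   for k in cols if k != j):
                cols.remove(j)
                changed = True
                break
    return rows, cols
```

For the prisoner's dilemma, the procedure eliminates "cooperate" for both players, leaving the unique surviving strategy profile. Intuitively, a learner that cannot observe the opponent's payoffs must acquire enough information about them to justify each elimination step, which is where communication lower bounds enter.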
Kernel Conditional Random Fields: Representation and Clique Selection
Kernel conditional random fields (KCRFs) are introduced as a framework for discriminative modeling of graph-structured data. A representer theorem for conditional graphical models is given which shows how kernel conditional random fields arise from risk minimization procedures defined using Mercer kernels on labeled graphs. A procedure for greedily selecting cliques in the dual representation is then proposed, which allows sparse representations. By incorporating kernels and implicit feature spaces into conditional graphical models, the framework enables semi-supervised learning algorithms for structured data through the use of graph kernels. The framework and clique selection methods are demonstrated in synthetic data experiments, and are also applied to the problem of protein secondary structure prediction.
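The flavor of greedy selection in a kernel dual representation can be sketched with a much simpler stand-in problem. The code below greedily picks training points as Gaussian-kernel basis centers, at each step adding the center that most reduces a regularized squared loss; this is only an analogy for the paper's clique-selection procedure (which scores candidate cliques in a conditional graphical model), and all names and parameters here are hypothetical.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (Mercer) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_basis_selection(X, y, n_basis=5, gamma=1.0, lam=1e-3):
    """Greedy forward selection of kernel basis centers: at each step,
    add the training point whose inclusion gives the lowest regularized
    least-squares fit on the current dual representation."""
    selected = []
    for _ in range(n_basis):
        best, best_loss = None, np.inf
        for c in range(len(X)):
            if c in selected:
                continue
            idx = selected + [c]
            K = rbf_kernel(X, X[idx], gamma)  # n x |idx| design matrix
            alpha = np.linalg.solve(K.T @ K + lam * np.eye(len(idx)),
                                    K.T @ y)
            loss = np.mean((K @ alpha - y) ** 2)
            if loss < best_loss:
                best, best_loss = c, loss
        selected.append(best)
    return selected
```

The point of the analogy is sparsity: rather than keeping a dual coefficient for every candidate (every clique, in the KCRF setting), greedy selection keeps only the few that most reduce the risk.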
Pradeep Ravikumar Last modified: Fri Oct 1 10:14:01 EDT 2004