2014 Fall Semester
Nov 17
Policy Search with Probabilistic Safety Guarantees: An Algorithm and its Application to Digital Marketing
Philip Thomas, UMass Amherst

Abstract

One of the fundamental challenges in reinforcement learning (RL) is to guarantee that a newly proposed policy that has not yet been deployed will be an improvement upon the current policy---that the RL algorithm is "safe". Such an algorithm would be a significant step towards widespread application of RL to real-life problems where deployment of an RL algorithm can be costly or dangerous if the policies it proposes perform worse than the current policy.

I will discuss my recent work (in collaboration with Adobe Research) on such a safe policy search algorithm. The viability of our approach hinges on a new concentration inequality particularly well suited to this application. I will present results from a large real-world digital marketing application before concluding with discussion of potential future research directions.
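
To make the safety test concrete, here is a minimal sketch of the general recipe behind such methods: evaluate the proposed policy off-policy via importance sampling, and deploy it only if a high-confidence lower bound on its performance beats the current policy. Hoeffding's inequality stands in for the talk's new, tighter concentration inequality, and the trajectory and policy interfaces are hypothetical.

```python
import numpy as np

def importance_weighted_returns(trajectories, pi_new, pi_behavior):
    """Per-trajectory importance-weighted returns for off-policy evaluation.

    Each trajectory is a list of (state, action, reward) triples; pi_new and
    pi_behavior map (state, action) to the probability of taking that action
    (hypothetical interfaces, for illustration only)."""
    weighted = []
    for traj in trajectories:
        ratio, ret = 1.0, 0.0
        for (s, a, r) in traj:
            ratio *= pi_new(s, a) / pi_behavior(s, a)  # likelihood ratio
            ret += r
        weighted.append(ratio * ret)
    return np.array(weighted)

def is_safe_to_deploy(trajectories, pi_new, pi_behavior, current_perf,
                      delta=0.05, b=10.0):
    """Approve pi_new only if a (1 - delta)-confidence lower bound on its
    performance exceeds the current policy's performance.  Weighted returns
    are truncated to [0, b] so Hoeffding's inequality applies."""
    x = np.clip(importance_weighted_returns(trajectories, pi_new, pi_behavior),
                0.0, b)
    lower = x.mean() - b * np.sqrt(np.log(1.0 / delta) / (2.0 * len(x)))
    return lower > current_perf
```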

Bio: Philip is a PhD student supervised by Professor Andrew Barto at the University of Massachusetts Amherst. During his PhD he has been an intern with Adobe Research and Kenji Doya's Neural Computation Unit. He received his B.S. and M.S. in computer science from Case Western Reserve University, where he was supervised by Professor Michael Branicky and where he was a member of Team Case in the DARPA Urban Challenge.

Oct 27
Communication efficient algorithms for distributed unsupervised learning
Yingyu (Bruce) Liang, Computer Science, Princeton

Abstract

We study the distributed computing setting in which there are multiple servers, each holding a set of points, that wish to compute functions on the union of their point sets. We first discuss two popular center-based clustering objectives, k-median and k-means. Following a classic approach in clustering by Har-Peled and Mazumdar, we reduce the problem of finding a clustering with low cost to the problem of finding a coreset (i.e., a summary) of small size. We provide a distributed method for constructing a global coreset which improves over previous methods by reducing the communication cost.

We then consider Principal Component Analysis (PCA), where the servers would like to compute a low dimensional subspace capturing as much of the variance of the union of their point sets as possible. We provide a computation and communication efficient algorithm for distributed PCA and also demonstrate how this can be used to further improve the communication and computational costs of k-means clustering and related problems. Our empirical study on real world data shows orders-of-magnitude savings in communication, with only a negligible degradation in solution quality.
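
As a rough illustration of the communication pattern (a sketch of one standard approach to distributed PCA, not necessarily the talk's exact algorithm or its guarantees): each server summarizes its local points by a truncated SVD and ships only t scaled right singular vectors, and a coordinator runs PCA on the stacked summaries.

```python
import numpy as np

def local_summary(points, t):
    # points: (n_i, d) local data, assumed centered; the t-row sketch keeps
    # the top right singular directions, scaled by their singular values.
    _, s, vt = np.linalg.svd(points, full_matrices=False)
    return s[:t, None] * vt[:t]              # (t, d): only t*d numbers sent

def distributed_pca(server_data, t, k):
    # Coordinator stacks the summaries and takes the top-k directions of the
    # stack; communication is O(t * d) per server instead of raw points.
    stacked = np.vstack([local_summary(P, t) for P in server_data])
    _, _, vt = np.linalg.svd(stacked, full_matrices=False)
    return vt[:k]                             # (k, d) approximate subspace

# toy usage: three servers, 200 points each in 10 dimensions
rng = np.random.default_rng(0)
data = [rng.normal(size=(200, 10)) @ np.diag([5, 4, 3] + [1] * 7)
        for _ in range(3)]
components = distributed_pca(data, t=5, k=2)
```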

Oct 20
Asymptotically Exact, Embarrassingly Parallel MCMC
Willie Neiswanger, Machine Learning Dept, CMU

Abstract

Communication costs, resulting from synchronization requirements during learning, can greatly slow down many parallel machine learning algorithms. We present a parallel Markov chain Monte Carlo (MCMC) algorithm in which subsets of data are processed independently, with very little communication. First, we arbitrarily partition data onto multiple machines. Then, on each machine, any classical MCMC method may be used to draw samples from a posterior distribution given the data subset. Finally, the samples from each machine are combined to form samples from the full posterior. This embarrassingly parallel algorithm allows each machine to act independently on a subset of the data (without communication) until the final combination stage. We prove that our algorithm generates asymptotically exact samples and empirically demonstrate its ability to parallelize burn-in and sampling in several models.
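
The combination stage is the non-obvious step. The talk's asymptotically exact combination multiplies nonparametric estimates of the subposterior densities; the simplest parametric variant below, exact when the subposteriors are Gaussian, conveys the idea: the full posterior is proportional to the product of the M subposteriors, and a product of Gaussians is again Gaussian.

```python
import numpy as np

def combine_subposteriors_gaussian(sample_sets):
    """Combine per-machine subposterior samples into an approximate full
    posterior.  Gaussian (parametric) variant of the combination step:
    the product of M Gaussians has precision = sum of the M precisions."""
    precisions, weighted_means = [], []
    for samples in sample_sets:                     # samples: (n_m, d)
        mu = np.atleast_1d(samples.mean(axis=0))
        prec = np.linalg.inv(np.atleast_2d(np.cov(samples, rowvar=False)))
        precisions.append(prec)
        weighted_means.append(prec @ mu)
    cov = np.linalg.inv(sum(precisions))
    mean = cov @ sum(weighted_means)
    return mean, cov
```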

This is joint work with Chong Wang and Eric Xing.

Oct 13
A Case for Trend Filtering
Aaditya Ramdas, Machine Learning Dept, CMU

Abstract

This talk will be about a common practical problem: estimating piecewise constant/linear/quadratic fits to low-dimensional data. I will first introduce Trend Filtering, a recently proposed tool for this problem, and compare it to the popular smoothing splines. Given its theoretical optimality (that I will briefly touch upon), the only roadblock to using it in practice is having robust and efficient algorithms. We take a major step in overcoming this problem with a state-of-the-art specialized "ADMM" algorithm. Furthermore, the proposed implementation is very simple, and importantly, it is flexible enough to extend to many interesting related problems, such as sparse trend filtering and isotonic trend filtering, as I will demonstrate with an application to a neuroscience problem.

Software for our method is easy to use, and it is made freely available, highly optimized in C++, and interfaced through an R package (function "trendfilter" in package "genlasso").
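
For intuition, here is a bare-bones ADMM splitting for the trend filtering problem min_x (1/2)||y - x||^2 + lam ||D x||_1, with D a difference operator. This is only a sketch: the specialized algorithm from the talk, and the optimized C++ behind the R function above, exploit banded structure that this plain-numpy version ignores.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def trend_filter_admm(y, lam, order=1, rho=1.0, iters=200):
    """order=1 gives piecewise-constant fits, order=2 piecewise-linear,
    order=3 piecewise-quadratic."""
    n = len(y)
    D = np.eye(n)
    for _ in range(order):
        D = np.diff(D, axis=0)            # k-th order difference operator
    z = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])
    A = np.eye(n) + rho * D.T @ D         # a real solver would factor this
    for _ in range(iters):                # once and exploit its bandedness
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))   # x-update
        z = soft(D @ x + u, lam / rho)                    # z-update
        u += D @ x - z                                    # dual update
    return x
```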

This is joint work with Ryan Tibshirani (Statistics and ML, Carnegie Mellon University).

Oct 6
Going to the Roots of Dependency Parsing
Miguel Ballesteros, Pompeu Fabra University

Abstract

Dependency trees used in syntactic parsing often include a root node representing a dummy word prefixed or suffixed to the sentence, a device that is generally considered a mere technical convenience and is tacitly assumed to have no impact on empirical results. We demonstrate that this assumption is false and that the accuracy of data-driven dependency parsers can in fact be sensitive to the existence and placement of the dummy root node. In particular, we show that a greedy, left-to-right, arc-eager transition-based parser consistently performs worse when the dummy root node is placed at the beginning of the sentence (following the current convention in data-driven dependency parsing) than when it is placed at the end or omitted completely. Control experiments with an arc-standard transition-based parser and an arc-factored graph-based parser reveal no consistent preferences but nevertheless exhibit considerable variation in results depending on root placement. We conclude that the treatment of dummy root nodes in data-driven dependency parsing is an underestimated source of variation in experiments and may also be a parameter worth tuning for some parsers.

Miguel is a visiting lecturer and postdoc at Pompeu Fabra University, Barcelona, Spain. He works on natural language processing and machine learning, with a special interest in linguistic structure prediction problems such as dependency parsing and phrase structure parsing. He completed his B.Sc., M.Sc., and Ph.D. at the Universidad Complutense de Madrid. In recent years he has been a visiting researcher at the Universities of Uppsala, Birmingham, and Singapore.

Sept 29
Learning economic parameters from revealed preferences
Ruth Urner, Machine Learning Dept, CMU

Abstract

A common assumption in Economics is that agents are utility maximizers: an agent, facing prices for a set of goods, will choose to buy the bundle of goods that she most prefers among all bundles that she can afford, according to some concave, non-decreasing utility function. In classical revealed preference analysis, the goal is to fit some concave, non-decreasing model to data from an agent's past purchases. However, due to the richness of this model class, this approach does not generalize well to predicting future purchases. A recent line of work, starting with Beigman and Vohra (2006) and Zadimoghaddam and Roth (2012), has addressed this issue by proposing to learn restricted classes of utility functions from revealed preference data.

This talk will present recent advances in this line of work. We provide sample complexity guarantees and efficient algorithms for learning linear utility functions from revealed preference data. At a technical level, our work establishes connections between learning from revealed preferences and problems of multi-class learning, combining recent advances on intrinsic sample complexity of multi-class learning based on compression schemes with a new algorithmic analysis yielding time- and sample-efficient procedures. Our technique yields numerous generalizations including the ability to learn other well-studied classes of utility functions, to deal with a misspecified model, and with non-linear prices. We believe it may lead to new solutions to a variety of learning problems in economic and game theoretic contexts.
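
To see the basic structure of the problem (a toy feasibility view, not the talk's algorithm): for a linear utility u, each observation says the chosen bundle was weakly preferred to every other affordable bundle, a set of linear constraints on u. With a finite candidate set of bundles (a simplifying assumption made here purely for illustration), a maximum-margin consistent utility is one LP away.

```python
import numpy as np
from scipy.optimize import linprog

def fit_linear_utility(observations, candidate_bundles):
    """Find a nonnegative linear utility u consistent with revealed
    preferences: in each (prices, budget, chosen) observation, 'chosen'
    must beat every affordable candidate by a margin m, which we maximize.
    Variables are (u_1, ..., u_d, m)."""
    d = len(candidate_bundles[0])
    A_ub, b_ub = [], []
    for prices, budget, chosen in observations:
        for x in candidate_bundles:
            if prices @ x <= budget and not np.array_equal(x, chosen):
                # u.(x - chosen) + m <= 0   <=>   u.chosen >= u.x + m
                A_ub.append(np.append(x - chosen, 1.0))
                b_ub.append(0.0)
    c = np.zeros(d + 1)
    c[-1] = -1.0                                  # minimize -m = maximize m
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=[np.append(np.ones(d), 0.0)], b_eq=[1.0],  # sum u = 1
                  bounds=[(0, None)] * d + [(None, None)])
    return res.x[:d] if res.success else None
```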

This talk is based on joint work with Nina Balcan, Amit Daniely, Ruta Mehta and Vijay V. Vazirani that will appear at WINE '14.

Sept 22
Data mining without "==": mining discriminative patches from unlabeled and weakly-labeled image collections
Carl Doersch, Machine Learning Dept, CMU

Abstract

Data mining---i.e., finding repeated, informative patterns in large datasets---has proven extremely difficult for visual data. A key issue in visual data mining is the lack of a reliable way to tell whether two images or image patches even depict the same thing. In this talk, I'll cover two recent works which can successfully mine discriminative sets of image patches in both weakly-supervised and fully unsupervised settings.

Our first work proposes discriminative mode seeking, an extension of Mean Shift to weakly-labeled data. Instead of finding the local maxima of a density, we exploit the weak label to partition the data into two sets and find the maxima of the density ratio. Given a dataset of image patches, and weak labels such as scene categories, these 'discriminative modes' correspond to remarkably meaningful visual patterns, including objects and object parts. Using these discriminative patches as an image representation, we obtain state-of-the-art results on a challenging indoor scene classification benchmark.

In the second part of the talk, I will discuss how we can extend this formulation to a fully unsupervised setting. Instead of using weak labels as supervision, we use the ability of an object patch to predict the rest of the object (its context) as supervisory signal to help discover visually consistent object clusters. The proposed method outperforms previous unsupervised as well as weakly-supervised object discovery approaches, and is shown to provide correspondences detailed enough to transfer keypoint annotations, even for extremely difficult datasets intended for benchmarking fully supervised object detection algorithms (e.g. Pascal VOC).

Sept 15
Privacy Preserving Neural Network Prediction on Encrypted Data
Pengtao Xie, Language Technologies Institute, CMU

Abstract

Recently, many cloud-based machine learning (ML) services have been launched, including Microsoft Azure Machine Learning, GraphLab, Google Prediction API and Ersatz Labs. Cloud ML makes machine learning very easy to use for common users, but it puts the privacy and security of users' data at risk. How to protect users' privacy in cloud ML is a big challenge. In this work, we focus on neural networks, a backbone model in machine learning, and investigate how to perform privacy-preserving neural network prediction on encrypted data. Users encrypt their data before uploading them to the cloud. The cloud performs neural network predictions over the encrypted data and obtains results that are also in encrypted form, which the cloud cannot decipher. The encrypted results are sent back to users, who decrypt them to obtain the plaintext results. In this process, the cloud never sees users' input data or output results, since both are encrypted. This provides strong protection of users' privacy. Meanwhile, with the help of homomorphic encryption, predictions made on encrypted data are nearly the same as those on plaintext data, so the predictive performance of the neural network is preserved.
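
To make the workflow concrete, here is a toy sketch of the protocol for a single linear (dot-product) layer, using the additively homomorphic Paillier scheme: ciphertexts are added by multiplying them, and multiplied by plaintext scalars by exponentiation, which is exactly what a weighted sum needs. The key sizes are demonstration-only, inputs are assumed quantized to integers, and non-linear activations (which need polynomial approximations or extra rounds of interaction) are omitted.

```python
import math, random

# --- key generation (toy primes; real keys are >= 2048 bits) ---
p, q = 1789, 1867
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # inverse of L(g^lam mod n^2)

def enc(m):                                    # user-side encryption
    r = random.randrange(1, n)
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def dec(c):                                    # user-side decryption
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def encrypted_dot(cipher_x, weights):
    # Cloud side: weighted sum of encrypted inputs, never seeing plaintext.
    out = 1
    for c, w in zip(cipher_x, weights):
        out = (out * pow(c, w, n2)) % n2       # Enc(a)^w = Enc(w*a); then add
    return out

x = [3, 1, 4]                                  # user's private features
w = [2, 5, 7]                                  # cloud's model weights
assert dec(encrypted_dot([enc(v) for v in x], w)) == 2*3 + 5*1 + 7*4
```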

Sept 8
On the Data Requirement for Learning the Tree of Life from Multiple Genes
Gautam Dasarathy, Machine Learning Dept, CMU

Abstract

Inferring the "tree of life" from genetic data is an important problem in evolutionary biology. It has been customary to think about this problem as one of learning the evolutionary history of a single gene. However, individual genes might have evolutionary histories that are (topologically) distinct from each other and, of course, from the underlying tree of life. In this talk, I will discuss a probabilistic model of evolution that takes this into account. I will then outline our recent theoretical explorations into questions such as reliable algorithms for learning the tree of life from several genes and the amount of data required for such algorithms to succeed.

This is joint work with Rob Nowak and Sebastien Roch.

2014 Spring Semester
May 12
Interpretable Semantic Vectors from a Joint Model of Brain- and Text-Based Meaning
Alona Fyshe, Machine Learning Dept, CMU

Abstract

Vector space models (VSMs) represent word meanings as points in a high dimensional space. VSMs are typically created using large text corpora, and so represent word semantics as observed in text. We present a new algorithm (JNNSE) that can incorporate a measure of semantics not previously used to create VSMs: brain activation data recorded while people read words. The resulting model takes advantage of the complementary strengths and weaknesses of corpus and brain activation data to give a more complete representation of semantics. Evaluations show that the model 1) matches a behavioral measure of semantics more closely, 2) can be used to predict corpus data for unseen words, and 3) has predictive power that generalizes across brain imaging technologies and across subjects. We believe that the model is thus a more faithful representation of mental vocabularies.

Joint work with Partha Talukdar, Brian Murphy and Tom Mitchell.

May 5
Estimating Accuracy from Unlabeled Data
Anthony Platanios, Machine Learning Dept, CMU

Abstract

We consider the question of how unlabeled data can be used to estimate the true accuracy of learned classifiers. This is an important question for any autonomous learning system that must estimate its accuracy without supervision, and also when classifiers trained from one data distribution must be applied to a new distribution (e.g., document classifiers trained on one text corpus are to be applied to a second corpus). We show how to accurately estimate error rate from unlabeled data when given a collection of competing classifiers that make independent errors, based on the agreement rates between subsets of these classifiers. We further show that even when the classifiers do not make independent errors, both their accuracies and error dependencies can be estimated in a multitask learning setting under practical assumptions. Experiments on two data sets demonstrate accurate estimates of accuracy from unlabeled data. These results are of practical significance in situations where labeled data is scarce, and shed light on the more general question of how the consistency among multiple functions is related to their true accuracies.
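
The independent-errors case is easy to demonstrate. For binary classifiers with accuracies a_i whose errors are independent, the observable agreement rate satisfies P(agree_ij) = a_i a_j + (1 - a_i)(1 - a_j); with three classifiers, three agreement rates determine three unknowns. A minimal sketch (note the built-in symmetry: accuracies (a_i) and (1 - a_i) fit the system equally well, hence the initial guess above one half):

```python
import numpy as np
from scipy.optimize import fsolve

def estimate_accuracies(h1, h2, h3):
    """Estimate accuracies of three binary classifiers from pairwise
    agreement on UNLABELED data, assuming independent errors."""
    agree = lambda u, v: np.mean(u == v)
    targets = [agree(h1, h2), agree(h1, h3), agree(h2, h3)]
    pairs = [(0, 1), (0, 2), (1, 2)]

    def equations(a):
        return [a[i] * a[j] + (1 - a[i]) * (1 - a[j]) - t
                for (i, j), t in zip(pairs, targets)]

    return fsolve(equations, [0.75, 0.75, 0.75])
```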

April 28
Imitation Learning for Natural Language Direction Following through Unknown Environments
Felix Duvallet, Robotics Institute, CMU

Abstract

Commanding robots through unconstrained natural language directions is intuitive, flexible, and does not require specialized interfaces or training. Providing this capability would enable effortless coordination in human-robot teams that operate in non-specialized environments. However, natural language direction following through unknown environments requires understanding the meaning of language, using a partial semantic world model to generate actions in the world, and reasoning about the environment and landmarks that have not yet been detected.

We address the problem of robots following natural language directions through complex unknown environments. By exploiting the structure of spatial language, we can frame direction following as a problem of sequential decision making under uncertainty. We learn a policy using imitation learning from demonstrations of people following directions. The trained policy predicts a sequence of actions that follow the directions, explores the environment (discovering new landmarks), backtracks when necessary, and explicitly declares when it has reached its destination. By training explicitly in unknown environments we can generalize to situations that have not been encountered previously.

This is work with Anthony Stentz (CMU), Tom Kollar (CMU), Matt Walter (MIT), Tom Howard (MIT), and Sachi Hemachandra (MIT).

April 21
AUC dominant unsupervised ensemble of binary classifiers
Priyanka Agarwal, IBM Research, India

Abstract

Ensemble methods are widely used in practice with the hope of obtaining better predictive performance than could be obtained from any of the constituent classifiers in the ensemble. Most of the existing literature is concerned with learning ensembles in a supervised setting. In this paper we propose an unsupervised iterative algorithm to combine the discriminant scores from different binary classifiers. We prove that (under certain assumptions) the Area Under the ROC Curve (AUC) of the resulting ensemble is greater than or equal to the AUC of the best classifier (with maximum AUC). We experimentally validate this claim on a number of datasets and show that the performance is better than that of supervised ensembles.

April 14
Inferring Political Ideologies from Text
Yanchuan Sim, School of Computer Science, CMU

Abstract

In this talk, I will present some of my recent work with my collaborators on building models for inferring political ideologies from text. Given political candidate speeches from the 2008 and 2012 US elections, we seek to measure their ideological positioning. To accomplish this, we infer ideological cues from a corpus of political writings annotated with known ideologies. We then represent the speeches of U.S. presidential candidates as sequences of cues and lags (filler distinguished only by its length in words). We apply a domain-informed Bayesian HMM to infer the proportions of ideologies each candidate uses in each campaign. The results are validated against a set of preregistered, domain expert authored hypotheses. I will also present some preliminary results on our work with briefs and data from the US Supreme Court, studying the latent behaviors of amicus brief filers from a utility maximizing perspective.

April 7
Scaling Graph-based Semi Supervised Learning to Large Number of Labels Using Count-Min Sketch
Partha Talukdar, School of Computer Science, CMU

Abstract

Graph-based Semi-supervised learning (SSL) algorithms have been successfully used in a large number of applications. These methods classify initially unlabeled nodes by propagating label information over the structure of the graph, starting from seed nodes. Graph-based SSL algorithms usually scale linearly in both the number of edges (|E|) and the number of distinct labels (m), and require O(m) space on each node. Unfortunately, there exist many applications of practical significance with very large m over large graphs, demanding better space and time complexity. In this talk, we propose MAD-Sketch, a novel graph-based SSL algorithm which compactly stores label distribution on each node using Count-min Sketch, a randomized data structure. We present theoretical analysis showing that under mild conditions, MAD-Sketch can reduce space complexity at each node from O(m) to O(log m), and achieve similar savings in time complexity as well. We support our analysis through experiments on multiple real world datasets. We observe that MAD-Sketch achieves similar performance as existing state-of-the-art graph-based SSL algorithms, while requiring a smaller memory footprint and at the same time achieving up to 10x speedup. We find that MAD-Sketch is able to scale to datasets with one million labels, which is beyond the scope of existing graph-based SSL algorithms.
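
The space saving rests on a standard randomized data structure. A Count-min Sketch absorbs any number of (label, score) increments into a fixed depth-by-width table of counters; a query never under-estimates the true score, and over-estimates it by at most a small additive error with high probability (width about e/epsilon and depth about ln(1/delta) give error epsilon times the total mass with probability 1 - delta). A minimal sketch, with a simplified hash construction:

```python
import numpy as np

class CountMinSketch:
    """Approximate (label -> score) storage in O(width * depth) space,
    independent of the number of distinct labels."""
    def __init__(self, width=272, depth=5, seed=0):
        self.w, self.d = width, depth
        self.table = np.zeros((depth, width))
        rng = np.random.default_rng(seed)
        self.salts = rng.integers(1, 2**31, size=depth)  # one hash per row

    def _cols(self, label):
        return [hash((int(s), label)) % self.w for s in self.salts]

    def add(self, label, score=1.0):
        for row, col in enumerate(self._cols(label)):
            self.table[row, col] += score

    def query(self, label):
        # every row over-counts because of collisions; take the tightest
        return min(self.table[row, col]
                   for row, col in enumerate(self._cols(label)))

# e.g., one node's label distribution during propagation
sketch = CountMinSketch()
sketch.add("label_42", 0.7)
sketch.add("label_7", 0.3)
assert sketch.query("label_42") >= 0.7     # never under-estimates
```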

Joint work with William Cohen (CMU).

Mar 31
On The Algorithmic and System Interface of BIG LEARNING
Eric Xing, School of Computer Science, CMU

Abstract

In many modern applications built on massive data and using high-dimensional models, such as web-scale content extraction via topic models, genome-wide association mapping via sparse regression, and image understanding via deep neural networks, one needs to handle BIG machine learning problems that threaten to exceed the limit of current infrastructures and algorithms. While the ML community continues to strive for new scalable algorithms, and several attempts at developing new system architectures for BIG ML have emerged to address the challenge on the backend, a good dialogue between ML and systems research remains difficult --- most algorithmic research remains disconnected from the real systems/data it will face, and the generality, programmability, and theoretical guarantees of most systems on ML programs remain largely unclear. In this talk, I will present Petuum -- a general-purpose framework for distributed machine learning, and demonstrate how innovations in scalable algorithms and distributed systems design work in concert to achieve multiple orders of magnitude of scalability on a modest cluster for a wide range of large-scale problems in social networks (mixed-membership inference on 40M nodes), personalized genome medicine (sparse regression on 100M dimensions), and computer vision (classification over 20K labels), with provable guarantees on the correctness of distributed inference.

Mar 24
Recovering Block-structured Activations Using Compressive Measurements
Mladen Kolar, Booth School of Business, University of Chicago

Abstract

In recent years, with the advancement of large-scale data acquisition technology in various engineering, scientific, and socio-economic domains, traditional machine learning and statistical methods have started to struggle with massive amounts of increasingly high-dimensional data. Luckily, in many problems there is a simple structure underlying high-dimensional data that can be exploited to make learning feasible.

In this talk, I will focus on the problem of detection and localization of a contiguous block of weak activation in a large matrix, from a small number of noisy, possibly adaptive, compressive measurements. This is closely related to the problem of compressed sensing, where the task is to estimate a sparse vector using a small number of linear measurements. Contrary to results in compressed sensing, where it has been shown that neither adaptivity nor contiguous structure help much, we show that for reliable localization the magnitude of the weakest signals is strongly influenced by both structure and the ability to choose measurements adaptively, while for detection neither adaptivity nor structure reduce the requirement on the magnitude of the signal. We characterize the precise tradeoffs between the various problem parameters, the signal strength and the number of measurements required to reliably detect and localize the block of activation. The sufficient conditions are complemented with information theoretic lower bounds.
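
As a rough formalization of the setup (the notation here is mine, not necessarily the talk's): the unknown matrix M is zero except for a weak constant level mu on an unknown contiguous block B of rows and columns, and each measurement, possibly chosen adaptively, returns a noisy linear functional of M:

```latex
y_i = \langle A_i, M \rangle + \epsilon_i, \qquad
\epsilon_i \sim \mathcal{N}(0, \sigma^2), \qquad
M_{uv} = \mu \, \mathbf{1}\{(u,v) \in B\}.
```

Detection asks whether mu is nonzero at all; localization asks for the block B itself.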

Joint work with Sivaraman Balakrishnan, Alessandro Rinaldo and Aarti Singh.

Mar 17
Conditional Random Field Autoencoders for Unsupervised NLP
Chris Dyer, School of Computer Science, CMU

Abstract

Human language is the result of cognitive processes whose contours are---at best---incompletely understood. Given the incomplete information we have about the processes involved, the frequently disappointing results obtained from attempts to use unsupervised learning to uncover latent linguistic structures (e.g., part-of-speech sequences, syntax trees, or word alignments in parallel data) can be attributed---in large part---to model misspecification.

This work introduces a novel framework for unsupervised learning of structured predictors with overlapping, global features. Each input's latent representation is predicted conditional on the observable data using a feature-rich conditional random field. Then a reconstruction of the input is generated, conditional on the latent structure, as drawn from cheaply-estimated multinomials. The autoencoder structure enables efficient inference without unrealistic independence assumptions, enabling us to incorporate the often conflicting, overlapping theories (in the form of hand-crafted features) about how latent structures relate to observed data in a coherent model. We contrast our approach with traditional joint unsupervised models that are learned to maximize the marginal likelihood of observed data. We show competitive results with instantiations of the model for two canonical NLP tasks: part-of-speech induction and bitext word alignment, and show that training our model is substantially more efficient than training feature-rich models.
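
Schematically (the notation is approximate), training maximizes the probability of reconstructing the input through the latent structure, with a feature-rich CRF on the encoding side and cheap multinomials on the reconstruction side:

```latex
\max_{\Lambda,\,\theta} \;\; \sum_{n=1}^{N} \log \sum_{\mathbf{y}}
p_{\mathrm{CRF}}(\mathbf{y} \mid \mathbf{x}_n; \Lambda)
\prod_{i} p(\hat{x}_{n,i} \mid y_{n,i}; \theta).
```

Because the reconstruction term factors over positions given the latent sequence, it folds into the CRF's local potentials, and inference stays as tractable as for the CRF alone.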

This is joint work with Waleed Ammar and Noah Smith.

Mar 10
More effective distributed ML via a stale synchronous parallel parameter server
Qirong Ho, School of Computer Science, CMU

Abstract

Modern applications awaiting next generation machine intelligence systems have posed unprecedented scalability challenges. These scalability needs arise from at least two aspects: 1) massive data volume, such as societal-scale social graphs with up to hundreds of millions of nodes; and 2) massive model size, such as the Google Brain deep neural network containing billions of parameters. Although there exist means and theories to support reductionist approaches like subsampling data or using small models, there is an imperative need for sound and effective distributed ML methodologies for users who cannot be well-served by such shortcuts. To this end, we propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model's values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes.
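
The SSP rule itself is compact: a worker on iteration c may compute from cached values as long as no other worker has completed fewer than c - s iterations, where s is the staleness bound (s = 0 recovers fully synchronous execution). A minimal illustration of just the clock logic (a hypothetical class, with no actual parameter storage):

```python
import threading

class SSPClock:
    """Stale Synchronous Parallel in one rule: block a worker only when the
    slowest worker lags it by more than the staleness bound s."""
    def __init__(self, n_workers, staleness):
        self.clocks = [0] * n_workers
        self.s = staleness
        self.cond = threading.Condition()

    def tick(self, worker):
        # called when `worker` finishes one iteration
        with self.cond:
            self.clocks[worker] += 1
            self.cond.notify_all()

    def wait_until_readable(self, worker):
        # reads from the local cache are permitted once this returns
        with self.cond:
            while min(self.clocks) < self.clocks[worker] - self.s:
                self.cond.wait()
```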

Feb 24
Spectral Probabilistic Modeling of Natural Language
Ankur Parikh, School of Computer Science, CMU

Abstract

Being able to effectively model latent structure in data is a key challenge in modern AI research, particularly in Natural Language Processing (NLP) where it is crucial to discover and leverage syntactic and semantic relationships that may not be explicitly annotated in the training set. Unfortunately, while incorporating latent variables to represent hidden structure can substantially increase representation power, the key problems of model design and learning become significantly more complicated. For example, unlike fully observed models, latent variable models can suffer from non-identifiability, making it difficult to distinguish the desired latent structure from the others. Moreover, learning is usually formulated as a non-convex optimization problem, leading to the use of local search methods that may become trapped in local optima.

In this talk, we take a different perspective and approach two key problems in NLP, unsupervised parsing and language models, through the lens of linear algebra. By exploiting the connection between hidden variables and low rank factorization, we propose a method for unsupervised constituent parsing that has theoretical guarantees on latent structure recovery. Empirically our approach compares favorably to the Constituent Context Model of Klein and Manning (2002) without the need for careful initialization.

In our ongoing work in language modeling, we leverage matrix factorization to generalize existing n-gram language models to non-integer n. Our method includes existing smoothing methods such as Absolute Discounting and Kneser-Ney as special cases, and gives us noticeable improvements in perplexity over state-of-the-art Kneser-Ney baselines.

This is joint work with Shay Cohen, Avneesh Saluja, Chris Dyer, and Eric Xing.

Feb 17
L1 optimization beyond quadratic loss: algorithms and applications in graphical models, control, and energy systems
Zico Kolter, School of Computer Science, CMU

Abstract

In this talk I will try to convince all the fervent believers in proximal gradient methods, ADMM, or coordinate descent that there is a better method for optimizing general (smooth) objectives with an L1 penalty: Newton coordinate descent. The method is the current state-of-the-art in tasks like sparse inverse covariance estimation, and I will highlight two examples from my group's work that use this approach to achieve substantial speedups over existing algorithms. In particular, I will discuss how we use this algorithm to learn sparse Gaussian conditional random field models (applied to energy forecasting), and to design sparse optimal control laws (applied to distributed control in a smart grid).
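
For readers unfamiliar with the method, here is a stripped-down sketch of the idea (a generic proximal-Newton variant, not the talk's tuned implementation): repeatedly form the second-order model of the smooth loss, minimize the model plus the L1 term over the Newton direction by coordinate descent with soft-thresholding, and step. Production solvers add a line search and exploit Hessian structure; both are omitted here.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def newton_cd(grad, hess, x0, lam, outer=20, inner=10):
    """Sketch of Newton coordinate descent for min L(x) + lam*||x||_1."""
    x = x0.copy()
    for _ in range(outer):
        g, H = grad(x), hess(x)
        d = np.zeros_like(x)              # Newton direction
        for _ in range(inner):            # coordinate descent on the model
            for j in range(len(x)):
                c = g[j] + H[j] @ d - H[j, j] * d[j]
                d[j] = soft(x[j] - c / H[j, j], lam / H[j, j]) - x[j]
        x = x + d                         # unit step; line search omitted
    return x

# example smooth loss: average logistic loss, labels y in {0, 1}
def make_logistic(A, y):
    def grad(x):
        p = 1.0 / (1.0 + np.exp(-A @ x))
        return A.T @ (p - y) / len(y)
    def hess(x):
        p = 1.0 / (1.0 + np.exp(-A @ x))
        return (A.T * (p * (1 - p))) @ A / len(y)
    return grad, hess
```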

Feb 13
Untitled
Glenn Fung, Amazon

Abstract

Being part of the Machine Learning team at Amazon is one of the most exciting engineering job opportunities in the world today. Our Machine Learning (ML) team comprises technical leaders with different backgrounds who create and develop novel and infinitely-scalable applications that optimize Amazon’s systems using cutting edge machine learning techniques. We develop innovative algorithms that model patterns within data to drive automated decisions at scale in all corners of the company, including our e-Commerce site and subsidiaries, Amazon Web Services, Seller and Buyer Services and Digital Media including Kindle. In this talk, I will give you an overview of some of the technical challenges we face and I will present some examples of ML-based solutions that had a significant impact inside the company.

Glenn Fung received a B.S. in pure mathematics from Universidad Lisandro Alvarado in Barquisimeto, Venezuela. He then earned an M.S. in applied mathematics from Universidad Simon Bolivar, Caracas, Venezuela, where he later worked as an assistant professor for two years. He also earned an M.S. degree and a Ph.D. degree in computer sciences from the University of Wisconsin-Madison. His main interests are optimization approaches to machine learning and data mining, with emphasis in kernel methods. In the summer of 2003 he joined the computer aided diagnosis group at Siemens Healthcare in Malvern, PA, where he worked for 10 years developing and applying novel machine learning techniques to solve challenging problems that arise in the medical domain. In September of 2013 he joined Amazon, where he has been working on applying novel machine learning techniques to solve challenging problems that arise in e-commerce retail.

Feb 10
Large Scale Inference of Determinantal Point Processes (DPPs)
Raja Hafiz Affandi, University of Pennsylvania

Abstract

Determinantal Point Processes (DPPs) are random point processes well-suited for modelling repulsion. In machine learning and statistics, DPPs are a natural model for subset selection problems where diversity is desired. For example, they can be used to select diverse sets of sentences to form document summaries, or to return relevant but varied text and image search results, or to detect non-overlapping multiple object trajectories in video. Among many remarkable properties, they offer tractable algorithms for exact inference, including computing marginals, computing conditional probabilities, and sampling. In our recent work, we extended these algorithms to approximately infer non-linear DPPs defined over a large amount of data, as well as DPPs defined on continuous spaces using low-rank approximations. We demonstrated the advantages of our models on several machine learning and statistical tasks: motion capture video summarization, repulsive mixture modelling and synthesizing diverse human poses. Given time, I will also briefly touch on our other related works, such as extending DPPs into a temporal process that sequentially selects multiple diverse subsets across time, and how we go about learning the parameters of a DPP kernel. This is joint work with Emily Fox, Ben Taskar and Alex Kulesza.
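
The tractability is worth unpacking with the one identity that powers it: if L is the DPP kernel, the marginal kernel K = L(L + I)^(-1) gives every inclusion probability as a determinant, P(A ⊆ Y) = det(K_A). A small numeric illustration of the induced repulsion:

```python
import numpy as np

def dpp_marginal_kernel(L):
    # K = L (L + I)^(-1); then P(A subset of Y) = det(K_A)
    return L @ np.linalg.inv(L + np.eye(L.shape[0]))

# toy similarity kernel over 4 items on a line; nearby items are similar
pts = np.array([0.0, 0.1, 1.0, 1.1])
L = np.exp(-(pts[:, None] - pts[None, :]) ** 2 / 0.5)
K = dpp_marginal_kernel(L)

p_both = np.linalg.det(K[np.ix_([0, 1], [0, 1])])  # P(items 0 and 1 co-occur)
# repulsion: the similar pair co-occurs less often than independence predicts
assert p_both < K[0, 0] * K[1, 1]
```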

Feb 03
Learning with a Drifting Target Concept
Liu Yang, School of Computer Science, CMU

Abstract

We study the problem of learning in the presence of a drifting target concept. Specifically, we provide bounds on the expected number of mistakes on a sequence of i.i.d. points, labeled according to a target concept that can change by a given amount on each round. Some of the results also describe an active learning variant of this setting, and provide bounds on the number of queries for the labels of points in the sequence sufficient to obtain the stated bounds on the number of mistakes.

This is joint work with Steve Hanneke and Varun Kanade.

Jan 27
Oblivious bounds on the probability of Boolean functions and the connection to query evaluation over probabilistic databases
Wolfgang Gatterbauer, Tepper School of Business, CMU

Abstract

We develop upper and lower bounds for the probability of Boolean functions by treating multiple occurrences of variables as independent and assigning them new individual probabilities. We call this approach dissociation and give an exact characterization of optimal oblivious bounds, i.e. when the new probabilities are chosen independent of the probabilities of all other variables. Our motivation comes from the weighted model counting problem (or, equivalently, the problem of computing the probability of a Boolean function), which is #P-hard in general. By performing several dissociations, one can transform a Boolean formula whose probability is difficult to compute, into one whose probability is easy to compute, and which is guaranteed to provide an upper or lower bound on the probability of the original formula by choosing appropriate probabilities for the dissociated variables. Our new bounds shed light on the connection between previous relaxation-based and model-based approximations and unify them as concrete choices in a larger design space. In the second part of the talk, we focus on the problem of query evaluation over probabilistic databases. We show how our theory allows a standard relational database management system (DBMS) to both upper and lower bound hard probabilistic queries in guaranteed polynomial time.
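
A tiny worked example (my numbers, with one valid pair of oblivious probability choices of the kind the abstract describes, for a variable occurring positively in two clauses):

```python
from itertools import product

def exact(px, py, pz):
    # P[(x ∧ y) ∨ (x ∧ z)] for independent x, y, z, by enumeration
    total = 0.0
    for x, y, z in product([0, 1], repeat=3):
        pr = (px if x else 1 - px) * (py if y else 1 - py) * (pz if z else 1 - pz)
        total += pr * float((x and y) or (x and z))
    return total

def dissociated(q1, q2, py, pz):
    # copies x', x'' make the formula read-once: (x' ∧ y) ∨ (x'' ∧ z)
    return 1 - (1 - q1 * py) * (1 - q2 * pz)

px, py, pz = 0.5, 0.6, 0.7
upper = dissociated(px, px, py, pz)          # keep q = px on both copies
qlo = 1 - (1 - px) ** 0.5                    # split so (1 - q)^2 = 1 - px
lower = dissociated(qlo, qlo, py, pz)
assert lower <= exact(px, py, pz) <= upper   # 0.345 <= 0.440 <= 0.545
```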

2013 Fall Semester
Dec 2
Ndapandula Nakashole, Machine Learning Department, CMU

Abstract

Equipping machines with knowledge, through the construction of machine-readable knowledge bases, presents a key asset for semantic search, question answering and other applications. In this talk, I will present methods for knowledge base construction. The first is a method for fact extraction. The second is a method for generating a large collection of binary relations organized by synonym sets and into a taxonomy based on relation subsumptions. The third is a method for discovering and semantically typing new entities as they emerge in dynamic Web sources such as news articles and social media.

Nov 25
Crowd Agents: A Top-Down Approach to Intelligent Interactive Systems
Jeffrey P. Bigham, Human-Computer Interaction Institute, CMU

Abstract

Over the past few years, I have been developing and deploying interactive crowd-powered systems that solve characteristic “hard” problems to help people get things done in their everyday lives. For instance, VizWiz answers visual questions for blind people in less than a minute, Legion drives robots in response to natural language commands, Chorus holds helpful conversations with human partners, and Scribe converts streaming speech to text in less than five seconds.

The future envisioned by my research is one in which the intelligent systems that we have dreamed about for decades, and that have inspired generations of computer scientists since the field's beginning, are brought about for the benefit of people. My work illustrates a path for achieving this vision by leveraging the on-demand labor of people to fill in for components that we cannot currently automate, and by building frameworks that allow groups to do together what even expert individuals cannot do alone. A crowd-powered world may seem counter to the goals of computer science, but I believe that it is precisely by creating and deploying the systems of our dreams that we will learn how to advance computer science to create the machines that will someday realize them.

Nov 18
Spectral Robotics
Byron Boots, University of Washington

Abstract

A fundamental challenge in robotics is the so-called "critter problem": a robot, capable of performing actions and receiving observations, is placed in an unknown environment. The robot has no interpretation for its actions or observations and no knowledge of the structure of the environment. The problem is to program the robot to learn about its observations, actions, and environment well enough to make predictions of future observations given sequences of actions.

This modeling problem is especially challenging given the wide variety of sensors, actuators, and domains that are encountered in modern robotics. As a result, researchers have largely abandoned the critter problem and have instead focused on leveraging extensive domain knowledge to develop special-purpose tools for special cases of the problem: e.g., system identification to learn Kalman filters, body schema learning for manipulators, structure-from-motion in vision, or the many methods for simultaneous localization and mapping.

In this talk I will revisit the original critter problem from a modern machine learning perspective. I will discuss how spectral learning algorithms can unify disparate tools and special cases encountered in different sub-areas of robotics into a single general-purpose toolkit. Finally, I will show how spectral methods have achieved state-of-the-art performance on real-world robotics problems in several different domains.

Nov 11
Bayes net models of human reasoning
Charles Kemp, Department of Psychology, Carnegie Mellon University

Abstract

Bayes nets are not only useful for developing AI systems, but can also help to explain how humans reason under uncertainty. I will present a Bayes net framework for reasoning about multiple causal systems and will apply it to two aspects of human reasoning. The first application considers how people make inferences that depend on both causal relationships between features (e.g. animals with wings often fly) and similarity relationships between objects (e.g. eagles and hawks are rather similar). The second application explores how people make counterfactual inferences, or inferences about scenarios that differ from the real world in some respect.

Charles Kemp is an associate professor in CMU's psychology department. His research focuses on probabilistic models of human learning and inference.

Nov 4
Robot Motion for Seamless Human-Robot Collaboration
Anca Dragan, Robotics Institute, Carnegie Mellon University

Abstract

In this talk, I will be summarizing our work on enabling robots to produce motion that is suitable for human-robot collaboration and co-existence. Most motion in robotics is purely functional: industrial robots move to package parts, vacuuming robots move to suck dust, and personal robots move to clean up a dirty table. This type of motion is ideal when the robot is performing a task in isolation. Collaboration, however, does not happen in isolation. In collaboration, the robot's motion has an observer, watching and interpreting the motion. In this work, we move beyond functional motion, and introduce the notion of an observer into motion planning, so that robots can generate motion that is mindful of how it will be interpreted by a human collaborator. We formalize predictability and legibility as properties of motion that naturally arise from the inferences that the observer makes, drawing on action interpretation theory in psychology. We propose models for these inferences based on the principle of rational action, and use a combination of constrained trajectory optimization and machine learning techniques to enable robots to plan motion for collaboration.

Bio: Anca Dragan is a PhD candidate at Carnegie Mellon's Robotics Institute, and a member of the Personal Robotics Lab. She was born in Romania and received her B.Sc. in Computer Science from Jacobs University Bremen in 2009. Her research lies at the intersection of robotics, machine learning, and human-computer interaction: she is interested in enabling robots to seamlessly work with and around people. Anca is an Intel PhD Fellow for 2013, a Google Anita Borg Scholar for 2012, and serves as General Chair in the Quality of Life Technology Center's student council.

Oct 28
Training nested functions using auxiliary coordinates
Miguel Á. Carreira-Perpinan, University of California, Merced

Abstract

Many models in machine learning, computer vision or speech processing have the form of a sequence of nested, parameterized functions, such as a multilayer neural net, an object recognition pipeline, or a "wrapper" for feature selection. Joint estimation of the parameters of all the layers and selection of an optimal architecture is widely considered to be a difficult numerical nonconvex optimization problem, difficult to parallelize for execution in a distributed computation environment, and requiring significant human expert effort, which leads to suboptimal systems in practice. We describe a general mathematical strategy to learn the parameters and, to some extent, the architecture of nested systems, called the method of auxiliary coordinates (MAC). MAC has provable convergence, is easy to implement reusing existing algorithms for single layers, can be parallelized trivially and massively, applies even when parameter derivatives are not available or not desirable (so computing gradients with the chain rule does not apply), and is competitive with state-of-the-art nonlinear optimizers even in the serial computation setting, often providing reasonable models within a few iterations.
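
In rough notation (mine, and restricted to the quadratic-penalty form): for a nested model f(x) = f_K(... f_2(f_1(x; W_1); W_2) ...; W_K), MAC introduces an auxiliary coordinate z_{k,n} for each training point n and each intermediate layer output, and minimizes

```latex
E(\mathbf{W}, \mathbf{Z}; \mu) =
\sum_{n=1}^{N} \big\| \mathbf{y}_n - \mathbf{f}_K(\mathbf{z}_{K-1,n}; \mathbf{W}_K) \big\|^2
+ \mu \sum_{n=1}^{N} \sum_{k=1}^{K-1}
\big\| \mathbf{z}_{k,n} - \mathbf{f}_k(\mathbf{z}_{k-1,n}; \mathbf{W}_k) \big\|^2,
\qquad \mathbf{z}_{0,n} \equiv \mathbf{x}_n.
```

Alternating minimization then decouples the problem: with Z fixed, each layer's W_k is an independent single-layer fit (trivially parallel over layers); with W fixed, each point's coordinates solve a small independent problem; driving mu upward recovers the original nested objective.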

If time permits, I will illustrate how to use MAC to derive training algorithms for a range of problems, such as deep nets, best-subset feature selection, joint dictionary and classifier learning, supervised dimensionality reduction, and others.

This is joint work with Weiran Wang.

BIOGRAPHY: Miguel Á. Carreira-Perpinan is an associate professor in Electrical Engineering and Computer Science at the University of California, Merced. He received the degree of "licenciado en informática" (MSc in computer science) from the Technical University of Madrid in 1995 and a PhD in computer science from the University of Sheffield in 2001. Prior to joining UC Merced, he did postdoctoral work at Georgetown University (in computational neuroscience) and the University of Toronto (in machine learning), and was an assistant professor at the Oregon Graduate Institute (Oregon Health and Science University). He is the recipient of an NSF CAREER award, a Google Faculty Research Award and a best student paper award at Interspeech. He is an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence and an area chair for NIPS. His research interests lie in machine learning, in particular unsupervised learning problems such as dimensionality reduction, clustering and denoising, with an emphasis on optimization aspects, and with applications to speech processing (e.g. articulatory inversion and model adaptation), computer vision, sensor networks and other areas.

Oct 21
Machine Learning for the Computational Humanities
David Bamman, Language Technologies Institute

Abstract

While machine learning plays a significant role in the community of practice known as "computational social science", one area ripe for interdisciplinary work but fraught with its own challenges is the humanities, which encompass such domains as English, Literary Studies, History and Archaeology (among many others). These areas have a long history of engaging with quantitative and computational methods (pre-dating modern notions of the "digital humanities"), and offer a fascinating, complex proving ground for classic ML problems of learning and inference.

In this talk, I will discuss recent and ongoing work into two probabilistic latent variable models that fall in this domain: the first is a model for inferring character types (or "personas") in text, where a "persona" is defined as a set of mixtures over fine-grained latent lexical classes. These lexical classes capture the stereotypical actions of which a character is the agent and patient (villains "kill" and "are foiled"), as well as the attributes by which they are described (e.g., "evil"); I present results applying this model to a collection of movie plot summaries.

The second model addresses the problem of jointly inferring the identity and social rank of members of an Old Assyrian trade network from the 2nd millennium BCE, leveraging evidence in the form of local, partial ranks over observed name mentions in cuneiform tablets to learn a global rank over (latent) individuals.

Researchers from both CS and the humanities are welcome.

Oct 14
On the Computational and Statistical Interface and "Big Data"
Michael I. Jordan, University of California, Berkeley (Rango 2, University Center)

Abstract

The rapid growth in the size and scope of datasets in science and technology has created a need for novel foundational perspectives on data analysis that blend the statistical and computational sciences. That classical perspectives from these fields are not adequate to address emerging problems in "Big Data" is apparent from their sharply divergent nature at an elementary level---in computer science, the growth of the number of data points is a source of "complexity" that must be tamed via algorithms or hardware, whereas in statistics, the growth of the number of data points is a source of "simplicity" in that inferences are generally stronger and asymptotic results can be invoked. Indeed, if data are a data analyst's principal resource, why should more data be burdensome in some sense? Shouldn't it be possible to exploit the increasing inferential strength of data at scale to keep computational complexity at bay? I present three research vignettes that pursue this theme, the first involving the deployment of resampling methods such as the bootstrap on parallel and distributed computing platforms, the second involving large-scale matrix completion, and the third introducing a methodology of "algorithmic weakening," whereby hierarchies of convex relaxations are used to control statistical risk as data accrue.
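
On the first vignette, the flavor can be conveyed in a few lines. In the Bag of Little Bootstraps (due to Kleiner, Talwalkar, Sarkar, and Jordan), each machine works on a small subsample of size b = n^0.6, yet simulates resamples of the full size n through multinomial weights, so per-machine cost scales with b while the statistical target remains the full-data estimator. A simplified sketch:

```python
import numpy as np

def bag_of_little_bootstraps(data, estimator, n_subsets=10, b_exp=0.6,
                             n_boot=50, seed=0):
    """Average, over small subsets, the bootstrap standard error of an
    estimator at the FULL sample size; `estimator(points, weights)` must
    accept multinomial weights summing to n."""
    rng = np.random.default_rng(seed)
    n = len(data)
    b = int(n ** b_exp)
    ses = []
    for _ in range(n_subsets):                     # one subset per machine
        subset = data[rng.choice(n, size=b, replace=False)]
        stats = [estimator(subset, rng.multinomial(n, np.ones(b) / b))
                 for _ in range(n_boot)]           # size-n resamples, cost b
        ses.append(np.std(stats))
    return np.mean(ses)

# e.g., standard error of the mean of 10,000 points (theory: ~0.01)
weighted_mean = lambda x, w: np.average(x, weights=w)
data = np.random.default_rng(1).normal(size=10_000)
print(bag_of_little_bootstraps(data, weighted_mean))
```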

[Joint work with Venkat Chandrasekaran, Ariel Kleiner, Lester Mackey, Purna Sarkar, and Ameet Talwalkar].

Oct 07
Extracting Knowledge from Informal Text
Alan Ritter, Machine Learning Department

Abstract

The internet has revolutionized the way we communicate, leading to a constant flood of informal text available in electronic format, including email, Twitter, SMS, and the clinical text found in electronic medical records. This presents a big opportunity for Natural Language Processing (NLP) and Information Extraction (IE) technology to enable new large scale data-analysis applications by extracting machine-processable information from unstructured text at scale.

In this talk I will discuss several challenges and opportunities which arise when applying NLP and IE to informal text, focusing specifically on Twitter, which has recently risen to prominence, challenging the mainstream news media as the dominant source of real-time information on current events. I will describe several NLP tools we have adapted to handle Twitter’s noisy style, and present a system which leverages these to automatically extract a calendar of popular events occurring in the near future.

I will further discuss fundamental challenges which arise when extracting meaning from such massive open-domain text corpora. Several probabilistic latent variable models will be presented, which are applied to infer the semantics of large numbers of words and phrases and also enable a principled and modular approach to extracting knowledge from large open-domain text corpora.

Sept 30
Discovering Vocabulary in Clinical Temporal Data with Application to Tracking Illness Severity in Infants
Suchi Saria, Johns Hopkins University

Abstract

Large amounts of data are routinely collected in the Electronic Medical Record, yet their use in informing patient care is limited to manual assessment by the caregivers. In this talk, we discuss probabilistic approaches motivated by clinical practice for analyzing continuously measured physiologic data. We develop our models on data collected from instrumenting a neonatal intensive care unit. Based on insights derived from modeling this data, we tackle the application of risk stratification. As part of routine care, every infant at birth is risk stratified based on the Apgar score. We develop a cheap, non-invasive and simple risk stratification tool using markers from physiologic data for predicting infants at risk, dubbed by Science News as the modern electronic Apgar.

Bio: Suchi Saria is an Assistant Professor at Johns Hopkins University within the Schools of Engineering and Public Health. She received her PhD in machine learning from Stanford University. Her research interests span computational modeling of diverse, large temporal data, and in particular those from sensing devices and the electronic health record for improving patient management. Her work on predictive modeling for clinical data from infants has been covered by numerous national and international press sources. She is the recipient of multiple awards including a best student paper award, a best paper finalist, Microsoft Full Scholarships, a Rambus Corporation fellowship, an NSF Computing Innovation fellowship, and a Gordon and Betty Moore foundation award.

Sept 23
What does AdaBoost tell us about regularization?
Matus Telgarsky, Rutgers University / Microsoft Research NYC

Abstract

Regularized ERM is the engine driving numerous machine learning techniques such as SVMs and the LASSO, and is to thank for many modern results on fast optimization, model recovery, and noise tolerance. How then is it that the popular algorithm AdaBoost not only corresponds to an unregularized ERM problem, but moreover AdaBoost's optimization problem fails many of the basic sanity checks --- e.g., existence of minimizers --- provided by regularization?

This talk bridges this apparent rift by presenting the two theoretical guarantees substantiating the strong performance of AdaBoost --- convergence to the Bayes predictor in the general case, and fast convergence in the margin / weak-learnable case --- in a way that emphasizes the connection to regularization. In particular, in lieu of an algorithm-specified regularization parameter, the data itself constrains the behavior of the algorithm.

(Technical note: some results will focus on Lipschitz losses (e.g., the logistic loss), but the exponential loss will be discussed throughout.)

Sept 16
OpenEval: Web Information Query Evaluation
Mehdi Samadi, School of Computer Science, Carnegie Mellon University

Abstract

In the first part of this talk, we investigate information validation tasks that are initiated as queries from either automated agents or humans. We introduce OpenEval, a new online information validation technique, which uses information on the web to automatically evaluate the truth of queries that are stated as multi-argument predicate instances (e.g., DrugHasSideEffect(Aspirin, GI Bleeding)). OpenEval gets a small number of instances of a predicate as seed positive examples and automatically learns how to evaluate the truth of a new predicate instance by querying the web and processing the retrieved unstructured web pages. We show that OpenEval is able to respond to the queries within a limited amount of time while also achieving high F1 score. In addition, we show that the accuracy of responses provided by OpenEval is increased as more time is given for evaluation.

In the second part of the talk, we explain how OpenEval can be used to provide knowledge to anytime intelligent agents, in particular for a find-deliver task in a real mobile robot (CoBot), for a trip planner agent, and for the knowledge-on-demand system which is part of the NELL project.

These projects are joint work with Manuela Veloso, Manuel Blum, Thomas Kollar, and Tom Mitchell.

Sept 09
Approximation of metric data by linear structures
Frederic Chazal, Geometrica (INRIA)

Abstract

In many real-world applications, data come as discrete metric spaces sampled around 1-dimensional linear structures that can be seen as metric trees or graphs. In this talk we will consider the reconstruction problem of such filamentary structures from data sampled around them. We will present two reconstruction methods. The first one comes with topological and metric guarantees but relies on parameter choices that might be tricky. The second one comes with metric guarantees but is much less sensitive to parameter choices and can be very efficiently implemented.

These are joint works with M. Aanjaneya, F. Chazal, D. Chen, M. Glisse, L. J. Guibas, D. Morozov (Stanford University) and with Jian Sun (Mathematical Sciences Center, Tsinghua University).

2013 Spring Semester
May 13
A Bilinear Model for Text Regression
Daniel Preotiuc-Pietro, University of Sheffield, U.K.

Abstract

Models of prediction from text have seen an increase in interest with the rise of Social Media. Applications range from predicting political sentiment and financial indicators to forecasting flu rates. Most of the approaches treat the problem as linear regression, learning to relate word frequencies to the response variable.

In this talk I will present a bilinear approach to text regression with a sparse regulariser. This model learns a sparse representation of two distinct sets of variables, in our case the words and the users. Our method is inspired by the fact that most Social Media users' posts are not indicative of our prediction task. Further, we explore a multi-task learning approach in order to exploit the relationship between the output variables.
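
In symbols (my notation, one plausible way to write the setup): with X_t the user-by-word frequency matrix for time period t and y_t the response, the model learns user weights u and word weights w jointly,

```latex
\min_{\mathbf{u}, \mathbf{w}} \; \sum_{t} \big( y_t - \mathbf{u}^{\top} X_t \, \mathbf{w} \big)^2
+ \psi_1(\mathbf{u}) + \psi_2(\mathbf{w}),
```

where psi_1 and psi_2 are sparsity-inducing regularizers. The objective is biconvex: fixing u makes it a sparse linear regression in w and vice versa, which suggests simple alternating optimization.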

May 06
Uniformity over the Integers
Michael Spece, Machine Learning Department, Carnegie Mellon University

Abstract

Uniformity is a parsimonious description of randomness. Unfortunately, a probability measure can come nowhere close to approximating uniformity on unbounded sets. This talk takes up that problem in the setting of the integers. By replacing the countable additivity axiom of probability with finite additivity, one can assign to each integer the same probability. Because assigning to each integer the same probability does not uniquely define a uniform distribution, various additional properties have been proposed. In this talk, three such properties will be reviewed, and a new one introduced. It turns out these notions can be ordered so that one implies the next. While stronger notions may be required to attain uniqueness for certain applications, the weaker notions have the virtue of being more parsimonious and interpretable. Moreover, stronger properties may be decomposed into weaker ones, and the weaker notions may have special number theoretic properties. For instance, the family of uniform distributions introduced in this talk may assign positive probability to the set of prime numbers.

April 29
Near-Optimal Regret Bounds for Reinforcement Learning in Undiscounted Markov Decision Processes with Policy Advice
Mohammad Azar, School of Computer Science, Carnegie Mellon University

Abstract

In some reinforcement learning problems an agent may be provided with a set of input policies, perhaps learned from prior experience or provided by advisors. We present a reinforcement learning with policy advice (RLPA) algorithm which leverages this input set and learns to use the best policy in the set for the reinforcement learning task at hand. We prove that RLPA has a sublinear regret of $\widetilde O(\sqrt{T})$ relative to the best input policy, and that both this regret and its computational complexity are independent of the size of the state and action space. Our empirical simulations support our theoretical analysis. This suggests RLPA may offer significant advantages in large domains where some prior good policies are provided.

April 22
Processing, Analyzing and Mining Big Graph Data Charalampos TsourakakisCarnegie Mellon UniversityDepartment of Mathematical Sciences

Abstract

Nowadays, the scale of graph data that needs to be processed is massive. For example, in the context of online services, the Web graph amounts to at least one trillion links, and Facebook recently reported more than 1 billion users and 140 billion friend connections.

In the first part of the talk, we will discuss the balanced graph partitioning problem in the context of big dynamic graph data, a key problem for enabling the efficient solution of a wide range of computational tasks. There exist two widely used families of heuristics for graph partitioning in the streaming setting: place the newly arrived vertex in the cluster with the largest number of neighbors, or in the cluster with the least number of non-neighbors. We will present a framework that unifies these two seemingly orthogonal approaches and allows us to interpolate between them, obtaining superior performance. Surprisingly, this performance is even comparable to non-streaming algorithms. For instance, for the Twitter graph with more than 1.4 billion edges, our method partitions the graph in about 40 minutes, achieving a balanced partition that cuts as few as 6.8% of edges, whereas METIS took more than 8.5 hours to produce a balanced partition that cuts 11.98% of edges. Finally, we provide an O(log k / k)-approximation algorithm and we show experimentally using Apache Giraph that the partitions we obtain result in significant gains in terms of communication cost and runtime.
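
As a toy illustration of the two streaming heuristics and an interpolation between them, here is a hedged Python sketch; the scoring function, the balance term, and the parameter names are illustrative assumptions rather than the talk's actual framework.

    # Toy sketch of streaming vertex partitioning. The score interpolates between
    # "most neighbors" (lam=0) and "fewest non-neighbors" (lam=1); a small
    # balance term keeps partition sizes even. Details of the real method differ.
    def stream_partition(vertices, neighbors, k, lam=0.5, gamma=1.5):
        parts = [set() for _ in range(k)]
        assign = {}
        for v in vertices:                       # vertices arrive one at a time
            nbrs = set(neighbors[v])
            def score(i):
                common = len(parts[i] & nbrs)    # neighbors already in part i
                non = len(parts[i]) - common     # non-neighbors in part i
                return (1 - lam) * common - lam * non - 1e-3 * len(parts[i]) ** gamma
            i = max(range(k), key=score)
            parts[i].add(v)
            assign[v] = i
        return assign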

In the second half of the talk we will discuss the problem of finding dense subgraphs in large-scale graphs. We introduce a general framework which subsumes popular density functions and provide theoretical insights into our framework. We introduce a special instance of our framework as a density function which favors small, dense, small-diameter subgraphs. We provide an additive approximation algorithm, and a multiplicative approximation algorithm with tight guarantees. We also develop applications of our method in data mining and bioinformatic tasks, such as forming a successful team of domain experts and finding highly correlated genes from a microarray dataset.
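
For readers unfamiliar with density-based subgraph mining, a standard baseline in this family is Charikar's greedy peeling algorithm for the average-degree objective, sketched below; the talk's density functions and guarantees go beyond this simple instance.

    # Greedy peeling for the average-degree densest subgraph (Charikar's
    # 2-approximation), a standard baseline; the talk's objectives differ.
    import heapq

    def densest_subgraph(adj):
        """adj: dict vertex -> set of neighbors (undirected graph)."""
        deg = {v: len(nbrs) for v, nbrs in adj.items()}
        alive = set(adj)
        m = sum(deg.values()) // 2
        heap = [(d, v) for v, d in deg.items()]
        heapq.heapify(heap)
        best, best_density = set(alive), m / max(len(alive), 1)
        while alive:
            d, v = heapq.heappop(heap)
            if v not in alive or d != deg[v]:
                continue                          # stale heap entry
            alive.remove(v)                       # peel the min-degree vertex
            m -= deg[v]
            for u in adj[v]:
                if u in alive:
                    deg[u] -= 1
                    heapq.heappush(heap, (deg[u], u))
            if alive and m / len(alive) > best_density:
                best, best_density = set(alive), m / len(alive)
        return best, best_density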

Joint work with Christos Gkantsidis (Microsoft Research), Bozidar Radunovic (Microsoft Research), Milan Vojnovic (Microsoft Research) and Francesco Bonchi (Yahoo! Research), Aris Gionis (Aalto University), Francesco Gullo (Yahoo! Research), Maria Tsiarli (University of Pittsburgh).

April 15
Learning an Internal Dynamics Model from Control Demonstration Matt GolubCarnegie Mellon UniversityElectrical and Computer Engineering

Abstract

Much work in optimal control and inverse control has assumed that the controller has perfect knowledge of plant dynamics. However, if the controller is a human or animal subject, the subject’s internal dynamics model may differ from the true plant dynamics. Here, we consider the problem of learning the subject’s internal model from demonstrations of control and knowledge of task goals. Due to sensory feedback delay, the subject uses an internal model to generate an internal prediction of the current plant state, which may differ from the actual plant state. We develop a probabilistic framework and exact EM algorithm to jointly estimate the internal model, internal state trajectories, and feedback delay. We applied this framework to demonstrations by a nonhuman primate of brain-machine interface (BMI) control. We discovered that the subject’s internal model deviated from the true BMI plant dynamics and provided a significantly better explanation of the recorded neural control signals than did the true plant dynamics.

April 8
Tracking Story Reading in the Brain Leila WehbeCarnegie Mellon UniversityMachine Learning Department

Abstract

Story comprehension is a rich and rapid phenomenon, requiring multiple simultaneous processes (e.g. letter recognition, word understanding, sentence parsing...). Our goal is to study how the brain processes this complex information, by modeling the fMRI brain activity during story reading at close to normal speed. This is a challenging goal, owing to the coarse time resolution of fMRI and the lack of a comprehensive model of word meaning composition. Classically, fMRI has been used to localize brain areas that process specific elements of text, e.g. which areas are involved in syntactic processing, but not to model how the brain represents different instances of these elements, e.g. how those areas represent different syntactic structures.

We created a generative model that predicts the fMRI activity observed when subjects read a complex story in which the words are presented serially, for 0.5 seconds each. We then performed an exploratory analysis in which we tested several types of story features (e.g. word length, syntax, semantics, story characters) to search for a good basis of features for story comprehension. We found different patterns of representation in the brain for different types of features, and these patterns align with predictions from the field. To test the validity of our model, we performed a classification task that decodes which passage of the story a time segment of brain activity corresponds to. The classification accuracy was significantly higher than chance (p < 10^-6). Our approach has the advantage of being flexible: any feature of language can be added to the model and tested, and features can range from simple perceptual features, to compositional semantics, to higher-order reasoning about narrative structure and story comprehension.
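
As a hedged sketch of how such an encoding/decoding analysis can be set up (the model in the talk is generative and richer), one can ridge-regress voxel activity on story features and identify held-out passages by matching predicted to observed activity; all names below are illustrative.

    # Hedged sketch of a feature-based encoding/decoding analysis: ridge-regress
    # voxel activity on story features, then identify which held-out passage a
    # brain image came from by matching predicted to observed activity.
    import numpy as np
    from sklearn.linear_model import Ridge

    def decode_passages(F_train, Y_train, F_test, Y_test, alpha=10.0):
        model = Ridge(alpha=alpha).fit(F_train, Y_train)   # features -> voxels
        Y_pred = model.predict(F_test)                     # one row per passage
        correct = 0
        for i, y in enumerate(Y_test):                     # nearest predicted passage
            sims = [np.corrcoef(y, yp)[0, 1] for yp in Y_pred]
            correct += int(np.argmax(sims) == i)
        return correct / len(Y_test)                       # chance = 1 / n_passages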

March 25
Theoretical Connections Between Convex Optimization and Active Learning Aaditya RamdasCarnegie Mellon UniversityMachine Learning Department

Abstract

Stochastic convex optimization (SCO) under the first order oracle model deals with approximately optimizing a convex function over a convex set, given access to noisy function and gradient values at any point, using as few queries as possible. Active learning of one-dimensional threshold (ALT) classifiers is a classification problem that deals with approximately locating a "threshold" on a subinterval of the real line (a point to the left of which labels are more likely to be negative, and to the right of which labels are more likely to be positive), given access to an oracle that returns these noisy labels, using as few queries as possible.

Exploiting the sequential nature of both problems, we establish a concrete similarity between the "Tsybakov Noise Condition" from ALT theory and the "Strong/Uniform Convexity Condition" from SCO theory, and show how information-theoretic lower-bound techniques from ALT can be used to get very similar lower-bound rates in SCO (and show these are tight with matching upper bounds). Time permitting, I will also show a kind of algorithmic reduction from SCO to ALT for strongly/uniformly convex functions, as well as a new adaptive ALT algorithm that was inspired by a recent adaptive SCO algorithm.

Mar 18
Algorithms for sparsity-constrained optimization Sohail BahmaniCarnegie Mellon UniversityDepartment of Electrical and Computer Engineering

Abstract

Estimation methods in high-dimensional linear models, as studied in compressed sensing and sparse linear regression, basically rely on minimization of a squared error subject to a sparsity constraint. In the first part of this talk we examine the problem of sparsity-constrained minimization of non-quadratic objective functions that can arise in models with non-linearities. We propose a greedy algorithm for these problems and prove accuracy guarantees for objectives that have a "stable restricted Hessian" or "stable restricted linearization".

In the second part of the talk, we inspect the non-convex $\ell_p$-constrained least squares. In particular, we obtain accuracy guarantees for the projected gradient descent method under the restricted isometry property. We further discuss the implication of the result and derive necessary conditions for projection onto an $\ell_p$ ball.
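
A minimal instance of projected gradient descent for sparsity-constrained least squares is iterative hard thresholding, sketched below; the talk's greedy algorithm and $\ell_p$-ball projections are more general, so this is only an illustrative baseline.

    # Minimal projected gradient descent for sparsity-constrained least squares
    # (iterative hard thresholding); an illustrative baseline, not the talk's method.
    import numpy as np

    def iht(A, y, s, step=None, n_iters=200):
        n = A.shape[1]
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / ||A||^2 step size
        x = np.zeros(n)
        for _ in range(n_iters):
            g = A.T @ (A @ x - y)                    # gradient of 0.5*||Ax - y||^2
            x = x - step * g
            keep = np.argsort(np.abs(x))[-s:]        # projection onto s-sparse set
            mask = np.zeros(n, dtype=bool)
            mask[keep] = True
            x[~mask] = 0.0
        return x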

Mar 4
Sigma-Optimality for Active Learning on Gaussian Random Fields Yifei MaCarnegie Mellon UniversityMachine Learning Department

Abstract

A common classifier for unlabeled nodes on undirected graphs uses label propagation from the labeled nodes, a.k.a. the harmonic predictor on Gaussian random fields (GRFs). For active learning on GRFs, the commonly used V-optimality criterion queries nodes that reduce the L2 (regression) loss. It has a submodularity property guaranteeing that greedy application yields a solution within a (1 - 1/e) factor of the global optimum. However, L2 loss may not characterize the true nature of the 0/1 loss in classification problems and thus may not be the best choice for active learning.

We propose a new criterion we call Sigma-optimality, which queries the node that minimizes the sum of the elements in the predictive covariance. Theoretically, we extend submodularity guarantees from V-optimality to Sigma-optimality using properties specific to GRFs. The proofs are interesting because we further show that GRFs have the suppressor-free condition in addition to the conditional independence inherited from Markov random fields.

Sigma-optimality directly optimizes the loss of the surveying problem, which is to determine the proportion of nodes belonging to one class. We test Sigma-optimality on several real-world graphs and show that in addition to the surveying problem, it also outperforms V-optimality and expected error reduction on the classification problem.
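
As a hedged sketch of the selection rule: on a GRF the predictive covariance over unlabeled nodes is the inverse of the corresponding regularized graph Laplacian block, conditioning on a queried node is a rank-one update, and the drop in the sum of covariance entries has a closed form. The Python below is an illustrative reconstruction, not the authors' code.

    # Hedged sketch of greedy Sigma-optimal querying on a Gaussian random field.
    # C is the predictive covariance over unlabeled nodes; conditioning on a
    # queried node v is a rank-one update, and the drop in sum(C) is
    # (column-sum of v)^2 / C[v, v].
    import numpy as np

    def sigma_greedy(L_uu, n_queries):
        C = np.linalg.inv(L_uu)                   # predictive covariance
        chosen = []
        for _ in range(n_queries):
            col_sums = C.sum(axis=0)
            diag = np.diag(C).copy()
            diag[chosen] = np.inf                 # exclude already-queried nodes
            gains = col_sums ** 2 / diag          # reduction in sum(C) per node
            v = int(np.argmax(gains))
            chosen.append(v)
            C = C - np.outer(C[:, v], C[v, :]) / C[v, v]   # condition on node v
        return chosen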

Feb 25
Faster Optimization Algorithms? Richard PengCarnegie Mellon UniversityComputer Science Department

Abstract

Convex optimization is a key tool in computer science, with applications ranging from machine learning to operational research. Due to the fast growth of data sizes, the development of faster algorithms is becoming a more pressing question. This talk aims to discuss several emerging approaches for faster and more accurate optimization algorithms using techniques from combinatorial algorithms, numerical analysis and spectral graph theory.

For quadratic, or L_2 minimization, we will present faster algorithms when the underlying matrix has highly uneven dimensions, or when the problem has graph-like structure. Key to these algorithms are methods for sampling the rows of a matrix that preserve the structure of its outer product. We will also discuss the close connections between L_1 regression and quadratic minimization, and describe some ongoing work in image processing using these ideas.

Feb 18
Parallel Markov Chain Monte Carlo for Nonparametric Mixture Models Sinead WilliamsonCarnegie Mellon UniversityMachine Learning Department

Abstract

Nonparametric mixture models based on the Dirichlet process are an elegant alternative to finite models when the number of underlying components is unknown, but inference in such models can be slow. Existing attempts to parallelize inference in such models have relied on introducing approximations, which can lead to inaccuracies in the posterior estimate. In this talk, I will construct auxiliary variable representations for the Dirichlet process and the hierarchical Dirichlet process that facilitate the development of distributed Markov chain Monte Carlo schemes that use the correct equilibrium distribution. Experimental analyses show that this approach allows scalable inference without the deterioration in estimate quality that accompanies existing methods.

This is joint work with Avinava Dubey and Eric Xing.

Feb 11
Unsupervised global prediction of interaction networks from multiple image data sources Kriti PuniyaniCarnegie Mellon UniversityLanguage Technologies Institute

Abstract

As large-scale methods of measuring gene expression via images have been developed, the question arises: how can we predict gene interaction networks from images? The four major machine learning challenges are how to handle (a) multiple images per data source and (b) multiple data sources, in (c) an unsupervised manner, while analyzing (d) global conditional independencies instead of pairwise interactions.

In this talk, I will describe a two-step solution to this problem. First, we present an algorithm, Gin-IM, for learning interaction networks from images in a single data source. Gin-IM combines multi-instance kernels with recent work in learning sparse undirected graphical models to predict interactions between genes.

Next, we propose NP-MuScL (nonparanormal multi source learning) to estimate a gene interaction network that is consistent with multiple sources of data, having the same underlying relationships between the nodes. NP-MuScL uses the semiparametric Gaussian copula to model the distribution of the different data sources, with the different copulas sharing the same covariance matrix.
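
One standard way to estimate a copula correlation of the kind NP-MuScL builds on is via rank correlations (Spearman's rho with the sin transform). The sketch below, including the pooling-by-sample-size step, is an illustrative assumption about how a shared covariance across sources could be estimated, not the paper's exact estimator.

    # Hedged sketch of a nonparanormal step: estimate each source's copula
    # correlation from Spearman rank correlations (Sigma_jk = 2 sin(pi/6 rho_jk))
    # and pool across sources that share the same underlying network.
    import numpy as np
    from scipy.stats import spearmanr

    def pooled_copula_correlation(sources):
        """sources: list of (n_i x p) data matrices from different experiments."""
        mats = []
        for X in sources:
            rho, _ = spearmanr(X)                 # p x p rank correlation (p > 2)
            mats.append(2.0 * np.sin(np.pi / 6.0 * rho))
        S = np.average(mats, axis=0,
                       weights=[len(X) for X in sources])   # sample-size weights
        return S   # feed into a sparse inverse-covariance solver (graphical lasso)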

We apply our algorithms on Drosophila embryonic ISH images from the Berkeley Drosophila Genome Project. Data from different time steps in Drosophila embryonic development are treated as separate data sources. With spatial gene interactions predicted via Gin-IM, and temporal predictions combined via NP-MuScL, we can finally predict spatiotemporal gene networks from these images.

Parts of this work were presented at ECCV 2012, and will be presented at RECOMB 2013. This is joint work with my advisor Eric P. Xing. No knowledge of biology is needed to follow this talk.

Feb 04
Beyond the Netflix Prize: On Recommender Systems for Ranking Harald SteckNetflix

Abstract

In this talk, I will discuss several interesting data mining / machine learning challenges for making personalized recommendations of TV shows and movies to each user at Netflix, and I will take a deeper dive into two of them. While the presented experiments are based on Netflix data, the challenges and approaches apply also to other domains besides movies.

The objective of our recommender system is to rank movies according to each user's preferences. Users' preferences can be estimated from their feedback data, such as plays or ratings of movies. While the extreme data sparsity is a well-known problem (each user interacts with only a small fraction of all movies), I will in particular discuss the following two aspects: (1) the feedback data are missing not at random (MNAR), and (2) the distribution of the data is very skewed.

The MNAR nature of the feedback data originates from the fact that users can choose which movies to play or to rate. As a result, which movies a user interacted with is itself informative. From a statistical perspective, however, MNAR data pose interesting challenges--not only for training a recommender system, but also for designing meaningful test/validation procedures on such data.

The second aspect addressed in detail in this talk is the skewed distribution of these data: while a few popular movies receive most of the attention, the majority of movies are in the long tail of the popularity distribution. Recommendations from the long tail are generally considered to be particularly valuable for the user, as such movies are otherwise difficult to discover. On the other hand, recommendation accuracy tends to decrease towards the long tail due to lack of data.

I will discuss machine learning approaches to tackle both challenges, and present empirical evidence that large improvements can be achieved by tackling these key properties of the feedback data.

Biography

Harald Steck is a data scientist at Netflix, where he develops personalization algorithms and recommender systems. He has over ten years of experience in machine learning, in particular in graphical models and more recently in recommender systems. He has conducted research at various industrial and academic organizations, including Bell Labs, ETH Zurich in Switzerland, the MIT AI Lab, and the Technical University of Munich in Germany, where he obtained a PhD degree in Computer Science in 2001.

Jan 28
Optimizing Recommender Systems as a Submodular Bandits Problem Yisong YueCarnegie Mellon UniversityMachine Learning Department

Abstract

User feedback has become an invaluable source of training data for optimizing recommender systems in a rapidly expanding range of domains, most notably content recommendation (e.g., news, movies, ads). When designing recommender systems that adapt to user feedback, two important challenges arise. First, the system should recommend optimally diversified content that maximizes coverage of the information the user finds interesting (to maximize positive feedback). Second, the system should make appropriate exploratory recommendations in order to learn a reliable model from feedback.

In this talk, I will describe the Linear Submodular Bandits Problem, which is a framework for jointly modeling the utility of a set of recommendations (so as to encourage diversity), as well as the exploration/exploitation trade-off that arises when learning from user feedback. In particular, the utility of a set of recommendations is modeled as a parameterized submodular function, which naturally encodes a notion of diminishing returns that encourages diversity. For this setting, I will also present an online learning algorithm that can efficiently converge to a near-optimal model.
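
As a hedged sketch of the utility model, a common linear submodular function is probabilistic coverage: article a covers topic t with probability p[a, t], and the marginal gain of a topic decays as it becomes covered. The greedy selection below is illustrative; in the bandit algorithm the weight vector w would be replaced by an optimistic (upper-confidence) estimate learned from feedback.

    # Hedged sketch of greedy diversified recommendation under a linear
    # submodular (probabilistic coverage) utility.
    import numpy as np

    def greedy_diverse_set(p, w, k):
        """p: (n_articles x n_topics) coverage probs; w: topic weights."""
        n_articles, n_topics = p.shape
        uncovered = np.ones(n_topics)             # P(topic t not yet covered)
        chosen = []
        for _ in range(k):
            gains = (p * uncovered) @ w           # weighted marginal coverage gain
            gains[chosen] = -np.inf               # don't pick an article twice
            a = int(np.argmax(gains))
            chosen.append(a)
            uncovered *= (1.0 - p[a])             # topic stays uncovered w.p. (1-p)
        return chosen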

As with any bandit learning problem, the inefficiency (or regret) of a recommendation algorithm is due primarily to the cost of exploration (i.e., making exploratory recommendations due to not knowing the user's preferences a priori). One way to reduce the cost of exploration is by leveraging prior knowledge. Intuitively, most users bear some similarity to "stereotypical users" that can be represented in a low-dimensional, or coarse, feature space. I will show how to construct a coarse-to-fine hierarchy of feature spaces from the preference profiles of existing users, and also how to conduct bandit learning using this feature hierarchy to drastically reduce the amount of exploration required.

I will present a live user study, where these approaches were applied to the setting of personalized news recommendation. Our results demonstrate improved performance over approaches that do not directly model diversification, do not employ exploration, or do not incorporate prior knowledge to reduce the amount of exploration required.

This is joint work with Carlos Guestrin and Sue Ann Hong.

2012 Fall Semester
Nov 19
On Unified View of Nullspace-type Conditions for Sparse and Low-rank Recoveries Fatma Kilinc KarzanCarnegie Mellon UniversityTepper School of Business

Abstract

We discuss a general notion of “sparsity structure” and present a unified framework for the recovery of "sparse-structured" signals from their linear image of reduced dimension, possibly corrupted with noise. This unified treatment covers usual sparse and block-sparse recoveries via commonly used $\ell_1$ regularization as well as low-rank matrix reconstruction via nuclear norm minimization. We present nullspace-type sufficient conditions for the recovery to be exact in the noiseless case, derive error bounds for imperfect recovery (nearly sparse signal, presence of observation noise), and relate them to other well-known conditions (Restricted Isometry Property, Mutual Incoherence) from the literature. Our emphasis is on efficiently verifiable sufficient conditions on the problem parameters (sensing matrix and sparsity structure) for the validity of the associated nullspace properties. While the efficient verifiability of a condition is by no means necessary for the condition to be meaningful and useful, we believe that verifiability has its value and is worthy of being investigated. In particular, verifiability allows us to design new recovery routines with explicit confidence bounds for the recovery error, which can then be optimized over the method parameters, leading to recovery procedures with improved statistical properties.

This is joint work with Anatoli Juditsky, Arkadi Nemirovski and Boris Polyak.

Nov 05
What Makes Paris Look Like Paris Carl DoerschCarnegie Mellon UniversityMachine Learning Department

Abstract

Given a large repository of geotagged imagery, we seek to automatically find visual elements, e.g. windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the city of Paris. This is a tremendously difficult task as the visual features distinguishing architectural elements of different places can be very subtle. In addition, we face a hard search problem: given all possible patches in all images, which of them are both frequently occurring and geographically informative? To address these issues, we propose to use a discriminative clustering approach able to take into account the weak geographic supervision. We show that geographically representative image elements can be discovered automatically from Google Street View imagery in a discriminative manner. We demonstrate that these elements are visually interpretable and perceptually geo-informative. The discovered visual elements can also support a variety of computational geography tasks, such as mapping architectural correspondences and influences within and across cities, finding representative elements at different geo-spatial scales, and geographically-informed image retrieval.

This work was presented at SIGGRAPH this past summer, and has been featured in the Wall Street Journal and other media outlets. http://graphics.cs.cmu.edu/projects/whatMakesParis/

Variance Function Estimation in High-dimensions James SharpnackCarnegie Mellon UniversityMachine Learning Department

Abstract

We consider the high-dimensional heteroscedastic regression model, where the mean and the log variance are modeled as a linear combination of input variables. Existing literature on high-dimensional linear regression models has largely ignored non-constant error variances, even though they commonly occur in a variety of applications ranging from biostatistics to finance. In this paper we study a class of non-convex penalized pseudolikelihood estimators for both the mean and variance parameters. We show that the Heteroscedastic Iterative Penalized Pseudolikelihood Optimizer (HIPPO) achieves the oracle property, that is, we prove that the rates of convergence are the same as if the true model was known. We demonstrate numerical properties of the procedure on a simulation study and real world data.
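
A minimal sketch of the alternating idea, with Lasso standing in for the paper's non-convex penalties: fit a weighted sparse regression for the mean, then a sparse regression of log squared residuals for the variance, and iterate. This is an illustrative caricature of penalized pseudolikelihood, not HIPPO itself.

    # Minimal sketch of penalized heteroscedastic regression: alternate a
    # precision-weighted sparse fit of the mean with a sparse fit of the
    # log-variance on squared residuals. Lasso is a stand-in penalty here.
    import numpy as np
    from sklearn.linear_model import Lasso

    def fit_heteroscedastic(X, y, n_iters=10, alpha_mean=0.01, alpha_var=0.01):
        beta = Lasso(alpha=alpha_mean).fit(X, y).coef_
        theta = np.zeros(X.shape[1])              # log-variance coefficients
        for _ in range(n_iters):
            sw = np.exp(-(X @ theta) / 2.0)       # sqrt of precision weights
            beta = Lasso(alpha=alpha_mean).fit(X * sw[:, None], y * sw).coef_
            resid2 = np.maximum((y - X @ beta) ** 2, 1e-8)
            theta = Lasso(alpha=alpha_var).fit(X, np.log(resid2)).coef_
        return beta, theta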

This work was presented at ICML this past summer. http://icml.cc/2012/papers/722.pdf

Oct 29
Weakly Supervised Training of Semantic Parsers Jayant KrishnamurthyCarnegie Mellon UniversityComputer Science Department

Abstract

Semantic parsing is the problem of automatically converting natural language into a computer-understandable formal representation (essentially, a statement in a programming language). The promise of semantic parsing is its generality: many language understanding tasks can be posed as semantic parsing problems, including question answering and information extraction. However, current approaches to semantic parsing suffer from two serious defects: (1) semantic parsers are trained on manually annotated sentences, which are impractical to obtain when working at web-scale, and (2) semantic parsers rely on manually constructed knowledge bases, which are challenging to build and typically incomplete or nonexistent. The second problem is especially problematic in grounded domains, such as robotics, where language may refer to objects in the real world.

This talk presents two weakly supervised training algorithms for semantic parsers that overcome these two limitations. The first algorithm eliminates the need for annotated sentences; we demonstrate this algorithm by training an accurate semantic parser for Freebase that has the most expressive knowledge representation of any published semantic parser. The second algorithm eliminates the need for a prespecified knowledge representation; we demonstrate this algorithm on a natural language grounding task, identifying the objects in an image referred to by natural language expressions such as "the mug to the left of the monitor." In both cases, reducing the supervision requirements of semantic parsing allows us to tackle problems which are infeasible in the traditional, supervised paradigm.

This is joint work with Tom Mitchell and Thomas Kollar.

Oct 22
Adversarial Evaluation for Models of Natural Language Noah SmithCarnegie Mellon UniversityLanguage Technologies Institute

Abstract

One way to build natural language processing tools is to ask human experts to annotate examples of the desired output for real-world text inputs, then apply supervised learning. Another is to start with text and a model family and apply unsupervised learning. In either case, how do we know whether the human or machine annotations are "correct"? In this talk, I'll give a little bit of background about the research area, and I'll discuss some of the weaknesses of our current evaluation methodology. I'll present a new abstract framework for evaluation. The central idea is to make explicit certain adversarial roles among researchers, so that the different roles in an evaluation are more clearly defined and participants in all roles are offered ways to make measurable contributions to the larger goal. This framework can be instantiated in many ways, simulating some familiar intrinsic and extrinsic evaluations as well as some new evaluations. This talk is entirely based on preliminary ideas (no theoretical or experimental results) and is intended to spark discussion.

Oct 15
Optimal Prediction in Spatio-Temporal Systems - Nonparametric Methods for Forecasting, Pattern Discovery, and Dimension Reduction Georg M GoergCarnegie Mellon UniversityDepartment of Statistics

Abstract

Many important scientific and data-driven problems involve quantities which vary over both space and time. Examples include functional magnetic resonance imaging (fMRI), climate data, or experimental studies in physics and chemistry. Principal goals of many methods in statistics, machine learning, and signal processing are to use this data and i) extract informative structures and remove noisy, uninformative parts; ii) understand and reconstruct underlying spatio-temporal dynamics that govern these systems; and iii) forecast the data, i.e. describe the system in the future.

In this talk I present generally applicable statistical methods that address all three problems in a unifying approach. I introduce two new techniques for optimal nonparametric forecasting of spatio-temporal data: hard and mixed LICORS (Light Cone Reconstruction of States). Hard LICORS is a consistent estimator of the predictive state space of continuous-valued data. Mixed LICORS builds on a new, fully probabilistic model of light cones and predictive state mappings, and is an EM-like version of hard LICORS. These estimators can then be used to estimate local statistical complexity (LSC), a fully automatic technique for pattern discovery in dynamical systems. Simulations and applications to fMRI data demonstrate that the proposed methods work well and give useful results in very general scientific settings.

This is joint work with Cosma Shalizi, Larry Wasserman, Christopher Genovese (CMU Stats), and Elisha Merriam (NYU, Center for Neural Science).

Oct 08
Time-Dependent Model Parameters Dani YogatamaCarnegie Mellon UniversityLanguage Technologies Institute

Abstract

Timestamped data present a challenge to machine learning for predictions in the future: ignoring timestamps and assuming data are i.i.d. is scalable but risks distracting a model with irrelevant "ancient history," while using only the most recent portion of the data risks overfitting to current trends and missing important time-insensitive effects. We seek a general approach to learning model parameters that considers the variation in how different effects change over time.

We construct two novel prior distributions that allow parameters of probabilistic models to vary over time. Our priors encourage correlation between parameters at successive timesteps. We show how to do learning and inference under these priors. We test the approaches on several real-world datasets, demonstrating significant improvements over time-series-ignorant priors. Moreover, inspecting feature coefficients in the model allows us to identify trends and changes over time.
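
As a hedged sketch of one such construction, suppose coefficients follow a Gaussian random walk across timesteps, w_t ~ N(w_{t-1}, tau^2 I); the MAP problem then adds a quadratic penalty tying successive timesteps, as in the illustrative gradient-descent fit below (the priors in the talk may differ).

    # Hedged sketch of a time-varying prior: a Gaussian random walk on
    # per-timestep regression coefficients, fit by MAP gradient descent.
    import numpy as np

    def fit_time_varying(Xs, ys, tau2=1.0, lr=0.01, n_iters=500):
        """Xs, ys: per-timestep design matrices and targets."""
        T, d = len(Xs), Xs[0].shape[1]
        W = np.zeros((T, d))
        for _ in range(n_iters):
            G = np.zeros_like(W)
            for t in range(T):
                G[t] = Xs[t].T @ (Xs[t] @ W[t] - ys[t])       # squared-loss grad
                if t > 0:
                    G[t] += (W[t] - W[t - 1]) / tau2          # tie to previous step
                if t < T - 1:
                    G[t] += (W[t] - W[t + 1]) / tau2          # and to the next
            W -= lr * G
        return W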

Oct 01
GraphChi: Big Data - small machine Aapo KyrolaCarnegie Mellon UniversityComputer Science Department

Abstract

Current systems for graph computation require a distributed computing cluster to handle very large real-world problems, such as analysis on social networks or the web graph. While distributed computation resources have become more accessible, developing distributed graph algorithms still remains challenging, especially to non-experts.

In this work, we present GraphChi, a disk-based system for computing efficiently on graphs with billions of edges. By using a well-known method to break large graphs into small parts, and a novel parallel sliding windows method, GraphChi is able to execute several advanced data mining, graph mining, and machine learning algorithms on very large graphs, using just a single consumer-level computer. We further extend GraphChi to support graphs that evolve over time, and demonstrate that on a single computer, GraphChi can process over a hundred thousand graph updates per second, while simultaneously performing computation. We show by experiments and theoretical analysis that GraphChi performs well on SSDs and, surprisingly, also on rotational hard drives.

By repeating experiments reported for existing distributed systems, we show that with only a fraction of the resources, GraphChi can solve the same problems in a very reasonable time. Our work makes large-scale graph computation available to anyone with a modern PC.

This work has been accepted to OSDI '12, and will be presented in Hollywood on October 8, 2012.

GraphChi: Big Data - small machine: http://graphchi.org

Sep 24
Consistency of co-blockmodeling for exchangeable graph data Dave ChoiCarnegie Mellon UniversityHeinz College

Abstract

We analyze the problem of partitioning a 0-1 array or bipartite graph into subgroups (also known as co-clustering), under the relatively mild assumption that the data are generated by a general nonparametric process. We show that detection of co-clusters in the data implies, with high probability, the existence of co-clusters of similar proportion and connectivity in the generative process. Our main application is the analysis of a crude model for networks -- the stochastic co-blockmodel -- when the data is not assumed to be generated (even approximately) by a blockmodel, but rather by an unknown exchangeable process. Our result suggests that the stochastic co-blockmodel and other community detection algorithms may be robust to model misspecification.

Sep 17
Optimal kernel choice for kernel hypothesis testing Arthur GrettonGatsby Computational Neuroscience UnitUniversity College London

Abstract

We consider two nonparametric hypothesis testing problems: (1) Given samples from distributions p and q, a two-sample test determines whether to reject the null hypothesis p=q; and (2) Given a joint distribution p_xy over random variables x and y, an independence test determines whether to reject the null hypothesis of independence, p_xy = p_x p_y. In testing whether two distributions are identical, or whether two random variables are independent, we require a test statistic which is a measure of distance between probability distributions. One choice of test statistic is the maximum mean discrepancy (MMD), a distance between embeddings of the probability distributions in a reproducing kernel Hilbert space. The kernel used in obtaining these embeddings is critical in ensuring the test has high power, and correctly distinguishes unlike distributions with high probability.

In this talk, I will provide a tutorial overview of kernel distances on probabilities, and show how these may be used in two-sample and independence testing. I will then describe a strategy for optimal kernel choice, and compare it with earlier heuristics (including other multiple kernel learning approaches).
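
For concreteness, here is a minimal Python sketch of a kernel two-sample test: the unbiased MMD^2 statistic with an RBF kernel and a permutation null. The median-heuristic bandwidth below is exactly the kind of heuristic the talk's optimal kernel choice improves upon.

    # Hedged sketch of a kernel two-sample test: unbiased MMD^2 with an RBF
    # kernel, null distribution by permutation.
    import numpy as np
    from scipy.spatial.distance import cdist

    def mmd2(X, Y, sigma):
        Kxx = np.exp(-cdist(X, X, 'sqeuclidean') / (2 * sigma ** 2))
        Kyy = np.exp(-cdist(Y, Y, 'sqeuclidean') / (2 * sigma ** 2))
        Kxy = np.exp(-cdist(X, Y, 'sqeuclidean') / (2 * sigma ** 2))
        n, m = len(X), len(Y)
        np.fill_diagonal(Kxx, 0.0)                # unbiased estimator: drop the
        np.fill_diagonal(Kyy, 0.0)                # i = j terms
        return (Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1))
                - 2 * Kxy.mean())

    def mmd_test(X, Y, n_perms=500, seed=0):
        rng = np.random.default_rng(seed)
        Z = np.vstack([X, Y])
        sigma = np.median(cdist(Z, Z))            # median heuristic bandwidth
        stat = mmd2(X, Y, sigma)
        null = [mmd2(Z[(idx := rng.permutation(len(Z)))[:len(X)]],
                     Z[idx[len(X):]], sigma) for _ in range(n_perms)]
        return stat, float(np.mean(np.array(null) >= stat))   # statistic, p-value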

Joint work with: Bharath Sriperumbudur, Dino Sejdinovic, Heiko Strathmann, Sivaraman Balakrishnan, Massimiliano Pontil, Kenji Fukumizu

Sep 10
NSH 3305 Attribute-efficient learning and weight-degree tradeoffs for polynomial threshold functions Li-Yang TanColumbia UniversityComputer Science Department

Abstract

We study the challenging problem of learning decision lists attribute-efficiently, giving both positive and negative results.

Our main positive result is a new tradeoff between the running time and mistake bound for learning length-k decision lists over n Boolean variables. When the allowed running time is relatively high, our new mistake bound improves significantly on the mistake bound of the best previous algorithm of Klivans and Servedio.

Our main negative result is a new lower bound on the weight of any degree-d polynomial threshold function (PTF) that computes a particular decision list over k variables. This lower bound establishes strong limitations on the effectiveness of the Klivans and Servedio approach and suggests that it may be difficult to improve on our positive result. The main tool used in our lower bound is a new variant of Markov's classical inequality which may be of independent interest; it provides a bound on the derivative of a univariate polynomial in terms of both its degree and the size of its coefficients.

2012 Spring Semester
May 14
NSH 3305 Tera-scale deep learning Quoc V. LeStanford UniversityComputer Science Department

Abstract

Deep learning and unsupervised feature learning offer the potential to transform many domains such as vision, speech, and NLP. However, these methods have been fundamentally limited by our computational abilities, and typically applied to small-sized problems. In this talk, I describe the key ideas that enabled scaling deep learning algorithms to train a very large model on a cluster of 16,000 CPU cores (2000 machines). This network has 1.15 billion parameters, which is more than 100x larger than the next largest network reported in the literature.

Such a network, applied at this huge scale, is able to learn abstract concepts in a much more general manner than previously demonstrated. Specifically, we find that by training on 10 million unlabeled images, the network produces features that are very selective for high-level concepts such as human faces and cats. Using these features, we also obtain significant leaps in recognition performance on several large-scale computer vision tasks.

May 07
NSH 3305 Degrees of freedom in statistical machine learning Ryan TibshiraniCarnegie Mellon UniversityDepartment of Statistics

Abstract

This talk motivates the study of degrees of freedom in statistical machine learning problems. Essentially, the degrees of freedom of a fitting procedure is its effective number of parameters. Though this is a vague concept, it has a precise definition for a wide class of problems. I will discuss what is known for some key problems in this class, and how some other important problems are not particularly well-understood.

Apr 30
NSH 3305 Active Learning, Drifting Distributions, and Convex Surrogate Losses Liu YangCarnegie Mellon UniversityMachine Learning Department

Abstract

We study the problem of active learning in a stream-based setting, allowing the distribution of the examples to change over time. We prove upper bounds on the number of prediction mistakes and number of label requests for established disagreement-based active learning algorithms, both in the realizable case and under Tsybakov noise. We further prove minimax lower bounds for this problem.

It is often computationally hard to run these methods with the 0-1 loss. Passive learning often resolves this problem by replacing the 0-1 loss with a convex relaxation, called a surrogate loss. We examine the extent to which this trick can also be useful in active learning. We start with a negative result for active learning with convex losses, where we prove that even under bounded noise constraints, the minimax rates for optimizing a convex loss with proper active learning are often no better than for passive learning. Then we explore a strategy that makes use of a given surrogate loss function in a different way, so that although the algorithm does not necessarily optimize the surrogate loss, it does optimize the 0-1 loss under certain conditions. We further present label complexity results for this method, showing that it sometimes improves over the analogous passive learning method.

Apr 16
NSH 3305 Efficient Vertex-Proximity Query Processing on Large Time-Evolving Graphs Leman AkogluCarnegie Mellon UniversityComputer Science Department

Abstract

Finding the k nearest neighbors (k-NNs) of a given vertex in a graph has many applications such as link prediction, recommendation systems and keyword search. One robust measure of vertex-proximity in graphs is the Personalized PageRank (PPR) score based on random walk with restarts. Since PPR scores have long-range correlations, computing them accurately and efficiently is challenging when the graph is too large to fit in memory, especially when it also changes over time. In this work, we propose ClusterRank, an efficient algorithm to answer PPR-based k-NN queries in large time-evolving graphs. ClusterRank represents a given graph as a collection of dense vertex-clusters with their inter-connections. Each vertex-cluster maintains certain information related to internal random walks and updates this information as the graph changes. At query time, ClusterRank combines this information from a small set of relevant clusters and computes PPR scores efficiently. While ClusterRank can perform exact computations, we also propose several heuristics to reduce its query response time while sacrificing only a little accuracy. We validate the effectiveness of our method on several synthetic and real-world graphs from diverse domains.
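
As a baseline for what ClusterRank accelerates, PPR scores can be computed by power iteration with restarts, as in the illustrative sketch below; ClusterRank reorganizes this computation around dense vertex-clusters, but the fixed point is the same.

    # Baseline personalized PageRank by power iteration with restart.
    # adj: dict vertex -> list of out-neighbors.
    import numpy as np

    def ppr(adj, source, alpha=0.15, n_iters=100):
        nodes = list(adj)
        idx = {v: i for i, v in enumerate(nodes)}
        p = np.zeros(len(nodes))
        p[idx[source]] = 1.0                      # restart distribution
        r = p.copy()
        for _ in range(n_iters):
            nxt = np.zeros_like(r)
            for v, nbrs in adj.items():           # push mass along out-edges
                if nbrs:
                    share = r[idx[v]] / len(nbrs)
                    for u in nbrs:
                        nxt[idx[u]] += share
            r = alpha * p + (1 - alpha) * nxt     # restart with probability alpha
        return sorted(zip(nodes, r), key=lambda t: -t[1])   # top-k answers k-NN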

Biography

Leman Akoglu is a Ph.D. candidate in the Computer Science Department at Carnegie Mellon University, advised by Prof. Christos Faloutsos. She received her B.S. at Bilkent University in 2007. She won 2 best paper awards and published 15 refereed articles in major data mining venues. She is one of the inventors of 3 U.S. patents (pending), filed by IBM T. J. Watson Research Labs. Her research focuses on large-scale data analytics, with an emphasis on anomaly and event detection in large, time-varying graphs using scalable algorithms and tools.

Apr 09
NSH 3305 Sample Complexity of CRF Parameter Learning Joseph BradleyCarnegie Mellon UniversityMachine Learning Department

Abstract

We present the first PAC bounds for learning parameters of Conditional Random Fields (Lafferty et al., 2001) with general structures over discrete and real-valued variables. Our bounds apply to composite likelihood (Lindsay, 1988), which generalizes maximum likelihood and pseudolikelihood (Besag, 1975). Moreover, we show that the only existing algorithm with a PAC bound for learning high-treewidth discrete models (Abbeel, 2006) can be viewed as a computationally inefficient method for computing pseudolikelihood. We present an extensive empirical study of the statistical efficiency of these estimators, as predicted by our bounds. Finally, we use our bounds to show how to construct computationally and statistically efficient composite likelihood estimators.

(Work appearing in AISTATS 2012)

Apr 02
NSH 3305 Machine Learning and Computer-Assisted Creativity Burr SettlesCarnegie Mellon UniversityMachine Learning Department

Abstract

How can machine learning help you write a song? In this talk, I will discuss two preliminary projects that have grown out of February Album Writing Month (http://fawm.org/), an online community for thousands of international musicians that I organize. The goal of each participant is to write 14 new songs in the month of February. First, I will discuss simple generative language models which were used to create a suite of online "computational creativity tools" called The Muse (http://muse.fawm.org). Despite their simplicity, these tools were successful at helping hundreds of participants write new songs. Second, I will present work in modeling user behavior that begins to explain and characterize what social interactions are most associated with successful outcomes (number of songs written, attaining the 14-song goal, donating money, returning the following year, etc.). I will conclude by proposing a few open directions for how machine learning can support individual and group creativity.

Mar 26
NSH 3305 Choosing your flavour of Neurosemantics Brian MurphyCarnegie Mellon UniversityMachine Learning Department

Abstract

Mitchell (2008) demonstrated that brain states can be understood in terms of a componential semantics - that is, that certain conceptual dimensions or other semantic properties could be used to interpret, and to predict, the activity of the brain when it is processing the meaning of a word. Meaning can be characterized in many computational ways, including hand-crafted ontologies, electronic thesauri, and corpus-based distributional models like HAL or LSA. Broad-coverage "people-based" distributional models are also emerging through crowd-sourcing of word associations, properties and similarity judgements. Here I'll examine corpus-based models to see which kinds of co-occurrence data are most informative for understanding patterns of activity in the brain, using regularized regressions. I'll also compare how labour- and computationally-intensive they are relative to human benchmarks, and speculate on how a combination of automatically derived and hand-tailored resources might provide a more comprehensive description of the human lexicon.

Mar 05
NSH 3305 Synaptic Pruning: An Algorithmic Perspective Saket NavlakhaCarnegie Mellon UniversityMachine Learning Department

Abstract

Many biological processes can be viewed as algorithms that nature has designed to solve real-world computational problems. In this talk, we will discuss our preliminary work on connecting "synaptic pruning" (the neurodevelopmental process in which synapses in the brain exuberantly form and then retract) to problems in distributed network design. We will present numerical results showing the benefit of pruning-based algorithms compared to other strategies for creating robust, efficient, and adaptive networks, and we will talk about our efforts to experimentally quantify the rate of synapse elimination in the mouse cortex.

This is joint work with Yoshio Takashima, Alison Barth, and Ziv Bar-Joseph.

Feb 27
NSH 3305 New Algorithms for Contextual Bandits Lev ReyzinSchool of Computer ScienceGeorgia Institute of Technology

Abstract

This talk will focus on the contextual bandit problem, which essentially captures many situations from medical testing to computational advertising. In this setting, a learner only receives feedback about the actions it takes, creating a need to balance exploiting good strategies and exploring new ones. I will discuss some recent results on the contextual bandit problem, including the first high-probability optimal algorithm, a reduction from bandits to cost-sensitive learning, as well as an interesting generalization of the setting. These advances point to the possibility of launching better and more effective algorithms in practice.
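
To make the exploration/exploitation trade-off concrete, here is a minimal epsilon-greedy contextual bandit with per-action linear reward models; this simple strategy is only illustrative, and the algorithms in the talk achieve much stronger (high-probability optimal) regret guarantees.

    # Minimal epsilon-greedy contextual bandit with per-action ridge regression.
    import numpy as np

    class EpsGreedyBandit:
        def __init__(self, n_actions, dim, eps=0.1, lam=1.0, seed=0):
            self.rng = np.random.default_rng(seed)
            self.eps = eps
            self.A = [lam * np.eye(dim) for _ in range(n_actions)]   # X'X + lam*I
            self.b = [np.zeros(dim) for _ in range(n_actions)]       # X'y

        def act(self, x):
            if self.rng.random() < self.eps:                 # explore
                return int(self.rng.integers(len(self.A)))
            est = [x @ np.linalg.solve(A, b) for A, b in zip(self.A, self.b)]
            return int(np.argmax(est))                       # exploit

        def update(self, x, action, reward):                 # feedback arrives only
            self.A[action] += np.outer(x, x)                 # for the chosen action
            self.b[action] += reward * x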

Feb 20
NSH 3305 Differential Privacy in a Reproducing Kernel Hilbert Space Rob HallCarnegie Mellon UniversityMachine Learning Department

Abstract

Differential Privacy is a criterion used to judge whether a randomized algorithm operating over a database of individuals may be deemed to preserve privacy. In this work we apply the notion of Differential Privacy to reproducing kernel Hilbert spaces of functions. As in the finite-dimensional Differential Privacy literature, we achieve privacy via noise addition, where the variance is calibrated to the "sensitivity" of the output. In our setting the noise in question is the sample path of a Gaussian process, and the sensitivity is measured in the RKHS norm rather than the Euclidean norm. We give examples of private versions of kernel density estimators and support vector machines.

This talk will be self-contained: it will not assume prior knowledge of differential privacy, stochastic processes, etc.
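
A finite-dimensional analogue of the construction is the standard Gaussian mechanism, which releases f(D) plus noise scaled to the L2 sensitivity of f; in the talk the output lives in an RKHS, the noise is a Gaussian process sample path, and sensitivity is measured in the RKHS norm. The sketch below is the familiar vector case, for orientation only.

    # The classical Gaussian mechanism: release a vector-valued statistic with
    # (epsilon, delta)-differential privacy by adding calibrated Gaussian noise.
    import numpy as np

    def gaussian_mechanism(f_value, l2_sensitivity, epsilon, delta, seed=0):
        rng = np.random.default_rng(seed)
        # Standard calibration: sigma >= sqrt(2 ln(1.25/delta)) * sens / epsilon
        sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        return f_value + rng.normal(scale=sigma, size=np.shape(f_value))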

Feb 13
NSH 3305 Nonparametric risk bounds for time series prediction Daniel McDonaldCarnegie Mellon UniversityDepartment of Statistics

Abstract

We derive generalization error bounds — bounds on the expected inaccuracy of the predictions — for traditional time series forecasting models. These bounds allow forecasters to select among competing models and to guarantee that with high probability, their chosen model will perform well without making strong assumptions about the data generating process or appealing to asymptotic theory. Extending results from statistical learning theory, we demonstrate how these techniques can benefit time-series forecasters interested in choosing models which behave well under uncertainty and misspecification.

Feb 06
NSH 3305 Hierarchical Bayesian modeling: efficient inference and applications Chong WangPrinceton UniversityComputer Science Department

Abstract

Hierarchical Bayesian models have become a popular tool for analyzing large-scale real-world data, such as text and images. Through these models, people can build useful tools for latent structure discovery, browsing and recommendations. In this talk, I will present some of my work in the area of hierarchical Bayesian modeling, with an emphasis on topic modeling, Bayesian nonparametrics and approximate posterior inference. Specifically, I will first talk about an online variational inference framework that can scale up to millions of articles on a single machine and also performs model selection using Bayesian nonparametric models. Then I will describe a novel recommendation model for scientific articles that provides an interpretable latent structure for both users and items. Such structure is important for end-user interactions, but is usually difficult to obtain and is ignored in the literature. A demo is at http://www.cs.princeton.edu/~chongw/citeulike/.

Jan 30
Estimation of Network Granger Causal models with Inherent Group Structures Ali ShojaieUniversity of WashingtonDepartment of Biostatistics

Abstract

The problem of estimating high-dimensional network structures arises naturally in the analysis of many physical, biological and socio-economic systems. Examples include stock price fluctuations in financial markets and gene regulatory networks representing the effects of regulators (transcription factors) on regulated genes. In many of these applications, the variables have inherent grouping structures which, when incorporated, can result in improved estimation and prediction. We aim to learn the structure of the network over time, employing the framework of Granger causal models, under the assumptions of sparsity of its edges and an inherent grouping structure among its nodes. I introduce a truncated penalty variant of the group lasso to discover the Granger causal interactions among the nodes of the network. Asymptotic results on the consistency of the new estimation procedure are developed. The performance of the proposed methodology is assessed through an extensive set of simulation studies and comparisons with existing techniques. Finally, various extensions of the framework to more complex Granger causal structures are discussed.
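
As a hedged sketch of the group-sparse estimation step (using a plain group lasso; the truncated-penalty variant in the talk is more involved), one can regress each target series on lagged copies of all series and group coefficients by source series, so that a zeroed-out group means no estimated Granger influence. All names below are illustrative.

    # Hedged sketch of group-sparse Granger estimation for one target series,
    # via proximal gradient (ISTA) with a group-lasso penalty over source series.
    import numpy as np

    def granger_group_lasso(X, target, n_lags, lam=0.1, n_iters=500):
        """X: (T x p) multivariate series; predict X[t, target] from lags."""
        T, p = X.shape
        Z = np.hstack([X[n_lags - l:T - l] for l in range(1, n_lags + 1)])
        y = X[n_lags:, target]
        lr = 1.0 / np.linalg.norm(Z, 2) ** 2        # step size 1/L
        w = np.zeros(Z.shape[1])
        for _ in range(n_iters):
            w -= lr * Z.T @ (Z @ w - y)             # gradient step
            for j in range(p):                      # group soft-threshold per series
                g = np.arange(n_lags) * p + j       # indices of series j's lags
                norm = np.linalg.norm(w[g])
                w[g] = 0.0 if norm <= lr * lam else w[g] * (1 - lr * lam / norm)
        return w.reshape(n_lags, p)   # all-zero column j: no Granger influence of j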

Joint work with Sumanta Basu and George Michailidis (University of Michigan)

Jan 23
Support Distribution Machines Barnabas PoczosCarnegie Mellon UniversityAuton Lab

Abstract

Most machine learning algorithms, such as classification or regression, treat the individual data point as the object of interest. Here we consider extending machine learning algorithms to operate on groups of data points. We suggest treating a group of data points as a set of i.i.d. samples from an underlying feature distribution for the group. Our approach is to generalize kernel machines from vectorial inputs to i.i.d. sample sets of vectors. For this purpose, we use a nonparametric estimator that can consistently estimate the inner product and certain kernel functions of two distributions. The projection of the estimated Gram matrix to the cone of semi-definite matrices enables us to employ the kernel trick, and hence use kernel machines for classification, regression, anomaly detection, and low-dimensional embedding in the space of distributions. Our numerical results demonstrate that in many cases this approach can outperform state-of-the-art competitors on both simulated and challenging real-world datasets.
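
The simplest related construction is the mean-embedding "set kernel", which scores two groups of points by the average pairwise kernel value between them; the talk goes further, with consistent nonparametric estimators of inner products and kernels between the underlying distributions. The Python below is an illustrative baseline, not the talk's estimator.

    # Mean-embedding set kernel between groups of points, plugged into a
    # precomputed-kernel SVM; a simple baseline for learning on sample sets.
    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.svm import SVC

    def set_kernel(groups_a, groups_b, sigma=1.0):
        K = np.zeros((len(groups_a), len(groups_b)))
        for i, X in enumerate(groups_a):
            for j, Y in enumerate(groups_b):
                D = cdist(X, Y, 'sqeuclidean')
                K[i, j] = np.exp(-D / (2 * sigma ** 2)).mean()  # avg pairwise RBF
        return K

    # Usage sketch: classify whole groups of points.
    # K_train = set_kernel(train_groups, train_groups)
    # clf = SVC(kernel='precomputed').fit(K_train, labels)
    # preds = clf.predict(set_kernel(test_groups, train_groups))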

2011 Fall Semester
Dec 08
A Reliable Effective Terascale Linear Learning System John LangfordYahoo Research

Abstract

How do you learn a linear predictor on a dataset with 2 trillion nonzero features in a reasonable amount of time? All single-machine algorithms fail, because the time to even stream the data through a network interface is too great. I will discuss an algorithm for doing this based on a combination of online learning, LBFGS, parallel learning, and a new communication infrastructure---Hadoop-compatible Allreduce---that we created. Deployed on 1000 nodes, we can learn faster than any single machine _ever_ will be able to learn a linear predictor on this hardware, the first such learning algorithm for which this claim can be made. The code behind this is open source in Vowpal Wabbit (http://hunch.net/~vw/).
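
A hedged sketch of the data-parallel pattern at the core of such systems: each node computes a gradient on its data shard and an allreduce combines them. The paper pairs this with online learning and LBFGS over its Hadoop-compatible Allreduce; the mpi4py stand-in below is only illustrative.

    # Illustrative data-parallel gradient step: local gradients on each node's
    # shard are summed across nodes with Allreduce, then averaged.
    import numpy as np
    from mpi4py import MPI

    def parallel_gradient_step(w, X_local, y_local, lr=0.1):
        comm = MPI.COMM_WORLD
        # local gradient of the squared loss on this node's shard
        g_local = X_local.T @ (X_local @ w - y_local) / len(y_local)
        g_global = np.zeros_like(g_local)
        comm.Allreduce(g_local, g_global, op=MPI.SUM)        # sum across nodes
        return w - lr * g_global / comm.Get_size()           # average and step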

This is joint work with Alekh Agarwal, Olivier Chapelle and Miroslav Dudik. A full draft is here: http://arxiv.org/abs/1110.4198

Nov 28
Sequential Decision Making Under Uncertainty and Applications to Automated Teaching Emma BrunskillComputer Science DepartmentCarnegie Mellon University

Abstract

I will discuss some work on sequential decision making under uncertainty in large, partially observable domains. During the talk I will demonstrate that by leveraging structural properties in the dynamics model we can scale to domains orders of magnitude larger than generic approaches. The talk will particularly focus on the strengths and technical challenges that arise in designing automated, adaptive instructional sequences for computerized tutoring systems.

Nov 21
A Bayesian Hierarchical Model for Measuring Fielding in Baseball Shane T. JensenDepartment of Statistics, The Wharton SchoolUniversity of Pennsylvania

Abstract

Fielding ability remains a difficult quantity to estimate in baseball. I present a sophisticated hierarchical model that uses current ball-in-play data to evaluate individual fielders. I will discuss continuing efforts to extend these fielding models to examine the evolution of fielding ability over multiple seasons. Many challenges in this area remain: our modeling efforts are constrained by the aspects of fielding measured in the current data. These limitations will be discussed with a look towards the potential availability of much higher resolution data in the near future.

Nov 14
Computational Rationalization: The Inverse Equilibrium Problem Kevin WaughComputer Science DepartmentCarnegie Mellon University

Abstract

Modeling the purposeful behavior of imperfect agents from a small number of observations is a challenging task. When restricted to the single-agent decision-theoretic setting, inverse optimal control techniques assume that observed behavior is an approximately optimal solution to an unknown decision problem. These techniques learn a utility function that explains the example behavior and can then be used to accurately predict or imitate future behavior in similar observed or unobserved situations. In this work, we consider similar tasks in competitive and cooperative multi-agent domains. Here, unlike single-agent settings, a player cannot myopically maximize its reward --- it must speculate on how the other agents may act to influence the game's outcome. Employing the game-theoretic notion of regret and the principle of maximum entropy, we introduce a technique for predicting and generalizing behavior, as well as recovering a reward function in these domains.

Nov 07
Spectral Approaches to Learning Dynamical Systems Byron BootsMachine Learning DepartmentCarnegie Mellon University

Abstract

Many interesting applications require solving the following problem: by watching an incoming stream of sensor data, hypothesize a dynamical system model which explains that data. The general problem of learning a dynamical system from a sensor data stream is difficult: to discover the right latent state representation and model parameters, one must solve difficult temporal and structural credit assignment problems, often leading to a search space with a host of (bad) local optima. In this talk, I will discuss how to overcome these problems by pairing an expressive class of models called Predictive State Representations with statistically consistent spectral learning algorithms. I will show that this framework is very general, easy to implement, computationally efficient, and can be used to unify and solve a number of different learning problems that are typically addressed in isolation.

Oct 31
Identifiability of Priors from Bounded Sample Sizes with Applications to Transfer Learning Liu YangMachine Learning DepartmentCarnegie Mellon University

Abstract

We explore a transfer learning setting, in which a finite sequence of target concepts are sampled independently with an unknown distribution from a known family. We study the total number of labeled examples required to learn all targets to an arbitrary specified expected accuracy, focusing on the asymptotics in the number of tasks and the desired accuracy. Our primary interest is formally understanding the fundamental benefits of transfer learning, compared to learning each target independently from the others. Our approach to the transfer problem is general, in the sense that it can be used with a variety of learning protocols. The key insight driving our approach is that the distribution of the target concepts is identifiable from the joint distribution over a number of random labeled data points equal to the Vapnik-Chervonenkis dimension of the concept space. This is not necessarily the case for the joint distribution over any smaller number of points.

This work has particularly interesting implications when applied to active learning methods. In particular, we study in detail the benefits of transfer for self-verifying active learning; in this setting, we find that the number of labeled examples required for learning with transfer is often significantly smaller than that required for learning each target independently.

Oct 24
Modeling images with transformed IBPs Sinead WilliamsonMachine Learning DepartmentCarnegie Mellon University

Abstract

Latent feature models are an appropriate choice for image modeling, since images generally contain multiple objects or features. However, many latent feature models either do not account for the fact that objects can appear at different locations in different images or they require pre-segmentation of images and cluster the resulting segments. While the recently-proposed transformed Indian buffet process (tIBP) provides a method for modeling transformation-invariant features in simple binary images without the need for pre-segmentation, it cannot be applied to real images because of both computational constraints and its modeling assumptions. In this talk, I will show how the tIBP can be combined with an appropriate likelihood to create a model applicable to real images, and describe a novel Metropolis-Hastings inference algorithm that significantly improves the scalability of the tIBP.

This is joint work with Ke Zhai, Yuening Hu and Jordan Boyd-Graber at the University of Maryland.

Oct 17
Solving Unsolvable Games Martin ZinkevichYahoo! Research

Abstract

The Lemonade Stand Game Tournament is a multiagent competition focused on understanding the issues faced when playing in a multi-agent environment. In contrast to Robocup, the annual computer poker competitions, and the TAC Competitions, the computational aspects of the game are intentionally kept to a minimum, forcing players to focus on the far more enigmatic task of understanding their opponents. While Nash equilibria are easily computed, the game is unsolvable, meaning that if everyone plays a different equilibrium, there is no guarantee that their joint behavior is meaningful.

In this talk, I will discuss how, in the past three years of the competition, new and interesting strategies have arisen which have no single-agent equivalent: strategies that not only attempt to understand and adapt to the world, but that also force the world to adapt to them. Although the competition is about maximizing utility, teams have designed radically new decision methodologies to achieve this. It is a known failing of the Nash equilibrium that it is a static concept which does not incorporate learning. The ultimate goal of this competition is to create a "group thought experiment" in which not just an understanding of the static states of multi-agent systems can be developed, but also new empirical laws governing their dynamics can be explored, to then be tested and analyzed in a wider setting.

I will also discuss the upcoming 2012 Lemonade Stand Game Tournament, with a deadline in early summer 2012. More details can be found at http://martin.zinkevich.org/lemonade.

Biography

Martin Zinkevich is a Senior Research Scientist at Yahoo! Research, where he has worked since 2007. His primary interests are anti-abuse, large scale machine learning, and multi-agent learning. He completed his Ph.D. with Avrim Blum at CMU in 2004 with a thesis entitled "Theoretical Guarantees in Multiagent Settings". In between, he worked as a postdoc with Amy Greenwald at Brown University, looking at multi-agent reinforcement learning, and as a postdoc with Michael Bowling at the University of Alberta, building (among other things) poker programs that played at a professional level, a project which culminated in an exhibition match in Vegas at the World Series of Poker and a Wired article. At his day job, he works on blocking spam e-mails and spam comments under news articles.

Oct 10
Many Machine Learning Approaches to One Problem: News Feed at Facebook Max GubinFacebook Inc.

Abstract

Facebook is the most popular social network in the world, and millions of people check news from their friends on its home page every day. A machine learning system creates a personalized news stream for every user on every load of the page. Filtering and ranking stories in the news feed are uniquely Facebook problems, but they are similar to many other machine learning challenges. The growing scale of the social network and continual changes in the interface and in user behavior mean that we cannot find the best solution once and for all; this is an endless competition of ideas and algorithms. The presentation covers some details of the problem, the architecture of the system, and the ideas and algorithms that we use.

Oct 3
Learning in Metric Spaces: Classification, Regression, Anomaly Detection Aryeh (Leonid) KontorovichComputer Science DepartmentBen-Gurion University

Abstract

Using Lipschitz extensions for classification in metric spaces was apparently first proposed by von Luxburg and Bousquet (2004), who also noted that, algorithmically, the solution can be realized as a nearest-neighbor search. In a COLT 2010 paper, we showed how to exploit the intrinsic geometry of the metric space to construct highly efficient classifiers and to derive data-dependent generalization bounds. We employed the doubling dimension on two fronts: information-theoretically, to control the fat-shattering dimension of Lipschitz functions (which yields error estimates), and algorithmically, to perform approximate nearest-neighbor search exponentially faster than the exact one. Since then, we have extended this technique to regression and anomaly detection. The talk, intended for a broad audience, will present an overview of our recent results, obtained in collaboration with Daniel Berend, Lee-Ad Gottlieb, Danny Hendler, Eitan Menahem, and Robert Krauthgamer.
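
To make the nearest-neighbor realization concrete, here is a minimal sketch of prediction time, assuming binary labels in {-1, +1} and Euclidean distance standing in for a general metric; the sign of the Lipschitz extension at a query point agrees with the label of its nearest training point.

    import numpy as np

    def nn_predict(X_train, y_train, x):
        # The optimal Lipschitz extension of the labels changes sign at the
        # boundary between the classes, so thresholding it at a query point
        # reduces to looking up the nearest training point's label.
        dists = np.linalg.norm(X_train - x, axis=1)
        return y_train[np.argmin(dists)]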

Biography

Aryeh (Leonid) Kontorovich received his undergraduate degree in mathematics with a certificate in applied mathematics from Princeton University in 2001. His M.Sc. and Ph.D. are from Carnegie Mellon University, where he graduated in 2007. After a postdoctoral fellowship at the Weizmann Institute of Science, he joined the Computer Science department at Ben-Gurion University of the Negev in 2009 as an assistant professor; this is his current position. His research interests are mainly in machine learning, with a focus on probability, statistics, and automata theory.

Sep 26
Graph-Based Weakly-Supervised Methods for Information Extraction & Integration Partha Pratim TalukdarMachine Learning DepartmentCarnegie Mellon University

Abstract

The variety and complexity of potentially-related data resources available for querying --- webpages, databases, data warehouses --- has been growing ever more rapidly. There is a growing need to pose integrative queries across multiple such sources, exploiting foreign keys and other means of interlinking data to merge information from diverse sources. This has traditionally been the focus of research within Information Extraction (IE) and Information Integration (II) communities, with IE focusing on converting unstructured sources into structured sources, and II focusing on providing a unified view of diverse structured data sources. However, most of the current IE and II methods, which can potentially be applied to the problem of integration across sources, require large amounts of human supervision, often in the form of annotated data. This need for extensive supervision makes existing methods expensive to deploy and difficult to maintain. Faced with this challenge, in this talk, I shall present an overview of my research into graph-based weakly-supervised methods for IE and II.

Joint work with Koby Crammer, Sudipto Guha, Zack Ives, Marie Jacob, Marius Pasca, Fernando Pereira, Joseph Reisinger

Biography

Partha Pratim Talukdar is a Postdoctoral Fellow in the Machine Learning Department at CMU, working with Tom Mitchell. He received his PhD (2010) in CIS from the University of Pennsylvania, working under the supervision of Fernando Pereira, Mark Liberman, and Zack Ives. He is broadly interested in Machine Learning, Natural Language Processing, Data Integration, and Cognitive Science, with particular interest in large-scale learning and inference over graphs. Partha has worked at a variety of industrial research labs, including HP Labs, Google Research, and most recently Microsoft Research, where he spent a year before coming to Carnegie Mellon.

Sep 19
Distributed Cosegmentation via Submodular Optimization on Anisotropic Diffusion Gunhee KimComputer Science DepartmentCarnegie Mellon University

Abstract

The saliency of regions or objects in an image can be significantly boosted if they recur in multiple images. Leveraging this idea, cosegmentation jointly segments common regions from multiple images. In this paper, we propose CoSand, a distributed cosegmentation approach for highly variable large-scale image collections. The segmentation task is modeled as temperature maximization on anisotropic heat diffusion, in which temperature maximization with finite K heat sources corresponds to a K-way segmentation that maximizes the segmentation confidence of every pixel in an image. Our method takes advantage of a strong theoretical property: the temperature under linear anisotropic diffusion is a submodular function, so a greedy algorithm guarantees at least a constant-factor approximation to the optimal solution for temperature maximization. This result is successfully applied to scalable cosegmentation as well as diversity ranking and single-image segmentation. We evaluate CoSand on the MSRC and ImageNet datasets, and show both its competitive performance relative to previous work and its much superior scalability.
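
The constant-factor guarantee rests on the classic greedy argument for submodular maximization. A generic sketch, with a hypothetical "objective" callable standing in for the diffusion temperature:

    def greedy_select(candidates, objective, k):
        # Because the objective is submodular, repeatedly adding the
        # element with the largest marginal gain achieves a constant
        # fraction of the optimal value for k sources.
        chosen = []
        for _ in range(k):
            best = max((c for c in candidates if c not in chosen),
                       key=lambda c: objective(chosen + [c]))
            chosen.append(best)
        return chosen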

"Presented in partial fulfillment of the CSD speaking skill requirement".

Sep 12
Noise Thresholds for Spectral Clustering Min XuMachine Learning DepartmentCarnegie Mellon University

Abstract

Although spectral clustering has enjoyed considerable empirical success in machine learning, its theoretical properties are not yet fully developed. We analyze the performance of a spectral algorithm for hierarchical clustering and show that on a class of hierarchically structured similarity matrices, this algorithm can tolerate noise that grows with the number of data points while still perfectly recovering the hierarchical clusters with high probability. We additionally improve upon previous results for k-way spectral clustering to derive conditions under which spectral clustering makes no mistakes. Further, using minimax analysis, we derive tight upper and lower bounds for the clustering problem and compare the performance of spectral clustering to these information theoretic limits. We also present experiments on simulated and real world data illustrating the strength of our results.

Joint work with Sivaraman Balakrishnan, Akshay Krishnamurthy, and Aarti Singh
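
For readers unfamiliar with the algorithm being analyzed, here is a minimal sketch of a single spectral split on a similarity matrix, assuming the unnormalized graph Laplacian (the paper's exact variant may differ); applied recursively, such splits yield a hierarchical clustering.

    import numpy as np

    def spectral_split(W):
        # Second-smallest eigenvector (Fiedler vector) of the Laplacian;
        # its sign pattern defines a two-way cut of the items.
        L = np.diag(W.sum(axis=1)) - W
        _, vecs = np.linalg.eigh(L)   # eigenvalues ascending
        return vecs[:, 1] >= 0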

Biography

Min is the opposite of max.

2011 Spring Semester
Apr 25
NSH 3305Adapting to Non-stationarity with Growing Expert Ensembles Cosma ShaliziStatistics DepartmentCarnegie Mellon University

Abstract

When dealing with time series with complex and uncertain non-stationarities, low retrospective regret on individual realizations is in general a more appropriate goal than low prospective risk in expectation. Online learning algorithms provide powerful guarantees of this form and have often been proposed for use with non-stationary processes because of their ability to switch between different forecasters or "experts." However, existing methods assume that this set of experts whose forecasts are to be combined is given at the start and fixed over time, and such assumptions are not generally plausible when dealing with genuinely historical or evolutionary systems. I show how to modify the "fixed shares" algorithm for tracking the best expert to handle a steadily growing set of experts, in which new experts are fitted to new data as they become available, and obtain regret bounds for the growing ensemble. Joint work with Kristina Klinkner, Abigail Jacobs, and Aaron Clauset.
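
A minimal sketch of the two ingredients, assuming per-round expert losses are supplied externally: the standard fixed-share update, plus a step that admits a newly fitted expert with a small initial share. How that entry share is scheduled is where the talk's regret bounds do their work, so treat the constants here as placeholders.

    import numpy as np

    def fixed_share_step(w, losses, eta=1.0, alpha=0.01):
        # Exponential-weights update, then "share" a fraction alpha of
        # the mass uniformly so the ensemble can switch experts.
        w = w * np.exp(-eta * np.asarray(losses))
        w = w / w.sum()
        return (1 - alpha) * w + alpha / len(w)

    def admit_expert(w, entry_share=0.05):
        # Give a newly fitted expert a small prior share of the mass.
        return np.append(w * (1 - entry_share), entry_share)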

Apr 18
NSH 3305Apolo: Making Sense of Large Network Data by Combining Rich User Interaction and Machine LearningPolo ChauMachine Learning DepartmentCarnegie Mellon University

Abstract

Extracting useful knowledge from large network datasets has become a fundamental challenge in many domains, from scientific literature to social networks and the web. We introduce Apolo, a system that uses a mixed-initiative approach --- combining visualization, rich user interaction and machine learning --- to guide the user in incrementally and interactively exploring large network data and making sense of it. Apolo engages the user in bottom-up sensemaking to gradually build up an understanding over time by starting small, rather than starting big and drilling down. Apolo also helps users find relevant information by specifying exemplars, and then using a machine learning method called Belief Propagation to infer which other nodes may be of interest. We evaluated Apolo with twelve participants in a between-subjects study, with the task being to find relevant new papers to update an existing survey paper. As judged by experts, participants using Apolo found significantly more relevant papers. Subjective feedback on Apolo was also very positive.

Apr 11
NSH 3305Inferring Cultural Phenomena in Large-Scale Social Media DataBrendan O'ConnorMachine Learning DepartmentCarnegie Mellon University

Abstract

The rise of social media has yielded tremendous amounts of behavioral and textual data recording people's attitudes and interests in various social contexts. I will present several projects, employing techniques from simple statistics to LDA-style graphical models, that infer cultural phenomena from these data:
(1) Relating opinion polls to Twitter sentiment analysis
(2) Geographic linguistic communities on Twitter
(3) Cultural interest groupings in the Facebook "Like" graph

Apr 5
NSH 3305The group-first approach for predicting customer churn in mobile networksElad Yom-TovYahoo Research

Abstract

Many activities traditionally regarded as based on individual choice are in fact strongly influenced by the social connections of each customer. In my talk I will present a novel method for analyzing social connections called the Group-First approach. This method exploits the structure of customer interactions to identify social leaders, and through this to predict their behavior, as well as that of their peers. I will demonstrate the applicability of this approach to the prediction of customer churn in cellular networks. I will present our results from several carriers, which confirm the unique advantages of our approach. Finally, I will discuss the parallel architecture which allows us to process the large volumes of data required for this analysis.

Biography

Elad Yom-Tov is a Senior Research Scientist at Yahoo Research. Before joining Yahoo in 2010, he was with the Machine Learning group at IBM Research Haifa Lab and Rafael. Dr. Yom-Tov received his B.Sc from Tel-Aviv University and his M.Sc and Ph.D. from the Technion – Israel Institute of Technology. Dr. Yom-Tov has co-authored two books and over 40 publications in top international conferences and journals, and filed over 30 patents (6 of which have been granted so far). His primary research interests are in large-scale Machine Learning, Information Retrieval, and in the past few years, social analysis.

Apr 4
NSH 3305Learning Relevance in a Large-scale Social NetworkRajat RainaFacebook

Abstract

Every month, users spend 700 billion minutes on Facebook. Surfacing personalized relevant content to each user at this scale, and for each of the billions of page views, presents interesting machine learning challenges. As a sample application for matching content to user interests, I will describe a method for improving the relevance of online advertising. The method is based on a user interest model that is trained on past clickthrough data. I will also talk briefly about our work on a recommendation system to suggest new friends. This system is responsible for 40% of all new friend connections made on Facebook.

Mar 28
NSH 3305The Label Complexity of Active LearningSteve HannekeDepartment of StatisticsCarnegie Mellon University

Abstract

This talk describes recent progress on generally characterizing the number of label requests sufficient for active learning to achieve a given accuracy (i.e., the label complexity), in both noisy and noise-free settings. Specifically, we begin by discussing a disagreement-based approach to the design and analysis of active learning algorithms, which has recently gained popularity in the literature. We find that the label complexities achieved by algorithms of this type are typically well-characterized by a simple quantity known as the disagreement coefficient. We then proceed to discuss a newer approach to the design of active learning algorithms, based on shatterable sets, and characterize the label complexities achievable by these methods in terms of a quantity analogous to the disagreement coefficient. In particular, these label complexities are often significantly better than those achievable by disagreement-based methods.

Mar 21
NSH 3305Bayesian inference for identifying behavioural rules in animal swarmsRichard MannMatematiska InstitutionenUppsala Universitet

Abstract

Models of interacting Self-Propelled Particles (SPPs) have proven adept at reproducing realistic-looking simulations of animal swarms and form the basis for our understanding of how collective behaviour emerges from individual interactions. With the continued progression of animal tracking technology it is now possible to consider inferring the optimal interaction model based on empirical data of individual movements within co-moving groups.

Group simulations often result in similar patterns of collective behaviour despite including different fine-scale interactions rules. The emergence of similar collective patterns in simulations of animal groups points to a restricted set of "universal" classes for these patterns. Universality presents a challenge to inferring such interactions from macroscopic group dynamics since these can be consistent with many underlying interaction models. As such, fine-scale movements of the animals must be considered if one is to understand exactly how individuals interact.

I will present a study of using Bayesian model selection on simulated data of animal swarms to distinguish between competing interaction models and demonstrate the limitations posed by accepted standards of experimental design.

Biography

Richard Mann is a postdoc in the Centre for Interdisciplinary Mathematics at Uppsala University, Sweden. He works on applying methods of inference and machine learning to the analysis of data in animal behaviour experiments. He completed his PhD at the University of Oxford, UK in 2010.

Mar 14
NSH 3305The Search For StructureLarry WassermanDepartment of StatisticsCarnegie Mellon University

Abstract

I will discuss two projects, both of which are concerned with finding structure in data. The first is "Graph-Valued Regression", where we estimate an undirected graph for a random vector Y as a function of a second random vector X. (Joint work with Han Liu, Xi Chen, and John Lafferty.)

The second is the problem of "Minimax Estimation of Manifolds From Noisy Data, in the Hausdorff Distance". (Joint work with Chris Genovese, Marco Perone-Pacifico and Isabella Verdinelli.)

Feb 28
NSH 3305Exploiting Sparse Structure by Spectral Connectivity Analysis Ann LeeDepartment of StatisticsCarnegie Mellon University

Abstract

For naturally occurring data, the dimension of the given input space is often very large while the data themselves have a low intrinsic dimensionality. Spectral kernel methods are non-linear techniques for transforming data into a coordinate system that efficiently reveals the underlying structure -- in particular, the "connectivity" -- of the data. In this talk, I will focus on one particular technique -- diffusion maps -- but the analysis can be used for other spectral methods as well. I will give examples of various applications of the method in high-dimensional inference. I will also present a new extension of the diffusion framework to comparing distributions in high-dimensional spaces with an application to content-based image retrieval. (Part of this work is joint with R.R. Coifman, S. Lafon, C. Schafer and L. Wasserman)
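
A toy sketch of the diffusion-map construction, assuming Gaussian affinities with bandwidth eps and a single diffusion scale t (both free parameters in practice):

    import numpy as np

    def diffusion_map(X, eps=1.0, n_coords=2, t=1):
        # Affinity matrix -> row-stochastic diffusion operator.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / eps)
        P = K / K.sum(axis=1, keepdims=True)
        # Spectral coordinates, skipping the trivial constant eigenvector;
        # eigenvalues raised to t encode t steps of diffusion.
        vals, vecs = np.linalg.eig(P)
        idx = np.argsort(-vals.real)[1:n_coords + 1]
        return (vals.real[idx] ** t) * vecs.real[:, idx]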

Feb 21
NSH 3305In-Context: Enabling Environmental Awareness on the Mobile Device Robert FisherMachine Learning DepartmentCarnegie Mellon University

Abstract

As computing hardware becomes more portable and more instrumented with sensors, algorithms to parse this data become ever more important. We introduce a framework for building context awareness into standard smart-phones, which allows us to personalize the behavior of the phone--or other devices communicating remotely with the phone--using observations made about the owner's usage patterns.

In this test-bed application, we utilize data from the GPS, microphone, proximity sensor, and accelerometer, as well as from user interactions, to predict whether the phone's ringer should be set to silent or loud in the current context. The system must cope with noise in the sensor data and inconsistent user feedback, and it must do so using only a minimal amount of battery power.

Initial results from the In-Context system will be presented, and a variety of other applications for this technology will be introduced.

Feb 14
NSH 3305High-Dimensional Structure Learning of Graphical Models: Trees, Latent trees and Beyond Anima AnandkumarElectrical Engineeering & Computer ScienceUniversity of California Irvine

Abstract

Graphical models or Markov random fields provide a graph-based framework for capturing dependencies between the random variables of a large-scale multivariate distribution. This interdisciplinary topic has found widespread application in a variety of areas including image processing, bioinformatics, combinatorial optimization and machine learning. Estimating the graph structure of the model from samples drawn from it is an important task, since the structure reveals important relationships between the variables. However, structure estimation faces several challenges: it is NP-hard for general graphical models; the models are typically in the high-dimensional regime, where the number of variables is much larger than the number of samples; and there may be many latent variables which are unobserved. I will address these challenges in the talk and provide solutions for certain classes of models.

I will focus on latent tree models in the first part of the talk. These are tree graphical models with latent variables, where there is no knowledge of the number or the location of the latent variables. We have developed novel algorithms which are consistent, computationally efficient and have low sample complexity. These algorithms are based on the presence of an additive metric on the tree, due to the properties of correlations on a tree model. The first algorithm uses these properties to check for sibling relationships between node pairs and builds the tree in a recursive fashion. The second algorithm initially builds a tree over the observed variables, and then adds hidden nodes in a step-by-step fashion by operating only on small subsets of variables. This leads to considerable computational savings compared to the first algorithm. For experiments on real data, we modify the second algorithm to trade off the number of added latent variables against the accuracy of the resulting model fit, via the Bayesian Information Criterion (BIC). Experiments on the S&P 100 monthly returns data and on the occurrence of words in newsgroups reveal interesting relationships.

In the second part, I will talk about recent results on learning graphical models on sparse Erdos-Renyi random graphs, which are relevant to social networks. Since these graphs are locally tree-like, it is natural to ask whether structure learning is feasible in these models, given that learning tree models is tractable. We provide a positive answer when the model is in the so-called uniqueness regime, where long-range correlations decay. The algorithm is based on a set of conditional mutual information tests and is shown to be consistent for structure estimation with almost order-optimal sample complexity. A simpler algorithm based on correlation thresholding is also consistent, but under more stringent conditions. Finally, depending on time availability, I will briefly mention related work on consistent estimation of high-dimensional forest distributions and the characterization of extremal tree structures with respect to error rates for structure learning.
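
The simplest of the algorithms mentioned, correlation thresholding, fits in a few lines. A sketch; the consistency conditions in the talk govern how the threshold tau must scale with dimension and sample size:

    import numpy as np

    def correlation_threshold_graph(X, tau):
        # Connect variables i and j when their empirical correlation
        # exceeds tau in absolute value; X holds one sample per row.
        C = np.corrcoef(X, rowvar=False)
        G = np.abs(C) > tau
        np.fill_diagonal(G, False)
        return G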

Biography

Anima Anandkumar received her B.Tech in Electrical Engineering from the Indian Institute of Technology (IIT) Madras in 2004 and her MS and PhD degrees in Electrical Engineering from Cornell University, Ithaca, NY in 2009. She was a post-doctoral researcher in the Stochastic Systems Group at MIT, Cambridge, MA. She has been an assistant professor in the EECS Department at UC Irvine since July 2010. She is the recipient of the 2009 Best Thesis Award from the ACM Sigmetrics Society, the 2008 IEEE Signal Processing Society Young Author Best Paper Award, the 2008 IBM Fran Allen PhD Fellowship, and a student paper award at the 2006 IEEE ICASSP. Her research interests are in the areas of statistical signal processing, network theory and information theory.

Feb 07
Union Support Recovery in Multi-task Learning Mladen Kolar Machine Learning DepartmentCarnegie Mellon University

Abstract

We sharply characterize the performance of different penalization schemes for the problem of selecting the relevant variables in the multi-task setting. Previous work focuses on the regression problem where conditions on the design matrix complicate the analysis. A clearer and simpler picture emerges by studying the Normal means model. This model, often used in the field of statistics, is a simplified model that provides a laboratory for studying complex procedures. These theoretical results will be presented together with implications for practitioners.

Jan 31
Bayesian Nonparametric Methods for Complex Dynamical PhenomenaEmily Fox Department of Statistical ScienceDuke University

Abstract

Markov switching processes, such as hidden Markov models (HMMs) and switching linear dynamical systems (SLDSs), are often used to describe rich classes of dynamical phenomena. They describe complex temporal behavior via repeated returns to a set of simpler models: imagine, for example, a person alternating between walking, running and jumping behaviors, or a stock index switching between regimes of high and low volatility.

Traditional modeling approaches for Markov switching processes typically assume a fixed, pre-specified number of dynamical models. Here, in contrast, we develop Bayesian nonparametric approaches that define priors on an unbounded number of potential Markov models. Using stochastic processes including the beta and Dirichlet process, we develop methods that allow the data to define the complexity of inferred classes of models, while permitting efficient computational algorithms for inference. The new methodology also has generalizations for modeling and discovery of dynamic structure shared by multiple related time series.

Interleaved throughout the talk are results from studies of the NIST speaker diarization database, stochastic volatility of a stock index, the dances of honeybees, and human motion capture videos.

Jan 24
Bayesian Inference with KernelsArthur GrettonGatsby Computational Neuroscience UnitUniversity College London

Abstract

An embedding of probability distributions into a reproducing kernel Hilbert space (RKHS) has been introduced: like the characteristic function, this provides a unique representation of a probability distribution in a high dimensional feature space. This representation forms the basis of an inference procedure on graphical models, where the likelihoods are represented as RKHS functions. The resulting algorithm is completely nonparametric: all aspects of the model are represented implicitly, and learned from a training sample. Both exact inference on trees and loopy belief propagation on pairwise Markov random fields are demonstrated.
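
To make the embedding concrete: the RKHS representation of a sample is its mean feature map, and distances between two embeddings reduce to kernel evaluations. A minimal sketch with a Gaussian kernel; this illustrates the representation itself, not the message-passing algorithm built on top of it.

    import numpy as np

    def mmd2(X, Y, sigma=1.0):
        # Squared RKHS distance between the mean embeddings of samples
        # X and Y (the maximum mean discrepancy).
        def gram(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))
        return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()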

Kernel message passing can be applied to general domains where kernels are defined, handling challenging cases such as discrete variables with huge domains, or very complex, non-Gaussian continuous distributions. We apply kernel message passing and competing approaches to cross-language document retrieval, depth prediction from still images, protein configuration prediction, and paper topic inference from citation networks: these are all large-scale problems, with continuous-valued or structured random variables having complex underlying probability distributions. In all cases, kernel message passing performs outstandingly, being orders of magnitude faster than state-of-the-art nonparametric alternatives, and returning more accurate results.

Work with Danny Bickson, Kenji Fukumizu, Carlos Guestrin, Yucheng Low, Le Song

Jan 10
Efficient Triangle Counting in Large Graphs via Degree Based Vertex PartitioningCharalampos (Babis) TsourakakisDepartment of Mathematical SciencesCarnegie Mellon University

Abstract

The number of triangles is a computationally expensive graph statistic which is frequently used in complex network analysis (e.g., the transitivity ratio), in various random graph models (e.g., the exponential random graph model) and in important real-world applications such as spam detection, uncovering the hidden thematic structure of the Web, and link recommendation. Counting triangles in graphs with millions and billions of edges requires algorithms which run fast, use a small amount of space, provide accurate estimates of the number of triangles and, preferably, are parallelizable.

In this paper we present an efficient triangle counting algorithm which can be adapted to the semistreaming model. The key idea of our algorithm is to combine the sampling algorithm of Tsourakakis et al. with the partitioning of the vertex set into high-degree and low-degree subsets, as in Alon, Yuster and Zwick, treating each set appropriately. We obtain a running time of $O\left( m + \frac{m^{3/2} \Delta \log{n}}{t \epsilon^2} \right)$ and an $\epsilon$ approximation (multiplicative error), where $n$ is the number of vertices, $m$ the number of edges and $\Delta$ the maximum number of triangles an edge is contained in. Furthermore, we show how this algorithm can be adapted to the semistreaming model with space usage $O\left(m^{1/2}\log{n} + \frac{m^{3/2} \Delta \log{n}}{t \epsilon^2} \right)$ and a constant number of passes (three) over the graph stream. We apply our methods to various networks with several millions of edges and obtain excellent results (e.g., for the Orkut graph with ~120M edges and ~286M triangles, our method runs in ~5 sec with 98% accuracy). Finally, we propose a random projection based method for triangle counting and provide a sufficient condition to obtain an estimate with low variance.

Joint work with Mihail Kolountzakis, Gary Miller and Richard Peng
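
A sketch of the sparsify-and-count idea at the heart of the sampling component: keep each edge with probability p, count triangles in the sparsified graph, and rescale by 1/p^3. The paper's full algorithm additionally partitions vertices by degree, which this sketch omits.

    import random

    def estimate_triangles(edges, p=0.1, seed=0):
        rng = random.Random(seed)
        kept = [e for e in edges if rng.random() < p]
        adj = {}
        for u, v in kept:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        # Each surviving triangle is counted once per edge, hence the /3;
        # dividing by p^3 unbiases for the three sampled edges.
        t = sum(len(adj[u] & adj[v]) for u, v in kept)
        return (t / 3) / p ** 3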

Jan 10
Approximate Dynamic Programming Using Halfspace Queries and Multiscale Monge Decomposition and Denoising aCGH dataCharalampos (Babis) TsourakakisDepartment of Mathematical SciencesCarnegie Mellon University

Abstract

Dynamic Programming, since its introduction by Richard Bellman in the 1950s, has been one of the most important problem-solving techniques, with numerous applications in operations research, databases (histogram construction), time series analysis, speech recognition, robotics, biology and many other fields. In this talk we will present two new techniques for performing dynamic programming approximately, for a recurrence not treated efficiently by existing methods. The basis of our first algorithm is the definition of a constant-shifted variant of the objective function that can be efficiently approximated using state-of-the-art methods for range searching. Our technique approximates the optimal value of the objective function within additive $\epsilon$ error and runs in $\tilde{O}(n^{4/3+\delta} \log(\frac{U}{\epsilon}))$ time, where $\delta$ is an arbitrarily small positive constant and $U = \max \{ \sqrt{C}, (P_i)_{i=1,\ldots,n} \}$. Our second algorithm solves a similar recurrence to within a multiplicative factor of $(1+\epsilon)$ and runs in $O(n \log{n} / \epsilon)$ time. The new technique introduced by this algorithm is the decomposition of the initial problem into a small (logarithmic) number of Monge optimization subproblems, which we can speed up using existing techniques. Finally, we demonstrate a biological application of our recurrence, where we obtain results superior to leading competitors on both benchmarks and real data.

Joint work with Richard Peng, David Tolliver, Maria Tsiarli, Stanley Shackney, Gary Miller and Russell Schwartz

2010 Fall Semester
Nov 15
NSH 3305Pegasus: Mining Peta-Scale GraphsU KangComputer Science DepartmentCarnegie Mellon University

Abstract

In this work, we describe PeGaSus, an open source Peta Graph Mining library which performs typical graph mining tasks such as computing the diameter of a graph, computing the radius of each node, finding the connected components, and computing the importance scores of nodes. As the size of graphs reaches several Giga-, Tera- or Peta-bytes, the necessity for such a library grows too. To the best of our knowledge, PeGaSus is the first such library, implemented on top of the Hadoop platform, the open source version of MapReduce.

Many graph mining operations (PageRank, spectral clustering, diameter estimation, connected components etc.) are essentially a repeated matrix-vector multiplication. In this paper we describe a very important primitive for PeGaSus, called GIM-V (Generalized Iterated Matrix-Vector multiplication). GIM-V is highly optimized, achieving (a) good scale-up on the number of available machines, (b) linear running time on the number of edges, and (c) more than 5 times faster performance over the non-optimized version of GIM-V.
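
In pseudocode terms, GIM-V generalizes y = Mx by letting the user supply three operations; different choices recover connected components, PageRank, and so on. A single-machine sketch of one iteration (the real library distributes this over Hadoop):

    def gim_v(rows, v, combine2, combine_all, assign):
        # rows[i] lists the nonzeros (j, m_ij) of row i; one iteration
        # computes v'_i = assign(v_i, combineAll_j(combine2(m_ij, v_j))).
        out = []
        for i, row in enumerate(rows):
            partial = [combine2(m, v[j]) for j, m in row]
            out.append(assign(v[i], combine_all(partial)) if partial else v[i])
        return out

    # Connected components: labels propagate as minima to a fixpoint.
    rows = [[(1, 1)], [(0, 1), (2, 1)], [(1, 1)]]   # adjacency rows (j, m_ij)
    v = [0, 1, 2]
    for _ in range(3):
        v = gim_v(rows, v, lambda m, vj: vj, min, min)
    # v is now [0, 0, 0]: a single component.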

Our experiments ran on M45, one of the top 50 supercomputers in the world. We report our findings on several real graphs, including one of the largest publicly available Web graphs (thanks to Yahoo!), with ~6.7 billion edges.

Nov 8
Identifying graph-structured activation patterns in networksJames SharpnackMachine Learning DepartmentCarnegie Mellon University

Abstract

We consider the problem of identifying an activation pattern in a complex, large-scale network that is embedded in very noisy measurements. This problem is relevant to several applications, such as identifying traces of a biochemical spread by a sensor network, expression levels of genes, and anomalous activity or congestion in the Internet. Extracting such patterns is a challenging task, especially if the network is large (the pattern is very high-dimensional) and the noise is so excessive that it masks the activity at any single node. However, typically there are statistical dependencies in the network activation process that can be leveraged to fuse the measurements of multiple nodes and enable reliable extraction of high-dimensional noisy patterns. In this paper, we analyze an estimator based on the graph Laplacian eigenbasis, and establish the limits of mean square error recovery of noisy patterns arising from a probabilistic (Gaussian or Ising) model based on an arbitrary graph structure. We consider both deterministic and probabilistic network evolution models, and our results indicate that by leveraging the network interaction structure, it is possible to consistently recover high-dimensional patterns even when the noise variance increases with network size.
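
A toy version of the kind of Laplacian-eigenbasis estimator analyzed here: project the noisy measurements onto the smoothest eigenvectors of the graph Laplacian, so patterns aligned with the graph structure survive while i.i.d. noise is averaged out. The choice of the subspace dimension k is where the recovery limits bite; this sketch treats it as given.

    import numpy as np

    def laplacian_denoise(W, y, k=5):
        # W: symmetric adjacency/similarity matrix; y: noisy signal.
        L = np.diag(W.sum(axis=1)) - W
        _, U = np.linalg.eigh(L)        # eigenvectors, ascending
        Uk = U[:, :k]                   # low-frequency subspace
        return Uk @ (Uk.T @ y)          # orthogonal projection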

Biography

I am currently a Ph.D. student in Machine Learning and Statistics at CMU. I have a masters in Statistics and a bachelors in Math and Physics from the Ohio State University. I have worked on exploiting structure for statistical estimation and pattern localization. Currently, I am working on optimal design and active learning for Ising and infection models, structured sparsity, and density estimation over large graphs.

Oct 25
Generalisation and Sparsity in Latent Variable ModelsShakir MohamedDepartment of EngineeringUniversity of Cambridge

Abstract

Latent variable models are an essential tool for the analysis of data in numerous application areas, whether it be source separation, image analysis, matrix completion or preference elicitation, since they provide a means of understanding the inherent structure in data. This talk will, in two parts, examine the natural evolution of latent variable models and the statistical tools - Bayesian tools in particular - used for learning with these models.

In the first part, the motivation for generalising such models to the exponential family will be given, which will allow for the modelling of data that may be binary, categorical, counts or non-negative. The focus will be on Bayesian analysis of this class of models, showing how inference is performed using Hybrid Monte Carlo sampling, the relationship between the generalised models and various other existing models, and examining aspects of model identifiability.

In the second part, the discussion will switch to the priors that are used to learn latent representations of data, and in the context of the generalised latent variable models just discussed. The focus will be on sparse Bayesian learning: weak sparsity achieved through priors based on the Gaussian scale mixture construction; and strong sparsity using discrete mixture priors. A comparison of these two methods will be given and an efficient sampler for learning with discrete mixture priors will be described, which has many advantages over a corresponding optimisation approach to learning.

Biography

Shakir Mohamed is a PhD Candidate in the Machine Learning Group at the University of Cambridge working under the supervision of Prof. Zoubin Ghahramani. At Cambridge, he is a Commonwealth scholar and a member of St John's College. His research interests lie in Bayesian statistics and latent variable models, sparse Bayesian learning and its connections to compressed sensing and learning in infinite dimensional settings, and the probabilistic modelling of tensor data, amongst others.

Oct 18
Exploiting Independence and Its Generalizations for Reasoning about Permutation DataJonathan HuangRobotics InstituteCarnegie Mellon University

Abstract

Permutations are ubiquitous in many real world problems, such as voting, rankings and data association. Representing uncertainty over permutations is challenging, however, since there are $n!$ possibilities. A pervasive technique in machine learning for making large problems tractable is to exploit probabilistic independence structures for decomposing large problems into much smaller ones. However, it is not obvious how one might exploit independence for permutation data due to the mutual exclusivity constraints which disallow, for example, two items to map to the same rank.

I will talk about recent progress on tracking and ranking problems. First, I will discuss probabilistically tracking multiple moving targets by decomposing large sets of targets into smaller independent subsets. Via independence decompositions, we are able to track much larger collections of targets. Unlike multi-object tracking, distributions over rankings are typically not amenable to independence assumptions. Instead, I will present a novel generalization of independence, called "riffled independence", encompassing a more expressive family of distributions while retaining many of the properties necessary for performing efficient inference and reducing sample complexity. In riffled independence, one draws two permutations independently, then performs the "riffle shuffle", common in card games, to combine the two permutations to form a single permutation. In ranking, riffled independence corresponds to ranking disjoint sets of objects independently, then interleaving those rankings.

This is joint work with Carlos Guestrin, Leo Guibas, Xiaoye Jiang and Ashish Kapoor
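
A small sketch of drawing from a riffled-independent model: rank two item sets independently (here given), then interleave them with the uniform riffle shuffle, which preserves the order within each set.

    import random

    def riffle(a, b, seed=None):
        # At each step, take the next item from a or b with probability
        # proportional to how many items each has left; this yields a
        # uniformly random interleaving preserving both internal orders.
        rng = random.Random(seed)
        a, b, out = list(a), list(b), []
        while a or b:
            if rng.random() < len(a) / (len(a) + len(b)):
                out.append(a.pop(0))
            else:
                out.append(b.pop(0))
        return out

    # e.g. interleave a ranking of fruits with a ranking of vegetables:
    print(riffle(['apple', 'pear'], ['kale', 'leek'], seed=1))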

Oct 11
Virus Propagation on Time-Varying Networks: Theory and Immunization AlgorithmsAditya PrakashComputer Science DepartmentCarnegie Mellon University

Abstract

Given a contact network that changes over time (say, day vs. night connectivity), and the SIS (susceptible/infected/susceptible, flu-like) virus propagation model, what can we say about its epidemic threshold? That is, can we determine when a small infection will "take off" and create an epidemic? Consequently, which nodes should we immunize to prevent an epidemic? This is a very real problem, since, e.g., people have different connections during the day at work and during the night at home. Static graphs have been studied for a long time, with numerous analytical results. Time-evolving networks are so hard to analyze that most existing works are simulation studies. Specifically, our contributions in this paper are: (a) we formulate the problem by approximating it with a Non-linear Dynamical System (NLDS), (b) we derive the first closed formula for the epidemic threshold of time-varying graphs under the SIS model, and finally (c) we show the usefulness of our threshold by presenting efficient heuristics and evaluating the effectiveness of our methods on synthetic and real data such as the MIT Reality Mining graphs.

Joint work with Hanghang Tong, Nicholas Valler, Michalis Faloutsos and Christos Faloutsos
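
A sketch of the kind of threshold check such a closed formula enables, assuming the network cycles through a fixed sequence of adjacency matrices (e.g. day and night graphs) under SIS with infection rate beta and recovery rate delta; see the paper for the precise statement.

    import numpy as np

    def epidemic_takes_off(adjacencies, beta, delta):
        # Product of per-phase system matrices S_i = (1 - delta) I + beta A_i;
        # the infection dies out when the spectral radius of the product
        # stays below 1.
        n = adjacencies[0].shape[0]
        S = np.eye(n)
        for A in adjacencies:
            S = S @ ((1 - delta) * np.eye(n) + beta * A)
        return np.max(np.abs(np.linalg.eigvals(S))) >= 1.0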

Oct 11
Discriminative Topic Modeling based on Manifold LearningSeung-il HuhRobotics InstituteCarnegie Mellon University

Abstract

Topic modeling has been popularly used for data analysis in various domains including text documents. Previous topic models, such as probabilistic Latent Semantic Analysis (pLSA) and Latent Dirichlet Allocation (LDA), have shown impressive success in discovering low-rank hidden structures for modeling text documents. These models, however, do not take into account the manifold structure of data, which is generally informative for the non-linear dimensionality reduction mapping. More recent models, namely Laplacian PLSI (LapPLSI) and Locally-consistent Topic Model (LTM), have incorporated the local manifold structure into topic models and have shown the resulting benefits. But these approaches fall short of the full discriminating power of manifold learning as they only enhance the proximity between the low-rank representations of neighboring pairs without any consideration for non-neighboring pairs. In this paper, we propose Discriminative Topic Model (DTM) that separates non-neighboring pairs from each other in addition to bringing neighboring pairs closer together, thereby preserving the global manifold structure as well as improving the local consistency. We also present a novel model fitting algorithm based on the generalized EM and the concept of Pareto improvement. As a result, DTM achieves higher classification performance in a semi-supervised setting by effectively exposing the manifold structure of data. We provide empirical evidence on text corpora to demonstrate the success of DTM in terms of classification accuracy and robustness to parameters compared to state-of-the-art techniques.

Oct 4
GraphLab: A New Framework for Parallel Machine LearningJoseph GonzalezMachine Learning DepartmentCarnegie Mellon University

Abstract

High-level parallel frameworks like MapReduce (Hadoop) have begun to receive considerable attention within the machine learning (ML) community. However, while MapReduce is ideal for many large-scale data processing tasks, it does not naturally or efficiently express asynchronous iterative computation with sparse dependencies. Unfortunately, many popular machine learning algorithms like belief propagation, Gibbs sampling, CoEM, and the lasso (shooting algorithm) require asynchronous iterative computation and impose sparse parameter dependencies. To fill this critical void, we developed GraphLab which naturally expresses asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. As part of our recent UAI'10 publication, we released a shared-memory (multicore) implementation of the GraphLab API and are in the process of developing cluster and GPU versions.

In this talk I will briefly review the MapReduce abstraction and demonstrate how coercing popular iterative machine learning algorithms into MapReduce can lead to highly inefficient parallel algorithms. I will then introduce the GraphLab framework and explain how it addresses the critical limitations of the MapReduce framework while retaining the advantages of a high-level abstraction. I will show how the GraphLab abstraction can be used to represent efficient, provably correct versions of several popular sequential machine learning algorithms. Finally, I will present scaling results from our UAI'10 paper, which describes our initial shared-memory API (currently available for download). I will conclude by briefly discussing some of our ongoing work on a cluster (distributed-memory) API.

This is Joint work with: Yucheng Low, Aapo Kyrola, Kannat Tangwonsan, Danny Bickson, Carlos Guestrin, Guy Blelloch, David O'Hallaron, Joseph M. Hellerstein

Sept 27
An Interactive Learning Approach to Optimizing Information Retrieval SystemsYisong YueiLabMachine Learning DepartmentCarnegie Mellon University

Abstract

In this talk, I will describe an interactive approach for learning ranking functions. In particular, this approach leverages clickthrough data collected from online interleaving experiments.

Interleaving experiments are an effective methodology for eliciting reliable implicit feedback. For any query, the rankings computed by two retrieval functions are interleaved together and then presented to the user. Afterward, clicks can be interpreted as a relative preference of one retrieval function over the other.
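
One standard instantiation is team-draft interleaving; the abstract does not commit to a specific variant, so treat this as an illustrative sketch, assuming both rankers rank the same document pool of at least k items.

    import random

    def team_draft(rank_a, rank_b, k=4, seed=None):
        # The two rankers alternately contribute their best not-yet-shown
        # document; a coin flip decides who picks first each round.
        rng = random.Random(seed)
        shown, team = [], {}
        while len(shown) < k:
            for side in rng.sample(['a', 'b'], 2):
                if len(shown) == k:
                    break
                src = rank_a if side == 'a' else rank_b
                doc = next(d for d in src if d not in team)
                team[doc] = side
                shown.append(doc)
        return shown, team

    def preferred(clicks, team):
        # Clicks credit the ranker that contributed the clicked document.
        a = sum(team[d] == 'a' for d in clicks)
        b = sum(team[d] == 'b' for d in clicks)
        return 'a' if a > b else 'b' if b > a else 'tie'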

I will present a novel online learning framework, called the Dueling Bandits Problem, that characterizes the exploration/exploitation trade-off of running online interleaving experiments (since they might lower current retrieval quality). For example, intuitively, we should quickly stop running experiments using low quality retrieval functions once we have discovered that they are of low quality. The Dueling Bandits Problem differs from existing online learning frameworks in that only relative information (e.g., is A better than B?) is assumed to be available to the learning algorithm.

I will also describe a learning approach to improve the statistical power of interleaving experiments. This approach is motivated by the intuition that not all clicks are equally informative. By learning a weighting function on clicks collected from interleaving experiments, we can arrive at an improved statistical test that will more efficiently tease apart the relative quality between pairs of retrieval functions.

This is joint work with Thorsten Joachims, Bobby Kleinberg, Josef Broder, Yue Gao, Olivier Chapelle and Ya Zhang.

Sept 20
NSH 3305On Sparse Nonparametric Conditional Covariance SelectionAnkur ParikhMachine Learning DepartmentCarnegie Mellon University

Abstract

We develop a penalized kernel smoothing method for the problem of selecting non-zero elements of the conditional precision matrix, known as conditional covariance selection. This problem has a key role in many modern applications such as finance and computational biology. However, it has not been properly addressed. Our estimator is derived under minimal assumptions on the underlying probability distribution and works well in the high-dimensional setting. The efficiency of the algorithm is demonstrated on both simulation studies and the analysis of the stock market.

Sept 20
NSH 3305Focused Belief Propagation for Query-Specific InferenceAnton ChechetkaRobotics InstituteCarnegie Mellon University

Abstract

With the increasing popularity of large-scale probabilistic graphical models, even "lightweight" approximate inference methods are becoming infeasible. Fortunately, large parts of the model are often of no immediate interest to the end user. Given the variables that the user actually cares about, we show how to quantify edge importance in graphical models and to significantly speed up inference by focusing computation on important parts of the model. Our algorithm empirically demonstrates a convergence speedup of multiple times over the state of the art.

Sept 13
Hilbert Space Embeddings of Hidden Markov Models Le Song Lane Center and Machine Learning DepartmentCarnegie Mellon University

Abstract

Hidden Markov Models (HMMs) are important tools for modeling sequence data. However, they are restricted to discrete latent states, and are largely restricted to Gaussian and discrete observations. Moreover, learning algorithms for HMMs have predominantly relied on local search heuristics, with the exception of spectral methods such as those described below. We propose a nonparametric HMM that extends traditional HMMs to structured and non-Gaussian continuous distributions. Furthermore, we derive a local-minimum-free kernel spectral algorithm for learning these HMMs. We apply our method to robot vision data, slot car inertial sensor data and audio event classification data, and show that in these applications, embedded HMMs exceed the previous state-of-the-art performance.

Sept 13
Predictive Latent Subspace Learning: a large-margin approach Jun Zhu Machine Learning DepartmentCarnegie Mellon University

Abstract

One potentially effective way to deal with a large collection of data samples is to discover a subspace representation (usually much lower dimensional and latent), which can be used for further knowledge discovery or data management tasks. To automatically discover a low-dimensional representation, many methods have been developed, such as the classic PCA, CCA, and probabilistic topic models (e.g., latent Dirichlet allocation, or LDA). One advocated advantage of such models is that they do not require supervision during training, which is arguably preferable to supervised learning that would necessitate extra cost. But with the increasing availability of free online information such as image tags, user ratings, etc., various forms of side information that can potentially offer "free" supervision have led to a need for new models and training schemes that can make effective use of such information to achieve better results, such as more discriminative topic representations of image contents, and more accurate image classifiers. The standard LDA and many other models are unsupervised and ignore this commonly available supervision information, and thus can discover a representation that is sub-optimal for prediction tasks. Extensions to supervised models which explore side information for discovering predictive subspace representations have been proposed, but their training is typically performed with maximum likelihood estimation, which may not yield conclusive results or may result in an unbalanced prediction rule.

Our goal is to investigate how the arguably more discriminative maximum margin principle can be effectively applied to discover predictive subspace representations from a large collection of data, which can be bag-of-words text documents or images with multiple types of features. We aim to develop a generic learning framework for discovering predictive latent subspace representations and provide several useful tools for helping users manage the huge amount of online content. In this talk, I will present one recent work on multi-view data analysis.

2010 Spring Semester
Apr 26
Density Ratio Estimation: A New Versatile Tool for Machine Learning Masashi Sugiyama Department of Computer ScienceTokyo Institute of Technology

Abstract

Recently, we developed a new ML framework that allows us to systematically avoid density estimation. The key idea is to directly estimate the ratio of density functions, not densities themselves. Our framework includes various ML tasks such as importance sampling (e.g., covariate shift adaptation, transfer learning, multitask learning), divergence estimation (e.g., two-sample test, outlier detection, change detection in time-series), mutual information estimation (e.g., independence test, independent component analysis, feature selection, sufficient dimension reduction, causal inference), and conditional probability estimation (e.g., probabilistic classification, conditional density estimation).

In this talk, I introduce the density ratio framework, review methods of density ratio estimation, and show various real-world applications including brain-computer interface, speech recognition, image recognition, and robot control.
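
One concrete route into the framework, density-ratio estimation via probabilistic classification (which the abstract lists), fits in a few lines. A sketch using logistic regression; any calibrated classifier would do.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def density_ratio_estimator(X_num, X_den):
        # Label numerator samples 1 and denominator samples 0; by Bayes'
        # rule, p_num(x)/p_den(x) = (P(1|x)/P(0|x)) * (n_den/n_num), so
        # no density is ever estimated directly.
        X = np.vstack([X_num, X_den])
        y = np.r_[np.ones(len(X_num)), np.zeros(len(X_den))]
        clf = LogisticRegression().fit(X, y)
        def ratio(x):
            p = clf.predict_proba(np.atleast_2d(x))[:, 1]
            return (p / (1 - p)) * (len(X_den) / len(X_num))
        return ratio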

Apr 19
NSH 3305 Fast Dynamic Reranking in Large Graphs Purnamrita Sarkar Machine Learning DepartmentCarnegie Mellon University

Abstract

We consider the problem of re-ranking search results by incorporating user feedback. We present a graph theoretic measure for discriminating irrelevant results from relevant results using a few labeled examples provided by the user. The key intuition is that nodes relatively closer (in graph topology) to the relevant nodes than the irrelevant nodes are more likely to be relevant. We present a simple sampling algorithm to evaluate this measure at specific nodes of interest, and an efficient branch and bound algorithm to compute the top k nodes from the entire graph under this measure. On quantifiable prediction tasks the introduced measure outperforms other diffusion-based proximity measures which take only the positive relevance feedback into account. On the Entity-Relation graph built from the authors and papers of the entire DBLP citation corpus (1.4 million nodes and 2.2 million edges) our branch and bound algorithm takes about 1.5 seconds to retrieve the top 10 nodes w.r.t. this measure with 10 labeled nodes.

Apr 12
Distributed Asynchronous Online Learning for Natural Language Processing Kevin Gimpel Language Technologies InstituteCarnegie Mellon University

Abstract

Considerable speed-ups to machine learning problems have been achieved by two developments: distributed computing (either on multi-core or "cloud" architectures) and rapidly converging online learning algorithms. In this talk, we combine these two. Distributed computing has largely been paired with "batch" algorithms like EM and L-BFGS, in which the entire training dataset is processed once per iterative update; our approach makes more frequent online updates asynchronously, either in a pure online or mini-batch setting. Asynchronous updates can introduce error, but the approach has convergence guarantees similar to other online learning algorithms in certain settings, such as the case of online gradient-based optimization for convex objectives. We first consider this setting, and present a series of experiments exploring practical issues for a structured prediction task in natural language processing, named-entity recognition. We also consider settings that are not yet supported by theoretical results. We apply an online version of EM (Cappe and Moulines, 2009) to two unsupervised structured learning tasks: (1) word alignment for machine translation, and (2) unsupervised part-of-speech tagging. For the former we use a model that has a concave log-likelihood function, while the latter fits the more common unsupervised learning scenario with a non-concave objective. In both cases we find significant speed-ups over batch algorithms with no observable problems arising from the use of asynchronous updates. In addition, we present experimental results from running asynchronous mini-batch algorithms on M45, a large cluster running the Hadoop MapReduce framework. We find that, while MapReduce is not an ideal fit for these algorithms, they do converge faster than batch algorithms on the same hardware, and we expect that the MapReduce framework may become more appropriate for asynchronous learning as problem sizes continue to grow.

This is joint work with Dipanjan Das and Noah Smith.
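
A toy sketch of the asynchronous pattern for a convex objective (squared loss), with worker threads applying gradient updates to shared weights without locking; occasional stale or lost updates are exactly the kind of bounded error that the convergence guarantees for convex online optimization tolerate.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def async_sgd(X, y, n_workers=4, eta=0.01, epochs=5):
        w = np.zeros(X.shape[1])
        def work(idx):
            nonlocal w
            for i in idx:
                g = (X[i] @ w - y[i]) * X[i]  # squared-loss gradient
                w = w - eta * g               # racy, unlocked update
        for _ in range(epochs):
            chunks = np.array_split(np.random.permutation(len(X)), n_workers)
            with ThreadPoolExecutor(n_workers) as ex:
                list(ex.map(work, chunks))    # run workers concurrently
        return w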

Apr 5
Genome Rearrangement Analysis on High-Resolution Data Andrew Wei Xu Laboratory for Computational Biology and BioinformaticsSwiss Federal Institute of Technology (EPFL)

Abstract

Genome rearrangement refers to large structural changes in genomes which affect their chromosomal organization. Over the past 15 years, significant progress has been achieved on problems with two genomes. However, problems with three or more genomes, e.g., inferring phylogeny and ancestral genome organizations simultaneously, had not been satisfactorily solved, due to the underlying computational difficulties.

In this talk I present my solutions to these challenging problems. I start with the problem of three genomes, then introduce a new mathematical theory which captures its essential combinatorial structure and whose iterative application quickly finds optimal solutions to most instances. For the general problem with any number of genomes, I develop a method which extends this theory and systematically exploits information from known genomes. The new method gives extremely accurate results within a few minutes on large datasets, which are far beyond the reach of previous methods. Finally, the talk presents results of the new method applied to the UCSC and ENSEMBL high-resolution datasets.

Biography

Andrew Wei Xu received a Bachelor degree in Biophysics (2002, Nanjing University, China) and a Master degree in Physics (2005, McMaster University). He then studied under David Sankoff and received his Ph.D. in mathematics (2008, University of Ottawa), specializing in probability theory and discrete math. He now works with Bernard Moret as a postdoctoral fellow in the School of Computer and Communication Sciences at the Swiss Federal Institute of Technology, Lausanne. His research interests center on computational biology, with emphasis on solving genetic and genomic problems using various quantitative analysis methods, including probability analysis, statistical inference, algorithm design and combinatorial analysis. He is looking forward to developing and applying statistical machine learning methods to solve computational biology problems.

Mar 29
How Routine Learners can Support Family Coordination Scott Davidoff Human-Computer Interaction Institute Carnegie Mellon University

Abstract

In dual-income families, attending to the detail required to make and monitor transportation plans for kids’ activities requires much parental attention. Families rely on routines as a mechanism to support this transportation process. Families face their largest challenges, however, on non-routine days, when the effectiveness of routine resources and behavioral patterns is significantly diminished. When Dad takes over a chore Mom usually manages, for example, the task is significantly more at risk of coordination breakdowns. Especially when making or changing plans, information routine to Mom could easily be unknown to Dad.

In this talk, I will discuss work towards the creation of technologies that can support busy families during such moments. In particular, I will describe the collection of a massive dataset on family coordination, and demonstrate how simple GPS data alone suffices to create intelligent applications that can support family coordination. I will demonstrate how learned models of location can be used to detect when a parent has forgotten to pick up a child, and propose a variety of applications that can capture and reveal important but invisible (to people) aspects of family movement patterns. These applications open a space where Machine Learning and HCI research can collaborate to create novel and valuable services for families.

Mar 22
Thresholded Lasso for High Dimensional Variable Selection and Statistical Estimation Shuheng Zhou Department of Statistics ETH Zurich

Abstract

A line of recent work has demonstrated that sparsity is a powerful technique in signal reconstruction and in statistical estimation.

Given n noisy samples with p dimensions, where n << p, we show that the multi-step thresholding procedure based on the Lasso -- which we call the Thresholded Lasso -- can accurately estimate a sparse vector beta in p-dimensional space in a linear model. We show that under the Restricted Eigenvalue (RE) condition (Bickel-Ritov-Tsybakov 09), it is possible to achieve the L2 loss within a logarithmic factor of the ideal mean square error one would achieve with an oracle while selecting a sufficiently sparse model -- hence achieving "sparse oracle inequalities"; the oracle would supply perfect information about which coordinates are non-zero and which are above the noise level.
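
As a rough illustration of the multi-step flavor of such a procedure (not the paper's exact algorithm or tuning), the sketch below fits the Lasso, thresholds small coefficients to zero, and refits ordinary least squares on the surviving support; the data, penalty level, and threshold are arbitrary placeholders.

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(0)
    n, p, s = 100, 500, 5                      # n << p with an s-sparse truth
    X = rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:s] = 3.0
    y = X @ beta + rng.normal(size=n)

    lasso = Lasso(alpha=0.1, max_iter=10000).fit(X, y)   # step 1: initial Lasso fit
    support = np.abs(lasso.coef_) > 0.5                  # step 2: threshold small coefficients
    refit = LinearRegression().fit(X[:, support], y)     # step 3: refit on the support

    beta_hat = np.zeros(p)
    beta_hat[support] = refit.coef_            # final, bias-reduced sparse estimate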

Biography

Shuheng Zhou received her Ph.D. from Carnegie Mellon University in August 2006, co-advised by Professors Greg Ganger and Bruce Maggs. Her dissertation work focused on combinatorial optimization problems in network routing. She then continued as a postdoctoral fellow at CMU, working with Professors John Lafferty and Larry Wasserman on statistical and machine learning algorithms and theory. She has been a postdoctoral fellow with the Seminar for Statistics in the Department of Mathematics at ETH Zurich since August 2008. She is currently visiting the Department of Statistics at UC Berkeley, hosted by Professor Bin Yu.

Mar 1
Modeling Additive Structure and Detecting Interactions with Additive Groves of Regression Trees Daria Sorokina Auton Lab Carnegie Mellon University

Abstract

A lot of research in machine learning and data mining is concentrated on building predictive models with the best possible performance. In most cases such models act as black boxes: they make good predictions, but do not provide much insight into the decision making process. This is unsatisfactory for domain scientists who also want to answer questions like: What effects do important features have on the response variable? Which features are involved in complex effects and should be studied only together with some other features? How can we visualize and interpret such complex effects? Separate post-processing techniques are needed to answer these questions.

The term statistical interaction is used to describe the presence of non-additive effects among two or more variables in a function. When variables interact, their effects must be modeled and interpreted simultaneously. Thus, detecting statistical interactions can be critical for domain researchers' understanding of the processes they study. In this talk I will describe an approach to interaction detection based on comparing the performance of unrestricted and restricted prediction models, where restricted models are prevented from modeling the interaction in question. I will show that an additive model-based regression ensemble, Additive Groves, can be restricted appropriately for use with this framework, and thus has the right properties for accurately detecting variable interactions.
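
Additive Groves itself is distributed in the author's TreeExtra package (linked below), so the sketch here only illustrates the restricted-versus-unrestricted comparison with an off-the-shelf learner standing in for the ensemble: the restricted model is forced to be additive in the two features under test, and a large performance gap flags an interaction. All data and model choices are invented for illustration.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(2000, 2))
    y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=2000)   # a pure x0-x1 interaction
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # Unrestricted model: may use both features jointly.
    full = GradientBoostingRegressor(random_state=0).fit(Xtr, ytr)

    # Restricted model: additive in x0 and x1 (one single-feature learner each,
    # the second fit on the residual of the first).
    f0 = GradientBoostingRegressor(random_state=0).fit(Xtr[:, [0]], ytr)
    f1 = GradientBoostingRegressor(random_state=0).fit(
        Xtr[:, [1]], ytr - f0.predict(Xtr[:, [0]]))
    restricted = lambda Z: f0.predict(Z[:, [0]]) + f1.predict(Z[:, [1]])

    err_full = np.mean((yte - full.predict(Xte)) ** 2)
    err_restricted = np.mean((yte - restricted(Xte)) ** 2)
    # err_restricted >> err_full here, signaling the x0-x1 interaction.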

Apart from being useful for interaction detection, Additive Groves is a strong predictive model by itself. I will show that it consistently outperforms other state-of-the-art ensembles of trees for regression and yields high performance across a variety of classification problems. In the last part of the talk I will describe several applications of Additive Groves and interaction detection techniques to real data, such as tracking environmental change based on bird abundance, predicting Salmonella risk at meat-processing establishments, and successful participation in recent KDD and ICDM data mining competitions.

All algorithms presented in the talk are implemented as part of an open-source C++ package available at www.cs.cmu.edu/~daria/TreeExtra.htm.

Feb 22
Imitation Learning and Purposeful Prediction Drew Bagnell Assistant Research Professor Robotics Institute Carnegie Mellon University

Abstract

Programming robots is hard. While demonstrating a desired behavior may be easy, designing a system that behaves this way is often difficult, time consuming, and ultimately expensive. Machine learning promises to enable "programming by demonstration" for developing high-performance robotic systems. Unfortunately, many approaches that utilize the classical tools of supervised learning fail to meet the needs of imitation learning.

Perhaps foremost, classical statistics and supervised machine learning exist in a vacuum: predictions made by these algorithms are explicitly assumed to not affect the world in which they operate. I'll discuss the problems that result from ignoring the effect of actions influencing the world, and I'll highlight simple "reduction-based" approaches that, both in theory and in practice, mitigate these problems.
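
As a schematic of the reduction-based idea (the interfaces here are hypothetical stand-ins, not the speaker's code), one family of approaches repeatedly runs the current policy, asks the expert to label the states the policy actually visits, and retrains a supervised learner on everything collected so far, so the learner sees the state distribution its own mistakes induce:

    # All of `env`, `expert_action`, and `SupervisedLearner` are hypothetical
    # interfaces used only to make the control flow concrete.
    def imitation_by_aggregation(env, expert_action, SupervisedLearner, n_iters=10):
        states, actions = [], []
        learner = SupervisedLearner()
        policy = expert_action                      # iteration 1 follows the expert
        for _ in range(n_iters):
            s = env.reset()
            for _ in range(env.horizon):
                states.append(s)
                actions.append(expert_action(s))    # expert labels the visited state
                s = env.step(policy(s))             # but the current policy drives
            learner.fit(states, actions)            # the reduction: plain supervised
            policy = learner.predict                # learning on aggregated data
        return policy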

Additionally, robotic systems are often built atop sophisticated planning algorithms that efficiently reason far into the future; consequently, ignoring these planning algorithms in favor of a pure supervised learning approach often leads to poor and myopic performance. While planners have demonstrated dramatic success in applications ranging from legged locomotion to outdoor unstructured navigation, such algorithms rely on fully specified cost functions that map sensor readings and environment models to a scalar cost. Such cost functions are usually manually designed and programmed. Recently, our group has developed a set of techniques that learn these functions from human demonstration by applying an Inverse Optimal Control (IOC) approach to find a cost function for which planned behavior mimics an expert's demonstration. These approaches shed new light on the intimate connections between probabilistic inference and optimal control.

I'll consider case studies in activity forecasting of drivers and pedestrians as well as the imitation learning of robotic locomotion and rough-terrain navigation. These case studies highlight key challenges of applying the algorithms in practical settings.

Feb 15
"New Tricks for Old Representations": Efficient Reinforcement Learning with Relational Models Thomas Walsh Computer Science Department Rutgers University

Abstract

The exploration-exploitation tradeoff is crucial to reinforcement-learning (RL) agents, and a number of sample-efficiency results guaranteeing near-optimal behavior with a relatively small amount of exploration have been derived for agents in propositional domains. But these algorithms and results do not cover many traditional AI representations that rely on relational state descriptions (like STRIPS rules). In this talk, we consider sample-efficient RL algorithms for a rich class of representations: relational action schemas. These high-level representations, combined with our sample-efficient algorithms, allow us to specify transitions in a general and compact form that is still efficiently learnable under some conditions, and have important implications in a number of real-world domains described using relational models.

Biography

Tom Walsh is a PhD candidate at Rutgers University advised by Dr. Michael Littman. His research interests include reinforcement learning, relational representations, machine learning and data mining. He completed his undergraduate degree and research at UMBC, and he has also had stints at NIST and Siemens Corporate Research.

Feb 8
Nonparametric information estimation using rank-based Euclidean graph optimization methods Barnabas Poczos Department of Computing Science University of Alberta

Abstract

Dependence measures between random variables play a fundamental role in many subtasks of machine learning, such as clustering, feature selection, active learning, image registration and more. One of the best-founded measures of dependence is Renyi's mutual information, a generalization of Shannon's mutual information. The question studied here is how to estimate Renyi's mutual information from a finite sample. We give a new estimator, show its consistency (under some regularity assumptions), and demonstrate that it is both more efficient and robust than its competitors. For illustration we use examples from image registration and independent subspace analysis. The beauty of the new estimator is that it uses only the rank statistics of the sample, and connects copulae, information theory and Euclidean graph optimization techniques.
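
The sketch below shows the two ingredients the abstract names, on toy data: a copula (rank) transform of the sample, which gives invariance to monotone transformations of the marginals, and the length of a Euclidean graph (a minimum spanning tree here) built on the transformed points. The normalizing constants required to turn such a graph length into a calibrated Renyi information estimate are deliberately omitted; this illustrates the ingredients, not the paper's estimator.

    import numpy as np
    from scipy.stats import rankdata
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 0.8 * x + 0.6 * rng.normal(size=500)             # a dependent pair

    # Empirical copula transform: keep only the ranks of each marginal.
    u = np.column_stack([rankdata(x), rankdata(y)]) / 500.0

    # Euclidean graph statistic: total MST length over the rank-transformed
    # points; stronger dependence concentrates the points and shortens the tree.
    mst_length = minimum_spanning_tree(squareform(pdist(u))).sum()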

Biography

Barnabas Poczos received his M.Sc. degree (summa cum laude) in applied mathematics in 2001 from the Eotvos Lorand University, Budapest, Hungary. He won first prize in the Hungarian national scientific student competition in Informatics in 2001. From 2005 to 2007 he was an assistant lecturer at the Eotvos Lorand University, where he received his Ph.D. (summa cum laude) in computing science in 2007. His thesis was about independent subspace analysis, a multidimensional generalization of the independent component analysis problem. He is currently a Postdoctoral Fellow at the Department of Computing Science of the University of Alberta under the supervision of Dr. Csaba Szepesvari. His areas of expertise include machine learning, unsupervised learning, Bayesian methods, active learning, bioinformatics, independent component analysis, and information and entropy estimation.

2009 Fall Semester
Nov 30
Turning Down the Noise in the Blogosphere Gaurav Veda Computer Science Department Carnegie Mellon University

Abstract

In recent years, the blogosphere has experienced a substantial increase in the number of posts published daily, forcing users to cope with information overload. The task of guiding users through this flood of information has thus become critical. To address this issue, we present a principled approach for picking a set of posts that best covers the important stories in the blogosphere.

We define a simple and elegant notion of coverage and formalize it as a submodular optimization problem, for which we can efficiently compute a near-optimal solution. In addition, since people have varied interests, the ideal coverage algorithm should incorporate user preferences in order to tailor the selected posts to individual tastes. We define the problem of learning a personalized coverage function by providing an appropriate user-interaction model and formalizing an online learning framework for this task. We then provide a no-regret algorithm which can quickly learn a user's preferences from limited feedback.

Nov 16
First-Order Mixed Integer Linear Programming Sue Ann Hong Computer Science Department Carnegie Mellon University

Abstract

Mixed integer linear programming (MILP) is a powerful representation often used to formulate decision-making problems under uncertainty. However, it lacks a natural mechanism to reason about objects, groups of objects, and relations. First-order logic (FOL), on the other hand, excels at reasoning about groups of objects, but lacks a natural representation of uncertainty. While representing propositional logic in MILP has been extensively explored, no theory exists yet for fully combining FOL with MILP. We propose a new representation, called first-order programming or FOP, which fully subsumes both FOL and MILP. We establish formal methods for reasoning about first order programs, including a sound and complete lifted inference procedure based on Gomory cuts. Since FOP can offer exponential savings in representation size compared to FOL, and since our inference procedure can directly duplicate any FOL resolution proof, we anticipate that inference in FOP will be more tractable than inference in FOL for corresponding problems.

Nov 9
On the Geometry of Discrete Exponential Families with Application to Exponential Random Graph Models Alessandro Rinaldo Department of Statistics Carnegie Mellon University

Abstract

There has been an explosion of interest in statistical models for analyzing network data, and considerable interest in the class of exponential random graph (ERG) models.

In this talk I will relate the properties of ERG models to the properties of the broader class of discrete exponential families. I will describe a general geometric result about discrete exponential families with polyhedral support. I will show how the properties of these families can be well captured by some fundamental geometric objects of polyhedral form.

I will discuss the relevance of such results to maximum likelihood estimation, both from a theoretical and computational standpoint. I will then apply these results to the analysis of ERG models. By means of a detailed example, I will provide some characterization and a partial explanation of certain pathological features of ERG models known as degeneracy.

Joint work with S.E. Fienberg and Y. Zhou

Nov 2
Logic, probability, and human learning Prof. Charles Kemp Department of Psychology Carnegie Mellon University

Abstract

Different representation languages are useful for capturing different inductive biases. What is the language that best captures human inductive biases? I will present several models and studies which suggest that human knowledge is mentally represented in a logical language, and that human learning can be characterized in terms of probabilistic inference over these logical representations. The applications presented will include category learning, relational learning, and one-shot learning.

Oct 26
Set estimation for statistical inference in brain imaging and active sensing Prof. Aarti Singh Machine Learning Department Carnegie Mellon University

Abstract

Inferring spatially co-located regions of interest is an important problem in several applications, such as identifying activation regions in the brain or contamination regions in environmental monitoring. In this talk, I will present multi-resolution methods for passive and active learning of sets that aggregate data at appropriate resolutions, to achieve optimal bias and variance tradeoffs for set estimation. In the passive setting, we observe some data, such as a noisy fMRI image of the brain, and then extract the regions with statistically significant brain activity. The active setting, on the other hand, involves feedback, where the location of each observation is chosen based on the data observed so far. This can be used for rapid extraction of set estimates, such as a contamination region in environmental monitoring, by designing data-adaptive spatial survey paths for a mobile sensor. I will describe a method that uses information gleaned from coarse surveys to focus sampling around informative regions (boundaries), thus generating successively refined multi-resolution set estimates. I will also discuss some current research directions which aim at efficient extraction of spatially distributed sets of interest by exploiting non-local dependencies in the data.

Oct 19
Applied machine learning in human-computer interaction research Moira Burke HCI Institute Carnegie Mellon University

Abstract

Human-computer interaction researchers use diverse methods—from eye-tracking to ethnography—to understand human activity, and machine learning is growing in popularity as a method within the community. This talk will survey projects from HCI researchers at CMU that apply machine learning, in combination with other techniques, to problems such as adapting interfaces to individuals with motor impairments, predicting routines in dual-income families, classifying controversial Wikipedia articles, and identifying the rhetorical strategies newcomers use in online support groups that elicit responses. Researchers without strong computational backgrounds can face practical challenges as consumers of machine learning tools; this talk will highlight opportunities for tool design and collaboration across communities.

Oct 12
NSH 3305 Bayesian Browsing Model from Petabyte-scale Data Fan Guo Computer Science Department Carnegie Mellon University


Oct 12
NSH 3305 On the Local Optimality of LambdaRank Pinar Donmez Language Technologies Institute Carnegie Mellon University

Abstract

A machine learning approach to learning to rank trains a model to optimize a target evaluation measure with respect to training data. Currently, existing information retrieval measures are impossible to optimize directly except for models with a very small number of parameters. This poses a major challenge: how to optimize IR measures of interest directly? We have shown that LambdaRank, which smoothly approximates the gradient of the target measure, can be adapted to work with four popular IR target evaluation measures using the same underlying gradient construction. It is likely, therefore, that this construction is extendable to other evaluation measures. We empirically show that LambdaRank finds a locally optimal solution for mean NDCG@10, mean NDCG, MAP and MRR with a 99% confidence rate. We also show that the amount of effective training data varies with IR measure and that with a sufficiently large training set size, matching the training optimization measure to the target evaluation measure yields the best accuracy. In this talk, I will first review LambdaRank and then present the local optimality testing and results.
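
To make the gradient construction concrete, here is a toy sketch of LambdaRank-style "lambdas" for (unnormalized) DCG: each pair of documents with different relevance contributes a RankNet-style pairwise term scaled by the change in the target measure if the two documents were swapped. Exact scaling and sign conventions vary by presentation; treat this as illustrative rather than the speaker's implementation.

    import numpy as np

    def dcg_gain(rel, rank):                       # rank is 0-indexed
        return (2.0 ** rel - 1.0) / np.log2(rank + 2.0)

    def lambda_gradients(scores, rels, sigma=1.0):
        scores = np.asarray(scores, dtype=float)
        rels = np.asarray(rels)
        order = np.argsort(-scores)                # current ranking by score
        rank = np.empty(len(scores), dtype=int)
        rank[order] = np.arange(len(scores))
        lam = np.zeros(len(scores))
        for i in range(len(scores)):
            for j in range(len(scores)):
                if rels[i] <= rels[j]:
                    continue                       # only pairs with rel_i > rel_j
                # |change in DCG| if documents i and j swapped positions.
                delta = abs(dcg_gain(rels[i], rank[j]) + dcg_gain(rels[j], rank[i])
                            - dcg_gain(rels[i], rank[i]) - dcg_gain(rels[j], rank[j]))
                g = sigma * delta / (1.0 + np.exp(sigma * (scores[i] - scores[j])))
                lam[i] += g                        # push the more relevant doc up,
                lam[j] -= g                        # the less relevant one down
        return lam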

Oct 5
Graph-Guided Fused Lasso for Sparse Structured-Output Regression Seyoung Kim Machine Learning Department Carnegie Mellon University

Abstract

We consider the problem of learning structured-output regression, where the output consists of multiple response variables related by a graph, and correlated response variables depend on the input variables in a sparse but synergistic fashion rather than independently. Such problems can be found in genetic investigation of causal DNA variations that lead to perturbations of multiple related genes in a module, or in engineering problems, where complex responses must be jointly mapped to causal factors to overcome weak statistical power in independent analysis of individual responses. We propose the graph-guided fused lasso methods, which incorporate couplings among response variables on top of the standard lasso penalty for overall sparsity. Our approach represents the dependency structure among the output variables explicitly as a network, and leverages this network to encode structured regularizations in a multivariate regression model, so that the inputs that jointly influence subgroups of highly correlated outputs can be detected with high sensitivity and specificity. Experimental comparisons with the standard lasso are performed on both simulated and real data, and our results show that there is a significant advantage in detecting the true relevant inputs when we incorporate the correlation pattern in outputs using our proposed methods.
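
A sketch of the resulting objective may help: on top of the squared-error loss and the standard lasso penalty, each edge of the output graph adds a fusion term coupling the coefficient profiles of the two connected outputs. Variable names and the exact form of the fusion term (here using the sign of the edge correlation) are a paraphrase, not necessarily the paper's precise formulation.

    import numpy as np

    def gflasso_objective(B, X, Y, edges, lam, gamma):
        # B: (inputs x outputs) coefficient matrix; edges: (k, l, r_kl) triples
        # for output pairs related in the graph, with correlation weight r_kl.
        fit = 0.5 * np.sum((Y - X @ B) ** 2)          # squared-error loss
        sparsity = lam * np.sum(np.abs(B))            # standard lasso penalty
        fusion = gamma * sum(np.sum(np.abs(B[:, k] - np.sign(r) * B[:, l]))
                             for (k, l, r) in edges)  # graph-guided coupling
        return fit + sparsity + fusion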

Oct 5
RTG: A Recursive Realistic Graph Generator using Random Typing Leman Akoglu Computer Science Department Carnegie Mellon University

Abstract

We propose a new, recursive model to generate realistic graphs, evolving over time. Our model has the following properties: it is (a) flexible, capable of generating the cross product of weighted/unweighted, directed/undirected, uni/bipartite graphs; (b) realistic, giving graphs that obey eleven static and dynamic laws that real graphs follow (we formally prove that for several of the (power) laws and we estimate their exponents as a function of the model parameters); (c) parsimonious, requiring only four parameters; (d) fast, being linear in the number of edges; and (e) simple, intuitively leading to the generation of macroscopic patterns. We empirically show that our model mimics two real-world graphs very well: Blognet (unipartite, undirected, unweighted) with 27K nodes and 125K edges; and Committee-to-Candidate campaign donations (bipartite, directed, weighted) with 23K nodes and 880K edges. We also show how to handle time so that edge/weight additions are bursty and self-similar.
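
The "random typing" idea can be illustrated in a few lines: a typist presses one of a handful of keys or the space bar at random, each completed word names a node, and consecutive words form a (weighted, directed) edge. The real RTG model additionally uses unequal key probabilities and a parameter coupling source and destination words to match all the listed laws; the probabilities below are toy choices.

    import random
    from collections import Counter

    def rtg_toy(n_edges, keys="abc", p_space=0.2, seed=0):
        rng = random.Random(seed)
        def word():
            w = ""
            while True:
                if w and rng.random() < p_space:   # space bar ends the word
                    return w
                w += rng.choice(keys)
        edges = Counter()
        for _ in range(n_edges):
            edges[(word(), word())] += 1           # (source, destination); the
        return edges                               # count acts as edge weight

    graph = rtg_toy(10_000)   # skewed word frequencies give heavy-tailed degrees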

Sept 28
Towards a Principled Theory of Clustering Reza Bosagh Zadeh Language Technologies Institute Carnegie Mellon University

Abstract

Despite the widespread use of clustering, we are only beginning to understand the general unifying principles behind clustering functions. Questions like "Are there any principles governing all clustering paradigms?" and "How should a user choose an appropriate clustering algorithm for a particular task?" are almost completely unanswered by the existing body of clustering literature. We consider an axiomatic approach to the theory of clustering, adopting the framework of Kleinberg. By relaxing one of Kleinberg's clustering axioms, we sidestep his impossibility result and arrive at a consistent set of axioms. We use this set of axioms to put forward a theory of clustering that helps answer the questions above.

Sept 21
NSH 1507 ICML 2009 Conference Review Session Blockwise Coordinate Descent Procedures for the Multi-task Lasso, with Applications to Neural Semantic Basis Discovery Mark Palatucci Robotics Institute Carnegie Mellon University

Abstract

We present a cyclical blockwise coordinate descent algorithm for the multi-task Lasso that efficiently solves problems with thousands of features and tasks. The main result shows that a closed-form Winsorization operator can be obtained for the sup-norm penalized least squares regression. This allows the algorithm to find solutions to very large-scale problems far more efficiently than existing methods. This result complements the pioneering work of Friedman, et al. (2007) for the single-task Lasso. As a case study, we use the multi-task Lasso as a variable selector to discover a semantic basis for predicting human neural activation. The learned solution outperforms the standard basis for this task on the majority of test participants, while requiring far fewer assumptions about cognitive neuroscience. We demonstrate how this learned basis can yield insights into how the brain represents the meanings of words.
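
The flavor of a closed-form clipping operator can be seen in the proximal step for a sup-norm penalty, which Winsorizes a vector at a data-dependent threshold. The derivation below is the standard Moreau-decomposition argument for min_b 0.5*||v - b||^2 + lam*||b||_inf, offered as an illustration rather than the paper's exact operator for the multi-task block update.

    import numpy as np

    def winsorize_prox(v, lam):
        # Solve min_b 0.5*||v - b||^2 + lam*max_i |b_i|: clip v at a common
        # threshold tau chosen so the total mass clipped off equals lam.
        a = np.abs(v)
        if a.sum() <= lam:
            return np.zeros_like(v)            # penalty dominates entirely
        u = np.sort(a)[::-1]                   # magnitudes, descending
        cum = np.cumsum(u)
        for k in range(len(u)):                # find tau with
            tau = (cum[k] - lam) / (k + 1)     #   sum_i max(a_i - tau, 0) = lam
            if k + 1 == len(u) or tau >= u[k + 1]:
                break
        return np.clip(v, -tau, tau)           # the Winsorization step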

Sept 21
NSH 1507 ICML 2009 Conference Review Session Hilbert space embeddings of conditional distributions with applications to dynamical systems Le Song Machine Learning Department Carnegie Mellon University



May 4, 2009 Wean 4623
Click Chain Model in Web Search
Fan Guo
Apr 20, 2009
A Scalable Hierarchical Distributed Language Model
Andriy Mnih
Apr 13, 2009
Discourse Structure from Topic Models in Text and Video
Jacob Eisenstein
Apr 6, 2009
Fourier Theoretic Probabilistic Inference over Permutations
Jonathan Huang
Mar 30, 2009
Uncovering, understanding, and predicting links
Jonathan Chang
Mar 23, 2009 Wean 4615A
Painless function space embeddings of distributions: theory and applications
Arthur Gretton
Mar 20, 2009 Friday, 3pm. Wean 4623. Snacks provided!
Use of Hash in Machine Learning
John Langford
Mar 2, 2009
Machine learning approaches for understanding the genetic basis of complex traits
Su-In Lee
Feb 23, 2009
Nonnegative Matrix Factorization for Clustering and Combinatorial Optimizations
Chris Ding
Feb 16, 2009
Large Scale Scene Matching for Graphics and Vision
James Hays
Jan 26, 2009
Deep componential models for human motion
Graham Taylor

Dec 1, 2008
3-D Point Cloud Classification with Max-Margin Markov Networks
Daniel Munoz
Nov 24, 2008
Some Challenging Machine Learning Problems in Computational Biology: Time-Varying Networks Inference and Sparse Structured Input-Output Learning
Eric Xing
Nov 17, 2008
Rare Category Detection for Spatial Data
Jingrui He
Nov 10, 2008
Differentiable Sparse Coding
David Bradley
Nov 10, 2008
Partially Observed Maximum Entropy Discrimination Markov Networks
Jun Zhu
Nov 3, 2008
Inference Complexity as Learning Bias
Pedro Domingos
Oct 27, 2008
Local Minima Free Parameterized Appearance Models
Minh Hoai Nguyen
Oct 27, 2008
Object Recognition and Segmentation by Association
Tomasz Malisiewicz
Oct 20, 2008
Probabilistic Decision-Making Under Model Uncertainty
Joelle Pineau
Oct 13, 2008 Wean 4623
Activized Learning: Transforming Passive to Active with Improved Label Complexity
Steve Hanneke
Oct 6, 2008 Wean 4623 KDD 2008 Conference Review Session
Weighted Graphs and Disconnected Components: Patterns and a Generator
Mary McGlohon
Oct 6, 2008 Wean 4623 KDD 2008 Conference Review Session
Efficient Parallel Learning of Linear Dynamical Systems on SMPs
Lei Li
Sept 29, 2008 UAI/ACL 2008 Conference Review Session
Feature Selection via Block-Regularized Regression
Seyoung Kim
Sept 29, 2008 UAI/ACL 2008 Conference Review Session
Exploiting document structure and feature hierarchy for semi-supervised domain adaptation
Andrew Arnold
Sept 22, 2008
Kernelized Sorting
Le Song

May 5, 2008
Learning Patterns of the Brain: Machine Learning Challenges of fMRI Analysis
Mark Palatucci
Apr 28, 2008
mStruct: Inference of population structure in light of both genetic admixing and allele mutations
Suyash Shringarpure
Apr 28, 2008
Query-Specific Learning for Graphical Models
Anton Chechetka
Apr 14, 2008
Learning Driving Route Preferences
Brian Ziebart
Apr 7, 2008
Learning Stable Linear Dynamical Systems
Sajid M. Siddiqi
Mar 3, 2008
Probability Distributions on Permutations
Jonathan Huang
Feb 25, 2008
High Dimensional Sparse Regression and Structure Estimation
Shuheng Zhou
Feb 18, 2008
Discovering Cyclic Causal Models by Independent Components Analysis
Gustavo Lacerda
Feb 11, 2008
Overview of New Developments in Boosting
Joseph Bradley
Feb 4, 2008
Relational Learning as Collective Matrix Factorization
Ajit Singh
Jan 21, 2008
Structured Prediction: Maximum Margin Techniques
Nathan Ratliff

Conferences

Lists of conferences
David Aha's ML/CBR Conference Announcements
Neural Network, Vision, And Speech Conferences
IEEE Conference Database
ICML -- International Conference on Machine Learning
KDD -- International Conference on Knowledge Discovery and Data Mining
COLT -- Conference on Computational Learning Theory
NIPS -- Neural Information Processing Systems
AAAI -- National Conference on Artificial Intelligence
IJCAI -- International Joint Conference on Artificial Intelligence
UAI -- Conference on Uncertainty in Artificial Intelligence
ILP -- International Conference on Inductive Logic Programming

Resources

General Machine Learning Resources
Reinforcement Learning Resources
Support Vector Machine Resources
Robot Learning Resources