SSS Abstracts
Spring 2015

Quadratic Encoding for Hand Pose Reconstruction from Multi-Touch Input

Monday, February 23rd, 2015 from 12-1 pm in GHC 6501.

Presented by Se-Joon Chung, CSD

Virtual reality is gaining traction with the advent of consumer-level immersive display devices such as the Oculus Rift. One of the most compelling challenges in virtual reality is allowing users to carry out manipulation tasks with their hands. Multi-touch devices are an interesting interface for this task: they are widely available, they give users some haptic sensation of their motions, and they report fingertip locations very precisely. In this talk, I will introduce a quadratic encoding technique that provides plausible, smooth hand reconstructions from multi-touch input at real-time rates suitable for virtual reality applications. Another nice feature of this data-driven approach is that it does not require explicit identification or registration of fingers. I will show that quadratic encoding outperforms linear encoding, cubic encoding, and a PCA-based inverse kinematics approach, and is well suited for performing real-time virtual manipulation using a multi-touch device.
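
As a rough illustration of the flavor of such a data-driven encoding (not the talk's actual model), the sketch below maps flattened 2-D fingertip coordinates through a quadratic feature expansion and a regularized least-squares fit to hand-pose parameters; the array shapes, the ridge term, and the function names are assumptions.

    import numpy as np

    def quadratic_features(touches):
        """Expand flattened touch coordinates x into [1, x, all pairwise products x_i * x_j]."""
        x = np.asarray(touches, dtype=float).ravel()
        quad = np.outer(x, x)[np.triu_indices(x.size)]   # x_i * x_j for i <= j
        return np.concatenate(([1.0], x, quad))

    def fit_pose_model(touch_samples, pose_samples, reg=1e-3):
        """Regularized least-squares fit from encoded touches to pose parameters."""
        Phi = np.stack([quadratic_features(t) for t in touch_samples])   # N x D features
        Y = np.asarray(pose_samples, dtype=float)                        # N x P joint parameters
        return np.linalg.solve(Phi.T @ Phi + reg * np.eye(Phi.shape[1]), Phi.T @ Y)

    def reconstruct_pose(W, touches):
        """Reconstruction is one feature expansion plus one matrix product."""
        return quadratic_features(touches) @ W

Because reconstruction reduces to a single feature expansion and matrix multiply, this kind of encoding is cheap enough to run at the real-time rates the abstract targets.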

In Partial Fulfillment of the Speaking Requirement.


Source Addresses: Can't Live with Them, Can't Live without Them

Monday, March 16th, 2015 from 12-1 pm in GHC 6501.

Presented by David Naylor, CSD

Ask someone in the networking community, "What is a source address?", and they will probably shrug because the answer seems obvious. The question, however, is by no means trivial: a closer inspection reveals that source addresses are severely overloaded, and that this overloading causes problems.

In this talk, we (1) identify five distinct roles currently played by a single source address, (2) describe some problems created by this ambiguity, and (3) present the design and evaluation of a network architecture that solves one of these problems: the tussle between accountability and privacy.

In Partial Fulfillment of the Speaking Requirement.


FutureMatch: Combining Human Value Judgments and Machine Learning to Match in Dynamic Environments

Friday, March 20th, 2015 from 12-1 pm in GHC 6501.

Presented by John Dickerson, CSD

The preferred treatment for kidney failure is a transplant; however, demand for donor kidneys far outstrips supply. Kidney exchange, an innovation where willing but incompatible patient-donor pairs can exchange organs---via barter cycles and altruist-initiated chains---provides a life-saving alternative. Fielded exchanges typically act myopically, considering only the current pool of pairs when planning cycles and chains. Yet kidney exchange is inherently dynamic, with participants arriving and departing, and many planned exchange transplants never go to surgery due to various failures. It is therefore important to consider the future when matching.

Motivated by our experience running the computational side of a large nationwide kidney exchange, we present FutureMatch, a framework for learning to match in a general dynamic model. FutureMatch takes as input a high-level objective (e.g., "maximize graft survival of transplants over time") decided on by experts, then automatically (i) learns based on data how to make this objective concrete and (ii) learns the "means" to accomplish this goal---a task, in our experience, that humans handle poorly. It uses data from all live kidney transplants in the US since 1987 to learn the quality of each possible match; it then learns the potentials of elements of the current input graph offline (e.g., potentials of pairs based on features such as donor and patient blood types), translates these to weights, and performs a computationally feasible batch matching that incorporates dynamic, failure-aware considerations through the weights.
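
As a loose illustration of the "translate learned scores into weights, then batch-match" step only: the sketch below restricts exchanges to 2-cycles so that an off-the-shelf maximum-weight matching suffices, whereas real fielded exchanges also use longer cycles and altruist-initiated chains and are solved with integer programming. The quality and potential callables stand in for the learned models and are assumptions.

    import networkx as nx

    def edge_weight(pair_u, pair_v, quality, potential):
        # Learned quality of both directed transplants in the 2-cycle, adjusted by
        # the learned potentials of the two vertices (all hypothetical callables).
        return (quality(pair_u, pair_v) + quality(pair_v, pair_u)
                + potential(pair_u) + potential(pair_v))

    def batch_match(pairs, compatible, quality, potential):
        """pairs: patient-donor pairs; compatible(u, v): donor of u fits patient of v."""
        G = nx.Graph()
        G.add_nodes_from(range(len(pairs)))
        for i, u in enumerate(pairs):
            for j, v in enumerate(pairs):
                if i < j and compatible(u, v) and compatible(v, u):
                    G.add_edge(i, j, weight=edge_weight(u, v, quality, potential))
        return nx.max_weight_matching(G)   # set of matched index pairs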

We validate FutureMatch on real fielded exchange data; it achieves higher values of the objective. Furthermore, even under economically inefficient objectives that enforce equity, it yields better solutions for the efficient objective (which does not incorporate equity) than traditional myopic matching using the efficiency objective.

In Partial Fulfillment of the Speaking Requirement.


KB-LDA: Jointly Learning a Knowledge Base of Hierarchy, Relations, and Facts

Monday, March 30th, 2015 from 12-1 pm in GHC 6501.

Presented by Dana Movshovitz-Attias, CSD

Knowledge bases (KBs) are large repositories of structured knowledge. In many existing KBs, data is organized according to a manually defined schema, including a hierarchy of categories of interest (e.g., "People", "Animals") and relations between the categories (e.g., <"People", "raise", "Animals">). The schema is given as an input to the system, which then extracts facts that match the structure (e.g., is_a("dog", "Animals")) from a collection of unstructured text documents. These structured repositories are useful for tasks that require high-level semantic reasoning over text, such as search and question answering, and they are widely used by all major search engines.

In this talk, we present a model that automatically learns a structure of categories and relations from an input collection of text documents, and also identifies facts that match the learned structure. Our approach is based on topic models, a learning framework through which we jointly model text and examples extracted from the text, using patterns that indicate hierarchy and relations between entities mentioned in the documents.

As a case study, we apply the model to Web documents from a specialized domain; we construct a knowledge base of software concepts, such as programming languages, version control systems, and databases.

In Partial Fulfillment of the Speaking Requirement.


Efficient and Effective Brain Activity Mining and Modeling

Friday, April 3rd, 2015 from 12-1 pm in GHC 6501.

Presented by Vagelis Papalexakis, CSD

How do different parts of your brain get activated when you see a word like "apple"? Are there patterns in the regions of the brain that "light up" on an fMRI scan for semantically similar concepts? Can we identify these patterns automatically, in a scalable and unsupervised way?

On a related note, how do different parts of your brain communicate with each other when you perform a task, such as reading? This is what we call the functional connectivity of the brain, and identifying it has implications for neuroscience and for both human and machine learning.

In this talk, we discuss efficient and scalable computational tools that are able to model the human brain, discover patterns and semantically similar brain regions, identify the human brain functional connectivity, and provide insights and discoveries that correspond to neuroscientific ground truth.

The presented work has appeared in SDM 2014 and KDD 2014.

In Partial Fulfillment of the Speaking Requirement.


Incorporating Word Correlation Knowledge into Topic Modeling

Monday, April 6th, 2015 from 12-1 pm in GHC 6501.

Presented by Pengtao Xie, LTI

In this talk, I will introduce how to incorporate external word-correlation knowledge to improve the coherence of topic modeling. Existing topic models assume that words are generated independently and lack a mechanism to exploit the rich similarity relationships among words to learn coherent topics. To solve this problem, we build a Markov Random Field (MRF) regularized Latent Dirichlet Allocation (LDA) model, which defines an MRF on the latent topic layer of LDA to encourage words labeled as similar to share the same topic label. Under our model, the topic assignment of each word is not independent, but is instead affected by the topic labels of its correlated words. Similar words have a better chance of being placed in the same topic due to the MRF regularization, so the coherence of topics is boosted. In addition, our model accommodates the subtlety that whether two words are similar depends on the topic in which they appear, which allows words with multiple senses to be placed into different topics appropriately. We derive a variational inference method to infer the posterior probabilities and learn model parameters, and we present techniques to deal with the hard-to-compute partition function of the MRF.
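
As a toy illustration of the regularization idea only: the sketch below biases a word's topic assignment toward the topics of its correlated neighbors in a Gibbs-sampling-style update, whereas the model in the talk is fit with variational inference; the lambda weight, the count layout, and the sampling form are assumptions.

    import numpy as np

    def topic_scores(word, doc_topic_counts, topic_word_counts, neighbor_topics,
                     alpha=0.1, beta=0.01, lam=1.0):
        """Unnormalized probability of each topic for `word`, given its correlated neighbors."""
        n_topics, vocab = topic_word_counts.shape
        lda_term = ((doc_topic_counts + alpha) *
                    (topic_word_counts[:, word] + beta) /
                    (topic_word_counts.sum(axis=1) + vocab * beta))
        # MRF term: reward topics already used by words correlated with this one.
        agreement = np.bincount(np.asarray(neighbor_topics, dtype=int), minlength=n_topics)
        return lda_term * np.exp(lam * agreement)

    def sample_topic(word, doc_topic_counts, topic_word_counts, neighbor_topics, rng):
        p = topic_scores(word, doc_topic_counts, topic_word_counts, neighbor_topics)
        return rng.choice(len(p), p=p / p.sum())

Setting lam to zero recovers an ordinary LDA-style update, which makes the effect of the word-correlation term easy to see.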

In Partial Fulfillment of the Speaking Requirement.


ACCAMS: Additive Co-Clustering to Approximate Matrices Succinctly

Wednesday, April 8th, 2015 from 12-1 pm in GHC 8102.

Presented by Alex Beutel, CSD

Matrix completion and approximation are popular tools to capture a user's preferences for recommendation and to approximate missing data. Instead of using low-rank factorization we take a drastically different approach, based on the simple insight that an additive model of co-clusterings allows one to approximate matrices efficiently. This allows us to build a concise model that, per bit of model learned, significantly beats all factorization approaches to matrix approximation. Even more surprisingly, we find that summing over small co-clusterings is more effective in modeling matrices than classic co-clustering, which uses just one large partitioning of the matrix.

Occam's razor suggests that the simple structure induced by our model better captures the latent preferences and decision-making processes present in the real world than classic co-clustering or matrix factorization does. We provide an iterative minimization algorithm, a collapsed Gibbs sampler, theoretical guarantees for matrix approximation, and excellent empirical evidence for the efficacy of our approach. We achieve state-of-the-art results on the Netflix problem with a fraction of the model complexity.
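
A minimal sketch of the additive structure, not of ACCAMS itself (which uses a collapsed Gibbs sampler and careful model-size accounting): repeatedly co-cluster the current residual into a block-constant "stencil" with off-the-shelf k-means and add the stencils up. The k-means fitting, cluster counts, and function names are assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def fit_stencil(R, k):
        """One co-clustering of residual R: k row clusters x k column clusters of block means."""
        rows = KMeans(n_clusters=k, n_init=10).fit_predict(R)
        cols = KMeans(n_clusters=k, n_init=10).fit_predict(R.T)
        S = np.zeros_like(R)
        for r in range(k):
            for c in range(k):
                block = np.ix_(rows == r, cols == c)
                if R[block].size:
                    S[block] = R[block].mean()
        return S

    def additive_coclustering(X, n_stencils=3, k=5):
        """Approximate X as a sum of small block-constant stencils."""
        X = np.asarray(X, dtype=float)
        approx = np.zeros_like(X)
        for _ in range(n_stencils):
            approx += fit_stencil(X - approx, k)
        return approx

Each stencil needs only a row labeling, a column labeling, and k x k block values, which is the sense in which a sum of small co-clusterings can be concise per bit of model learned.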

In Partial Fulfillment of the Speaking Requirement.


Practical, Real-time Centralized Control for Live Video Delivery

Monday, April 13th, 2015 from 11:45 am-12:45 pm in GHC 6501.

Presented by Matt Mukerjee, CSD

Live video delivery is expected to reach a record high of 50 Tbps this year. This surging popularity is already straining the content delivery networks (CDNs) tasked with serving it, as widely reported in the news. CDNs must meet users' demands for fast startup (join times), high quality (bitrate), and minimal buffering (low buffering ratios), while minimizing their own delivery cost and responding to issues in real time. Latency across the Internet, loss, and failures, as well as varied workloads (from "mega-events" like the World Cup to user-generated streams with long-tail popularity), make meeting these demands challenging.

Previous analysis of a large collection of video sessions concluded that a centralized video control algorithm could in theory improve user experience, but traditional CDN systems have shied away from centralized designs because the latency of centralized decision making makes it difficult to handle failures quickly, and thus to meet the requirements of both CDN operators and users. We introduce VDN, a practical video delivery network that uses a centralized algorithm for live video delivery optimization and additionally provides CDN operators with real-time, fine-grained control. Despite many challenges (e.g., state inconsistency, partitions, failures), VDN successfully uses centralized optimization in the wide area through a carefully designed combination of centralized and distributed control, which we dub "hybrid control". VDN increases average quality (bitrate) by 2x and decreases delivery cost by 2x.

In Partial Fulfillment of the Speaking Requirement.


Beyond-Block Communities: Algorithms and Discoveries

Friday, April 17th, 2015 from 12-1 pm in GHC 6501.

Presented by Miguel Araujo, CSD

What do real communities in social networks look like? How can we find them efficiently? Community detection plays a key role in understanding the structure of real-life graphs with impact on recommendation systems, load balancing and routing. Previous community detection methods look for uniform blocks in adjacency matrices, but after studying four real networks with ground-truth communities, we provide empirical evidence that communities are best represented as having hyperbolic structure.

Our new matrix decomposition method is able to describe binary data and combines high interpretability and low reconstruction error by finding non-negative factors that are combined in a binary reconstruction.
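
As a rough stand-in for the final step described above (not the talk's method or its hyperbolic community model): factor the binary adjacency matrix with non-negative matrix factorization and threshold the product to obtain a binary reconstruction. NMF and the 0.5 threshold are assumptions made for the example.

    import numpy as np
    from sklearn.decomposition import NMF

    def binary_reconstruction(A, rank=10, threshold=0.5):
        """A: binary adjacency matrix. Returns a 0/1 approximation built from non-negative factors."""
        model = NMF(n_components=rank, init="nndsvd", max_iter=500)
        W = model.fit_transform(A)    # non-negative node-by-factor matrix
        H = model.components_         # non-negative factor-by-node matrix
        return (W @ H >= threshold).astype(int), W, H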

In Partial Fulfillment of the Speaking Requirement.


Automatically Splitting a Staged Lambda Calculus

Monday, April 20th, 2015 from 12-1 pm in GHC 6501.

Presented by Nicolas Feltman, CSD

Staged programming languages assign a stage to each expression and evaluate each expression in its assigned stage. Staged programming has been studied extensively in the areas of partial evaluation and metaprogramming. In this talk, we present an algorithm for statically splitting programs written in a two-stage typed lambda calculus (with a circle modality denoting computation in a later stage) into multi-pass programs, structured as two dependent, unstaged programs that are evaluated in two distinct execution passes. Splitting has been studied in the past under the names "pass separation" and "data specialization", but those techniques were either manual or applicable only to simple programs lacking important features such as recursion and first-class functions.

Since this work builds on a lambda calculus, our splitting algorithm enables the programmer to write staged programs by using powerful programming abstractions in a composable and modular fashion. Our splitting transformation maps such programs into multi-pass programs that can run significantly more efficiently by assigning frequent computations to earlier passes so that they can be reused (cheaply) later. Our experiments based on a prototype implementation show that our splitting algorithm is able to perform highly non-trivial algorithmic transformations, sometimes automatically improving efficiency asymptotically by nearly a linear factor. Since the splitting transformation takes place at compile time, these benefits are achieved without significant run-time penalty.
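
A hand-worked illustration of what a stage split produces (not the paper's algorithm or its lambda calculus): a two-stage computation over (records, key) is separated into a first pass that does the expensive per-records work once and a second-pass residual that reuses its result. The dictionary-index example and all names are assumptions.

    def lookup_staged(records, key):
        """Unsplit version: rebuilds the index on every call, even when `records` is fixed."""
        index = {name: value for name, value in records}   # stage-1 work (depends on records only)
        return index.get(key)                              # stage-2 work (depends on key too)

    # After splitting, the stage-1 computation becomes its own pass...
    def lookup_pass1(records):
        return {name: value for name, value in records}

    # ...and the stage-2 residual takes the precomputed result as an extra argument.
    def lookup_pass2(index, key):
        return index.get(key)

    # Reuse across many stage-2 inputs is where the asymptotic improvement comes from:
    records = [("a", 1), ("b", 2), ("c", 3)]
    index = lookup_pass1(records)                       # expensive work done once
    values = [lookup_pass2(index, k) for k in "abcb"]   # each remaining query is cheap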

Joint work with Carlo Angiuli, Umut Acar, and Kayvon Fatahalian.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.


Bridging Theory and Practice in Writing Interactive Programs

Friday, April 24th, 2015 from 12-1 pm in GHC 6501.

Presented by Stefan Muller, CSD

Many real-world programs interact with the user or external environment using a graphical user interface, a command line, a network socket, external sensors, and so on. Many of these programs are also concurrent, interacting with a number of input and output sources simultaneously. Several techniques exist for writing such programs, but most are either too low-level to yield robust, maintainable code, or too high-level to offer strong performance guarantees.

We propose an abstraction for designing and implementing concurrent interactive applications which occupies an intermediate level of abstraction. This abstraction, which we call a factor, gives a uniform interface to many different sources of interaction and may be used synchronously or asynchronously, giving programmers a high degree of control over when and how input is sampled and used.

We have implemented factors as a library for OCaml and show that the approach is practical by demonstrating a number of applications developed with the library. We have also developed a software framework for assessing the responsiveness of interactive applications, and we use a set of microbenchmarks within it to show quantitatively that our proposed abstractions perform well. Finally, we discuss ongoing research into the theory of programs written using factors, which we hope to use to recover correctness guarantees even in the presence of interaction effects and concurrency.
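
The library itself is written in OCaml and its actual interface is not reproduced here; the Python stand-in below only illustrates the idea of one uniform wrapper over different input sources that can be sampled synchronously (blocking) or polled asynchronously. The class and method names are invented for the example.

    import queue
    import threading

    class Factor:
        """Uniform wrapper over an input source that feeds a thread-safe event queue."""
        def __init__(self, source):
            self._events = queue.Queue()
            threading.Thread(target=self._pump, args=(source,), daemon=True).start()

        def _pump(self, source):
            while True:
                self._events.put(source())   # source() blocks on the underlying device

        def sample(self):
            """Synchronous use: block until the next input event arrives."""
            return self._events.get()

        def poll(self):
            """Asynchronous use: return a pending event if one exists, else None."""
            try:
                return self._events.get_nowait()
            except queue.Empty:
                return None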

Joint work with Umut Acar and Bob Harper.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.


Predicting Buggy versus Fixed Programs Using Dynamic Binary Instrumentation

Monday, April 27th, 2015 from 12-1 pm in GHC 6501.

Presented by Deborah Katz, CSD

Automated bug repair techniques propose patches to buggy programs with the goal that, when applied, the patches yield a program that lacks the targeted bug. These techniques currently rely on test cases (executions of the program under repair on test inputs, followed by a check of whether the output is correct) to judge whether or not a bug has been repaired. However, using test cases in this manner has problems. Notably, to determine whether a program has passed or failed a test case, the automated bug fixing system must know the expected correct output of the program for the given test input.

In this talk, I describe ongoing work on alternative methods for determining whether a program contains or lacks a targeted bug. Specifically, I describe the use of dynamic binary instrumentation to collect information about the execution of a program. Dynamic binary instrumentation techniques can collect information such as the number of instructions executed during a run of the program, the number of memory reads and writes carried out, and the values of the program counter throughout the run. I then describe techniques for using this collected information as part of a determination of whether a program contains or lacks a bug.
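
As one illustrative way to use such measurements (the talk describes ongoing work, not this code): treat the per-execution counters gathered by a dynamic binary instrumentation tool as a feature vector and train a standard classifier to separate runs of buggy programs from runs of fixed ones. The feature names, the classifier, and the data format are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    FEATURES = ["instructions_executed", "memory_reads", "memory_writes", "distinct_pc_values"]

    def to_vector(run_counts):
        """run_counts: dict mapping counter name -> value for one instrumented execution."""
        return np.array([run_counts[name] for name in FEATURES], dtype=float)

    def train_buggy_vs_fixed(runs, labels):
        """runs: list of counter dicts; labels: 1 for buggy-program executions, 0 for fixed."""
        X = np.stack([to_vector(r) for r in runs])
        return RandomForestClassifier(n_estimators=100).fit(X, labels)

    def looks_buggy(classifier, run_counts):
        return bool(classifier.predict(to_vector(run_counts).reshape(1, -1))[0])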

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.


Validating Semantics for Weak Memory by Verifying Programs

Friday, May 1st, 2015 from 12-1 pm in GHC 6501.

Presented by Joseph Tassarotti, CSD

The latest C language standard includes weak memory operations, which expose some of the counterintuitive behaviors of memory on modern processors. How do we evaluate and assess these extensions to the language? Since a formal semantics for a language acts as a contract between compilers and programmers, there are at least two important tasks that the standard should enable. First, compiler writers should be able to determine whether particular translations and optimizations are correct. Second, programmers should be able to write and reason about programs without having to understand how the compiler works.

Although the first criterion has been the focus of much scrutiny, the second is less well addressed. How can we determine whether programmers are able to write and reason about programs that use these weak memory operations? A proposal going back to Floyd and Hoare is to describe the semantics of a language directly in terms of rules for proving properties of programs. Given such a semantics, we can demonstrate that it is possible to reason about programs simply by using the rules to prove programs correct.

Recently, Turon, Vafeiadis, and Dreyer created a program logic called GPS for reasoning about a fragment of the C weak memory operations. We have used GPS to prove the correctness of Read-Copy-Update (RCU), an important technique for reader-writer concurrency that uses weak memory operations. This provides some evidence that the new standard offers enough guarantees to allow programmers to write correct programs. In this talk, we will begin with a brief overview of the C weak memory model and the program logics that GPS builds on. We will then explain the core features of GPS and how they are used. Finally, we will describe how we use GPS to prove the correctness of RCU.

This talk is based on joint work with Derek Dreyer and Viktor Vafeiadis.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.


RMC: A New Approach for Programming Relaxed Memory Concurrency

Monday, May 4th, 2015 from 12-1 pm in GHC 6501.

Presented by Michael Sullivan, CSD

Writing programs with shared memory concurrency is difficult even under the best of circumstances, in which accesses to memory are sequentially consistent; that is, when threads can be viewed as strictly interleaving their accesses to a shared memory. Unfortunately, sequential consistency can be violated by CPU out-of-order execution and memory subsystems, as well as by many standard compiler optimizations.

Traditionally, languages approach this by guaranteeing that data-race-free code will behave in a sequentially consistent manner. Programmers can then use locks and other techniques to synchronize between threads and rule out data races. However, for performance-critical code and library implementations this may not be good enough, requiring languages that target these domains to provide a well-defined low-level mechanism for shared memory concurrency. C and C++ (since the C11 and C++11 standards) provide a mechanism based on specifying "memory orderings" when accessing concurrently modified locations; these memory orderings induce constraints on the behavior of programs.

In this talk, we propose a quite different approach: having the programmer explicitly specify constraints on the order of execution of operations and on the visibility of memory writes. The compiler is then responsible for ensuring that these constraints hold. We will discuss this core idea, its realization in C/C++, some potential use cases, and the basics of the compiler implementation.

Joint work with Karl Crary.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.


Web contact: sss+www@cs