
We propose a general class of language models that treat reference as an explicit stochastic latent variable. This architecture allows models to create mentions of entities and their attributes by accessing external databases (required by, e.g., dialogue generation and recipe generation) and internal state (required by, e.g., language models that are aware of coreference). This facilitates the incorporation of information that can be accessed in predictable locations in databases or discourse context, even when the targets of the reference may be rare words. Experiments on three tasks show that our model variants outperform models based on deterministic attention.
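
The mechanism can be pictured as a mixture between a copy-from-reference distribution and an ordinary vocabulary softmax, gated by the latent variable. Below is a minimal numpy sketch of one decoding step; all names, shapes, and the gating form are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def next_token_dist(h, W_vocab, entity_keys, entity_ids, w_gate, vocab_size):
    """One decoding step of a toy reference-aware LM (illustrative only).

    h            : decoder hidden state, shape (d,)
    W_vocab      : vocabulary projection, shape (V, d)
    entity_keys  : embeddings of database entries, shape (n_ent, d)
    entity_ids   : vocabulary index of each entry's surface form, shape (n_ent,)
    w_gate       : parameters of the latent reference gate, shape (d,)
    """
    # Latent variable z: probability that the next token is a reference.
    p_ref = 1.0 / (1.0 + np.exp(-h @ w_gate))

    # If z = 1, attend over the external database and copy an entry; rare
    # entity names receive probability mass through this path.
    att = softmax(entity_keys @ h)            # (n_ent,)
    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, entity_ids, att)        # scatter attention onto surface forms

    # If z = 0, generate from the ordinary vocabulary softmax.
    p_gen = softmax(W_vocab @ h)              # (V,)

    # Marginalize out the latent variable z.
    return p_ref * p_copy + (1.0 - p_ref) * p_gen

rng = np.random.default_rng(0)
d, V, n_ent = 8, 50, 4
dist = next_token_dist(rng.normal(size=d), rng.normal(size=(V, d)),
                       rng.normal(size=(n_ent, d)), np.array([3, 7, 7, 9]),
                       rng.normal(size=d), V)
assert np.isclose(dist.sum(), 1.0)
```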


Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

A software watermarking scheme allows one to embed a "mark" into a program without significantly altering the behavior of the program. Moreover, it should be difficult to remove the watermark without destroying the functionality of the program. Recently, Cohen et al. (STOC 2016) and Boneh et al. (PKC 2017) showed how to watermark cryptographic functionalities such as pseudorandom functions (PRFs) using general-purpose indistinguishability obfuscation, and left open the question of realizing watermarking from weaker assumptions.

In this talk, I will describe the first construction of a watermarkable family of PRFs with strong security from standard lattice assumptions. Our construction relies on a new primitive called a private translucent PRF, which may be of independent interest.
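
To fix intuition for the interface being constructed (not its security), here is a toy and deliberately insecure Python illustration of watermarking syntax: Mark reprograms the PRF at a few hidden points known only to the watermarking authority, and Extract tests those points. The real construction achieves this from lattice assumptions via private translucent PRFs; nothing below reflects that machinery.

```python
import hashlib
import secrets

def prf(key, x):
    # Stand-in PRF; a real construction would use a lattice-based PRF family.
    return hashlib.sha256(key + x).digest()

def setup():
    # Watermarking secret key: a few hidden evaluation points.
    return [secrets.token_bytes(16) for _ in range(4)]

def mark(wsk, key, message):
    # A "marked" program: agrees with prf(key, .) everywhere except the
    # hidden points, where the mark is embedded.
    def marked(x):
        if x in wsk:
            return hashlib.sha256(message + x).digest()
        return prf(key, x)
    return marked

def extract(wsk, program, candidates):
    # Recover the embedded mark by probing the hidden points.
    for message in candidates:
        if all(program(x) == hashlib.sha256(message + x).digest() for x in wsk):
            return message
    return None  # unmarked

wsk = setup()
key = secrets.token_bytes(32)
program = mark(wsk, key, b"alice")
assert extract(wsk, program, [b"alice", b"bob"]) == b"alice"
```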

Joint work with Sam Kim.

We consider the problem of quantum state certification, where one is given n copies of an unknown d-dimensional quantum mixed state ρ and one wants to test whether ρ is equal to some known mixed state σ or else is ε-far from σ. The goal is to use notably fewer copies than the Ω(d²) needed for full tomography on ρ (i.e., density estimation).

We give two robust state certification algorithms: one with respect to fidelity using n = O(d/ε) copies and one with respect to trace distance using n = O(d/ε²) copies. The latter algorithm also applies when σ is unknown. These copy complexities are optimal up to constant factors.
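
For reference, the two distance measures the guarantees are stated in can be computed directly for small density matrices. A short numpy/scipy sketch (using one common convention for fidelity); this is not an implementation of the certification algorithms themselves:

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    """(1/2) * ||rho - sigma||_1, via the eigenvalues of the difference."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.abs(eigs).sum()

def fidelity(rho, sigma):
    """F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

# Example: maximally mixed state vs. a pure state in d = 2.
d = 2
rho = np.eye(d) / d
sigma = np.diag([1.0, 0.0])
print(trace_distance(rho, sigma))   # 0.5
print(fidelity(rho, sigma))         # 0.5
```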

Joint work with Ryan O'Donnell and John Wright.

 

Database management systems (DBMSs) are the most important component of any data-intensive application: they can handle large amounts of data and complex workloads. But they are difficult to manage because they have hundreds of configuration "knobs" that control factors such as the amount of memory to use for caches and how often to write data to storage. Organizations often hire experts to help with tuning, but experts are prohibitively expensive for many. In this talk, I will present OtterTune, a new tool that can automatically find good settings for a DBMS's configuration knobs. OtterTune differs from other DBMS configuration tools because it leverages knowledge gained from tuning previous DBMS deployments to tune new ones. Our evaluation shows that OtterTune recommends configurations that are as good as or better than ones generated by existing tools or a human expert.
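
The overall workflow can be sketched as a surrogate-model tuning loop seeded with observations from earlier tunings. The sketch below assumes a Gaussian-process surrogate, and the knobs and workload are made up; it illustrates the shape of the loop, not OtterTune's actual pipeline:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def benchmark(config):
    # Stand-in for running the workload under a configuration; in a real
    # tuner this is measured throughput or latency, not a formula.
    buffer_pool, log_flush = config      # two hypothetical knobs in [0, 1]
    return -(buffer_pool - 0.7) ** 2 - (log_flush - 0.3) ** 2

# "Knowledge from previous deployments": past configs and their performance.
X = rng.uniform(size=(20, 2))
y = np.array([benchmark(c) for c in X])

for _ in range(10):
    # Fit a surrogate to all observations, then try the most promising config.
    gp = GaussianProcessRegressor().fit(X, y)
    candidates = rng.uniform(size=(200, 2))
    mean, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(mean + std)]   # optimistic acquisition rule
    X = np.vstack([X, nxt])
    y = np.append(y, benchmark(nxt))

print("best config found:", X[np.argmax(y)])
```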

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

The presence of aliasing makes modular verification of object-oriented code difficult. If multiple clients depend on the properties of an object, one client may break a property that others depend on.
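
A concrete instance of the problem, in Python for illustration (the class and the invariant are made up):

```python
# Two clients share references to the same object. Each looks correct in
# isolation, yet client B silently breaks the invariant (balance >= 0)
# that client A's local correctness argument depends on.

class Account:
    def __init__(self):
        self.balance = 100

def client_a(account):
    # Locally, A relies on and preserves balance >= 0.
    assert account.balance >= 0
    account.balance -= min(account.balance, 30)

def client_b(account):
    # B also runs fine in isolation...
    account.balance -= 500   # ...but may drive the shared balance negative.

shared = Account()
client_b(shared)
try:
    client_a(shared)
except AssertionError:
    print("A's invariant was broken by B through the shared alias")
```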

In this thesis I present a modular verification approach based on the novel abstraction of object propositions, which combine predicates and information about object aliasing. In my methodology, even if shared data is modified, an object invariant specified by a client holds. My permission system allows verification using a mixture of linear and nonlinear reasoning. This allows it to provide more modularity in some cases than competing separation logic approaches, because it can more effectively hide the exact aliasing relationships within a module.

I have implemented the methodology in my tool Oprop, which can be accessed online. I have used Oprop to automatically verify a number of examples, demonstrating the applicability of this research. These include small examples that exercise particular features of the Oprop language, as well as instances of design patterns such as Composite, State, Proxy, and Flyweight.

Thesis Committee
Jonathan Aldrich (Chair)
Frank Pfenning
David Garlan
Matthew Parkinson (Microsoft Research Cambridge)

In this talk, we highlight two challenges in today's deep learning landscape that involve adding structure to the input or latent space of a model. We will discuss how to overcome some of these challenges with learnable optimization sub-problems that subsume standard architectures and layers. These architectures obtain state-of-the-art empirical results in many domains, such as continuous-action reinforcement learning and tasks that involve learning hard constraints, like the game Sudoku.
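
The essence of an optimization layer shows up already in a quadratic argmin, where both the solution and its derivative have closed forms. The numpy sketch below is illustrative only; OptNet differentiates the KKT conditions of a general constrained QP rather than using an explicit inverse:

```python
import numpy as np

# A layer that outputs z*(p) = argmin_z 0.5 z^T Q z + p^T z, with Q fixed
# positive definite and p produced by the previous layer. The argmin has a
# closed form z* = -Q^{-1} p, and so does its Jacobian dz*/dp = -Q^{-1},
# which is what lets the solver sit inside backpropagation.

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
Q_inv = np.linalg.inv(Q)

def forward(p):
    return -Q_inv @ p          # solve the inner optimization problem

def backward(grad_out):
    # Chain rule through the argmin: dL/dp = (dz*/dp)^T dL/dz*.
    return -Q_inv.T @ grad_out

p = np.array([1.0, -2.0])
z = forward(p)

# Check the analytic gradient against finite differences.
eps = 1e-6
num_jac_T = np.array([(forward(p + eps * e) - forward(p - eps * e)) / (2 * eps)
                      for e in np.eye(2)])    # rows are dz*/dp_i, i.e. J^T
print(np.allclose(backward(np.ones(2)), num_jac_T @ np.ones(2)))   # True
```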

We will cover topics from these two papers:
Input Convex Neural Networks, Brandon Amos, Lei Xu, J. Zico Kolter, ICML 2017

OptNet: Differentiable Optimization as a Layer in Neural Networks, Brandon Amos, J. Zico Kolter, ICML 2017

Joint work with J. Zico Kolter

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

Planning under uncertainty assumes a model that specifies the probabilistic effects of actions in terms of changes to the state. Given such a model, planning proceeds to determine a policy that defines the choice of action at each state so as to maximize a reward function. In this work, we observe that the world may have degrees of freedom not necessarily captured in the given model, and that a planning solution based on that model may be sub-optimal compared to those possible when the additional degrees of freedom are considered. We introduce and formalize the problem of planning while considering feasible modifications to the model of the world that would lead to improved policies. We present an approach that models changes in the world as modifications to the probability transition function, and show that the problem of computing the most rewarding feasible modifications reduces to a constrained optimization problem. We then contribute a gradient-based algorithm for solving this optimization problem efficiently. We foresee several possible applications of this work, including in the field of human-robot interaction.
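
As a toy instance of the reduction, one can parametrize the transition function of a small MDP, score candidate models by the value they induce, and run projected gradient ascent within a feasibility budget. Everything below (the chain MDP, the budget, the finite-difference gradients) is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

# A 3-state chain with one action: we may perturb the transition matrix T
# within an entrywise budget around the original model T0, and ask which
# feasible T maximizes the value of the start state.

gamma, R = 0.9, np.array([0.0, 0.0, 1.0])
T0 = np.array([[0.7, 0.3, 0.0],
               [0.1, 0.6, 0.3],
               [0.0, 0.0, 1.0]])

def value(T):
    # Policy evaluation: solve V = R + gamma * T V, return start-state value.
    return np.linalg.solve(np.eye(3) - gamma * T, R)[0]

def project(T, budget=0.15):
    # Feasible set: entrywise within `budget` of the original model,
    # nonnegative, with rows renormalized to sum to one.
    T = np.clip(T, np.maximum(T0 - budget, 0), T0 + budget)
    return T / T.sum(axis=1, keepdims=True)

T, eps, lr = T0.copy(), 1e-5, 0.5
for _ in range(200):
    grad = np.zeros_like(T)
    for i in range(3):
        for j in range(3):
            d = np.zeros_like(T); d[i, j] = eps
            grad[i, j] = (value(T + d) - value(T - d)) / (2 * eps)
    T = project(T + lr * grad)   # projected gradient ascent step

print(f"value under original model: {value(T0):.3f}, after modification: {value(T):.3f}")
```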

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

In this talk, Qing presents Parallel Logging DB (PLDB), a new in-situ analysis technique for indexing data within DeltaFS. Designed as a scalable, serverless file system for HPC platforms, DeltaFS scales file system metadata performance with application scale. PLDB is a novel extension to the DeltaFS data plane, enabling in-situ indexing of massive amounts of data written to a single DeltaFS directory simultaneously, in an arbitrarily large number of files. PLDB achieves this through a compaction-free mechanism for reordering and indexing data, and a log-structured storage layout that packs small writes into large log objects, all while ensuring compute node resources are used frugally. We demonstrate the efficiency of PLDB with VPIC, a widely-used simulation code developed at Los Alamos National Lab that scales to trillions of particles. With DeltaFS, we modify VPIC to create a file under a special directory for each particle to receive writes of that particle's output data. Dynamically indexing the directory's underlying storage using PLDB allows us to achieve a 5,000x speedup in VPIC particle trajectory queries, which require reading all data for a single particle. This speedup increases with simulation scale, while the overhead is fixed at 3% of available memory and 8% of final storage.
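
The two storage ideas named above, packing small writes into large log objects and indexing them so one particle's data can be read back without a scan, can be illustrated with a toy in-memory sketch (PLDB itself is distributed, compaction-free, and far more frugal than this):

```python
import io
from collections import defaultdict

class TinyLog:
    """Toy log-structured store: small per-particle writes are appended to
    one large log, while an index maps each particle id to the (offset,
    length) extents of its records. A trajectory query reads only those
    extents instead of scanning all data."""

    def __init__(self):
        self.log = io.BytesIO()              # stand-in for a large log object
        self.index = defaultdict(list)       # particle id -> [(offset, length)]

    def append(self, particle_id, record: bytes):
        offset = self.log.tell()
        self.log.write(record)
        self.index[particle_id].append((offset, len(record)))

    def trajectory(self, particle_id):
        return [self._read(off, ln) for off, ln in self.index[particle_id]]

    def _read(self, offset, length):
        self.log.seek(offset)
        data = self.log.read(length)
        self.log.seek(0, io.SEEK_END)        # restore append position
        return data

db = TinyLog()
for step in range(3):
    db.append("p1", f"p1 step{step}".encode())
    db.append("p2", f"p2 step{step}".encode())
print(db.trajectory("p1"))   # [b'p1 step0', b'p1 step1', b'p1 step2']
```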

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

To keep pace with Moore's law, chip designers have focused on increasing the number of cores per chip rather than single core performance. In turn, modern jobs are often designed to run on any number of cores. However, to effectively leverage these multi-core chips, one must address the question of how many cores to assign to each job. Given that jobs receive sublinear speedups from additional cores, there is an obvious tradeoff: allocating more cores to an individual job reduces the job's runtime, but in turn decreases the efficiency of the overall system. We ask how the system should schedule jobs across cores so as to minimize the mean response time over a stream of incoming jobs.

To answer this question, we develop an analytical model of jobs running on a multi-core machine. We prove that EQUI, a policy which continuously divides cores evenly across jobs, is optimal when all jobs follow a single speedup curve and have exponentially distributed sizes. EQUI requires jobs to change their level of parallelization while they run. Since this is not possible for all workloads, we consider a class of "fixed-width" policies, which choose a single level of parallelization, k, to use for all jobs. We prove that, surprisingly, it is possible to achieve EQUI's performance without requiring jobs to change their levels of parallelization by using the optimal fixed level of parallelization, k*. We also show how to analytically derive the optimal k* as a function of the system load, the speedup curve, and the job size distribution.
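
A back-of-the-envelope way to see the fixed-width tradeoff: with n cores and width k, the system behaves like an M/M/c queue with c = n/k servers whose speed follows the speedup curve. The sketch below sweeps k under a made-up sublinear curve and illustrative parameters; it is not the paper's analysis:

```python
import math

def erlang_c(c, rho):
    # Probability an arrival waits in an M/M/c queue with per-server load rho.
    a = c * rho
    term = sum(a**j / math.factorial(j) for j in range(c))
    last = a**c / (math.factorial(c) * (1 - rho))
    return last / (term + last)

def mean_response_time(n, k, lam, size, speedup):
    c = n // k                     # number of jobs served simultaneously
    mu = speedup(k) / size         # completion rate of one job on k cores
    rho = lam / (c * mu)
    assert rho < 1, "unstable"
    # Mean wait (Erlang C) plus mean service time.
    return erlang_c(c, rho) / (c * mu - lam) + 1 / mu

speedup = lambda k: k ** 0.5       # made-up sublinear speedup curve
n, lam, size = 64, 0.5, 10.0
for k in (1, 2, 4, 8, 16, 32):
    print(k, round(mean_response_time(n, k=k, lam=lam, size=size, speedup=speedup), 2))
```

Large k shortens each job but leaves fewer servers; which effect wins depends on the load and the speedup curve, which is exactly why the optimal k* in the paper is a function of both.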

This is joint work with Jan-Pieter Dorsman and Mor Harchol-Balter, and will appear in SIGMETRICS 2018.


We posit that aspects of frameworks, such as inversion of control and the structure of framework applications, require developers to adjust their debugging strategy compared to debugging sequential programs. However, the benefits and challenges of framework debugging are not fully understood, and gaining this knowledge could provide guidance in debugging strategies and framework tool design. To gain insight into the process developers use to fix problems in framework applications, we performed two human studies investigating how developers fix applications that use a framework API incorrectly. These studies focused on the Android Fragment class and the ROS framework. From our analysis, we produced a theory of the benefits and challenges of framework debugging. This theory states that developers find inversion of control challenging when debugging but find the structure of framework applications helpful. This theory could be used to guide strategies for debugging framework applications and framework tool designs.

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.
