CSD

Although compression has been widely used for decades to reduce file sizes (thereby conserving storage capacity and network bandwidth when transferring files), there has been little to no use of compression within modern memory hierarchies. Why not? Especially as programs become increasingly data-intensive, the capacity and bandwidth within the memory hierarchy (including caches, main memory, and their associated interconnects) are becoming increasingly important bottlenecks. If data compression could be applied successfully to the memory hierarchy, it could potentially relieve pressure on these bottlenecks by increasing effective capacity, increasing effective bandwidth, and even reducing energy consumption.

In this thesis, I describe a new, practical approach to integrating data compression within the memory hierarchy, including on-chip caches, main memory, and both on-chip and off-chip interconnects. This new approach is fast, simple, and effective in saving storage space. A key insight in our approach is that access time (including decompression latency) is critical in modern memory hierarchies. By combining inexpensive hardware support with modest OS support, our holistic approach to compression achieves substantial improvements in performance and energy efficiency across the memory hierarchy. In addition to exploring compression-related issues and enabling practical solutions in modern CPU systems, we identify new problems in realizing hardware-based compression for GPU-based systems and develop new solutions to them.
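
To make concrete why decompression latency can be kept low enough for caches and memory, here is a minimal, hypothetical sketch of a base-plus-delta style cache-line compressor in Python. The format, names, and parameters below are illustrative assumptions rather than the scheme actually proposed in the thesis; the point is only that decompression can reduce to one add per word, which hardware can do in parallel.

    # Illustrative sketch only (not the thesis's actual design): compress a
    # cache line of word values as one base value plus narrow deltas.
    def compress_line(words, delta_bytes=1):
        """Try to encode a list of word values as one base plus small deltas."""
        base = words[0]
        limit = 1 << (8 * delta_bytes - 1)   # deltas must fit in a signed byte by default
        deltas = [w - base for w in words]
        if all(-limit <= d < limit for d in deltas):
            # e.g. 16 four-byte words become a 4-byte base plus 16 one-byte deltas
            return ("base+delta", base, deltas)
        return ("uncompressed", list(words))

    def decompress_line(encoded):
        """Decompression is one add per word, which hardware can perform in parallel."""
        if encoded[0] == "base+delta":
            _, base, deltas = encoded
            return [base + d for d in deltas]
        return encoded[1]

For example, compress_line([1000, 1003, 998, 1001]) stores one base and four small deltas, while a line of unrelated values falls back to the uncompressed path.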

Thesis Committee:
Todd C. Mowry (Co-Chair)
Onur Mutlu (Co-Chair)
Kayvon Fatahalian
David A. Wood (University of Wisconsin-Madison)
Douglas C. Burger (Microsoft)
Michael A. Kozuch (Intel)

How can computers help ordinary people make collective decisions about real-life dilemmas, like which restaurant to go to with friends, or even how to divide an inheritance?  I will present an optimization-driven approach that draws on ideas from AI, theoretical computer science, and economic theory, and illustrate it through my research in computational social choice and computational fair division. In both areas, I will make a special effort to demonstrate how fundamental theoretical questions underlie the design and implementation of deployed services that are already used by tens of thousands of people (spliddit.org), as well as upcoming services (robovote.org).

Thesis Committee:
Ariel D. Procaccia (Chair)
Nina Balcan
Avrim Blum
Tuomas Sandholm
Vincent Conitzer (Duke University)

Error-correcting codes were originally developed in the context of reliable delivery of data over a noisy communication channel, and they continue to be widely used in communication and storage systems. Over time, error-correcting codes have also been shown to have several exciting connections to areas in theoretical computer science. Recently, there have been several advances, including new constructions of efficient codes as well as coding for different settings. This thesis proposal explores several new directions in modern coding theory. To this end, we:

  1. Provide a theoretical analysis of polar codes, a breakthrough construction introduced by Arikan in 2008 (a small illustrative sketch of the basic polar transform follows this list). We show that polar codes over prime alphabets are the first explicit construction of efficient codes to provably achieve Shannon capacity for symmetric channels with polynomially fast convergence. We introduce interesting new techniques involving entropy sumset inequalities, an entropic analogue of the sumset inequalities studied in additive combinatorics.
  2. Consider the recent problem of coding for two-party interactive communication, in which two parties wish to execute a protocol over a noisy interactive channel. Specifically, we provide an explicit interactive coding scheme for oblivious adversarial errors and bridge the gap between the channel capacities for interactive communication and one-way communication. We also consider a related question involving one-way communication with partial feedback.
  3. Study the list decodability of codes. We resolve an open question about the list decodability of random linear codes and show surprising connections to compressed sensing, where we provide improved bounds on the number of frequency samples needed for exact reconstruction of sparse signals (improving upon the work of Candès-Tao and Rudelson-Vershynin).
  4. Study locally testable codes and affine invariance in codes. Specifically, we investigate the limitations posed by local testability, which has served as an important notion in the study of probabilistically checkable proofs (PCPs) and hardness of approximation.
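
As referenced in item 1, here is a minimal sketch, in Python, of the basic recursive polar (Arikan) transform, assuming the binary case for simplicity even though the result above concerns prime alphabets. Conventions differ by a bit-reversal permutation; this only shows what the transform computes, not the capacity analysis itself.

    # Illustrative sketch only: the binary polar transform, i.e. multiplication
    # by the n-fold Kronecker power of the kernel [[1, 0], [1, 1]] over GF(2)
    # (omitting the bit-reversal permutation used in some conventions).
    def polar_transform(u):
        """Transform a list of bits whose length is a power of two."""
        n = len(u)
        if n == 1:
            return list(u)
        half = n // 2
        a, b = u[:half], u[half:]
        # Combine the two halves: (a, b) -> (a XOR b, b), then recurse on each part.
        combined = [a[i] ^ b[i] for i in range(half)]
        return polar_transform(combined) + polar_transform(b)

    # Example: polar_transform([1, 0, 1, 1]) == [1, 1, 0, 1]

Polar coding then "freezes" the input positions whose induced bit-channels are unreliable and transmits information only on the reliable ones; the analysis in item 1 concerns how quickly the fraction of reliable channels approaches Shannon capacity as the block length grows.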

Thesis Committee:
Venkatesan Guruswami (Chair)
Bernhard Haeupler
Gary Miller
Emmanuel Abbe (Princeton University)

The Immigration Course (IC) is an introduction to the Computer Science Department  organized for incoming Ph.D. students. Held during the beginning of the Fall Semester, the IC includes research area overviews, a series of short research presentations by CSD faculty, introductions to computing facilities and other department resources, and several parties and activities that allow students, staff and faculty to get to know each other socially.

All SCS Ph.D. students should have the chance to create the Next Big Thing, to engage openly in intellectual inquiry without worrying about losing their funding. We want them to freely explore. We want them to change the world.

SCS created the CSD50 Graduate Fellowship Endowment Fund to guarantee that each Ph.D. student has the support to fuel their research and exploration while at CMU. Contributions to the fund will help us attract, retain and support the best computer scientists in the world as they create the future.

When you give to the CSD50 Graduate Fellowship Endowment Fund, you not only support the future of computer science, but you can recognize the contributions of individuals with deep SCS roots. We've specifically highlighted four individuals you can honor with your gift: Sharon Burks, A. Nico Habermann, Raj Reddy and Joe Traub. You can make your gift by clicking below and specifying one of these SCS legends in the "Comments" box on the next page.

What does a person’s brain activity look like when they read the word apple? How does it differ from the activity of the same (or even a different) person when reading about an airplane? How can we identify parts of the human brain that are active for different semantic concepts? In a seemingly unrelated setting, how can we model and mine knowledge on the web (e.g., subject-verb-object triplets) in order to find hidden, emerging patterns? Our proposed answer to both problems (and many more) is to bridge signal processing and large-scale multi-aspect data mining.

Specifically, language in the brain, along with many other real-world processes and phenomena, has different aspects, such as the various semantic stimuli of the brain activity (apple or airplane), the particular person whose activity we analyze, and the measurement technique. In the above example, the brain regions with high activation for “apple” will likely differ from the ones for “airplane”. Nevertheless, each aspect of the activity is a signal of the same underlying physical phenomenon: language understanding in the human brain. Taking all aspects of brain activity into account results in more accurate models that can drive scientific discovery (e.g., identifying semantically coherent brain regions).

In addition to the above Neurosemantics application, multi-aspect data appear in numerous other settings. One is mining knowledge on the web, where the aspects of the data include the entities in a knowledge base and the links between them, or the search engine results for those entities. Another is multi-aspect graph mining, for example on multi-view social networks, where we observe people's social interactions over different means of communication and use all views/aspects of that communication to extract communities more accurately.

The main thesis of our work is that many real-world problems, such as the aforementioned, benefit from jointly modeling and analyzing the multi-aspect data associated with the underlying phenomenon we seek to uncover.

In this thesis we develop scalable and interpretable algorithms for mining big multi-aspect data, with an emphasis on tensor decomposition. We present algorithmic advances in scaling up and parallelizing tensor decomposition and in assessing the quality of its results in multi-aspect data applications; these advances have enabled the analysis of tensors that the state of the art was unable to operate on. Furthermore, we present results on multi-aspect data applications, focusing on Neurosemantics and on Social Networks and the Web, demonstrating the effectiveness of modeling and mining multiple aspects of the data. We conclude with our future vision on bridging Signal Processing and Data Science for real-world applications, and with concrete future directions on multi-aspect data mining algorithms and applications.
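
To ground the tensor-decomposition setting mentioned above, here is a minimal, unoptimized sketch of a CP (PARAFAC) decomposition of a 3-mode tensor via alternating least squares, written in Python/NumPy. The function names and conventions are assumptions made for illustration; this shows the basic model the thesis builds on, not its scalable or parallel algorithms.

    # Illustrative sketch only: plain CP-ALS for a 3-mode tensor X, fitting
    # X[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r].
    import numpy as np

    def khatri_rao(A, B):
        """Column-wise Kronecker product: column r is kron(A[:, r], B[:, r])."""
        return np.column_stack([np.kron(A[:, r], B[:, r]) for r in range(A.shape[1])])

    def cp_als(X, rank, iters=50, seed=0):
        """Fit a rank-'rank' CP model to a 3-mode tensor by alternating least squares."""
        rng = np.random.default_rng(seed)
        I, J, K = X.shape
        A, B, C = rng.random((I, rank)), rng.random((J, rank)), rng.random((K, rank))
        # Mode-n unfoldings (earlier modes vary fastest, matching the Khatri-Rao order).
        X1 = X.reshape(I, J * K, order='F')
        X2 = X.transpose(1, 0, 2).reshape(J, I * K, order='F')
        X3 = X.transpose(2, 0, 1).reshape(K, I * J, order='F')
        for _ in range(iters):
            A = X1 @ np.linalg.pinv(khatri_rao(C, B).T)
            B = X2 @ np.linalg.pinv(khatri_rao(C, A).T)
            C = X3 @ np.linalg.pinv(khatri_rao(B, A).T)
        return A, B, C

    # Example: A, B, C = cp_als(some_3mode_tensor, rank=3)

In a Neurosemantics setting, such a tensor might, for example, be indexed by brain region x stimulus word x person, so that each factor matrix offers an interpretable grouping along one aspect of the data.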

Thesis Committee:
Christos Faloutsos (Chair)
Tom Mitchell
Jeff Schneider
Nikos Sidiropoulos (University of Minnesota)

Many natural forms of communication involve two (or more) players interacting based on a shared context, where the context is huge and imperfectly shared. This large context typically helps compress communication even when the sharing is imperfect. How can we explain this phenomenon mathematically?

In this talk, we will argue that communication complexity gives the right lens to view this class of problems, and this class of problems gives rise to new questions about communication complexity. I will describe some of these questions, some partial answers and some confounding open questions.

Based on joint works with Ilan Komargodski, Pravesh Kothari and Madhu Sudan.

In this talk we present a faster distributed broadcasting primitive for the classical radio network model.
 
The most basic distributed radio network broadcasting primitive, called Decay, dates back to a PODC'87 result of Bar-Yehuda, Goldreich, and Itai. In any radio network with some informed source nodes, running Decay for $O(d \log n + \log^2 n)$ rounds informs all nodes at most $d$ hops away from a source with high probability. Since 1987 this primitive has been the most important building block for implementing many other functionalities in radio networks. The only improvements to this decades-old algorithm are slight variations due to [Czumaj, Rytter; FOCS'03] and [Kowalski, Pelc; PODC'03], which achieve the same functionality in $O(d \log \frac{n}{d} + \log^2 n)$ rounds. To obtain a speedup from this, $d$ and thus also the network diameter need to be near linear, i.e., larger than $n^{1-\epsilon}$.
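
For intuition about what Decay does, here is a toy Python simulation of the classical idea: informed nodes repeatedly transmit with geometrically decreasing probability, and a node receives the message only when exactly one of its neighbors transmits. The constants, function names, and phase structure are assumptions for illustration, not the exact protocol or analysis from the papers cited above.

    # Illustrative toy simulation of the Decay idea on a radio network:
    # in each phase of about log2(n) steps, every informed node transmits with
    # a probability that halves from step to step; an uninformed node is
    # informed only if exactly one neighbor transmits (i.e., no collision).
    import math
    import random

    def decay_broadcast(adj, sources, phases):
        """adj: dict node -> set of neighbors. Returns the set of informed nodes."""
        n = len(adj)
        steps = max(1, math.ceil(math.log2(n)) + 1)
        informed = set(sources)
        for _ in range(phases):
            for step in range(steps):
                p = 2.0 ** (-step)                      # transmission probability decays
                transmitting = {v for v in informed if random.random() < p}
                for v in adj:
                    if v in informed:
                        continue
                    senders = adj[v] & transmitting
                    if len(senders) == 1:               # heard only without a collision
                        informed.add(v)
        return informed

    # Example: a path 0-1-2-3 with source node 0.
    # adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    # decay_broadcast(adj, {0}, phases=4)

Roughly speaking, each phase of about $\log n$ steps is what resolves collisions, which is where the per-hop $\log n$ factor in Decay's bound comes from.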

Our new distributed primitive spreads messages for $d$ hops in $O(d \frac{\log n \log \log n}{\log d} + \log^{O(1)} n)$ rounds. This improves over Decay for any super-polylogarithmic $d = \log^{\omega(1)} n$ and achieves a near-optimal $O(d \log \log n)$ running time for $d = n^{\epsilon}$. This also makes progress on an open question of Peleg.

Based on joint work with Bernhard Haeupler.

David Wajc is a second-year Ph.D. student in the Computer Science Department at CMU, advised by Bernhard Haeupler. He is broadly interested in algorithmic aspects of theoretical computer science and has worked on online algorithms, distributed algorithms, and algorithmic game theory (AGT), among other areas.
