
I will explain the cheat sheet technique in query complexity, and show how to use it to get a power 2.5 separation between classical and quantum computation (beating the quadratic speedup of Grover search). I'll then discuss the connections between query and communication complexity, and present recent applications of the cheat sheet technique in the communication complexity setting.
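For reference, here is a hedged summary of the separation being described, in standard query-complexity notation (assumed from context, not quoted from the talk): R(f) and Q(f) denote the randomized and quantum query complexity of a total Boolean function f.

```latex
% Grover search gives the quadratic speedup for the OR function:
R(\mathrm{OR}_n) = \Theta(n), \qquad Q(\mathrm{OR}_n) = \Theta(\sqrt{n}),
% i.e. R = \Theta(Q^2), while the cheat sheet construction yields a total function f with
R(f) \geq \widetilde{\Omega}\bigl(Q(f)^{2.5}\bigr).
```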

Shalev Ben-David is a PhD student in his last year at MIT.

Supported in part by the Simons Foundation.


The spectra of signed matrices have played a fundamental role in social sciences, graph theory and control theory. They have been key to understanding balance in social networks, to counting perfect matchings in bipartite graphs, and to analyzing robust stability of dynamic systems involving uncertainties. More recently, the results of Marcus et al. have shown that an efficient algorithm to find a signing of a given adjacency matrix that minimizes the largest eigenvalue could immediately lead to efficient construction of Ramanujan expanders.

Motivated by these applications, this talk investigates natural spectral properties of signed matrices and addresses the computational problems of identifying signings with these spectral properties. We will discuss three main results: (a) NP-completeness of three signing-related problems, with (negative) implications for efficiently constructing expander graphs; (b) a complete characterization of graphs for which every signed adjacency matrix is singular, which yields a polynomial-time algorithm to verify whether a given matrix has an invertible signing; and (c) a polynomial-time algorithm to find a minimum increase in the support of a given symmetric matrix so that it admits an invertible signing.
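As a concrete (hedged) illustration of the optimization problem behind the expander connection, the following toy Python sketch brute-forces all signings of a small adjacency matrix and reports the one minimizing the largest eigenvalue; the example matrix and the exhaustive search are illustrative assumptions, not the algorithms from the talk.

```python
# Toy brute force over signings of a small adjacency matrix (illustration only).
import itertools
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # small example graph

# Each edge of the graph may be signed +1 or -1.
edges = [(i, j) for i in range(len(A)) for j in range(i + 1, len(A)) if A[i, j]]

best = None
for signs in itertools.product([+1, -1], repeat=len(edges)):
    S = A.copy()
    for (i, j), s in zip(edges, signs):
        S[i, j] = S[j, i] = s
    lam_max = np.linalg.eigvalsh(S)[-1]      # largest eigenvalue of this signing
    if best is None or lam_max < best[0]:
        best = (lam_max, signs)

print("minimum largest eigenvalue over all signings:", round(best[0], 3))
```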


Mechanized reasoning has proved effective in avoiding serious mistakes in software and hardware, yet it remains unpopular in the practice of mathematics. My thesis aims to make mechanization easier so that more mathematicians can benefit from this technology. In particular, I experimented with higher-dimensional types, an extension of ordinary types with a hierarchy of stacked relations, and mechanized many important results from classical homotopy theory in the proof assistant Agda. My work thus suggests that higher-dimensional types may help mechanize mathematical concepts.
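As a hedged illustration of what a "hierarchy of stacked relations" looks like (the thesis work itself is carried out in Agda; this small Lean sketch, with made-up names, is an exposition-only assumption):

```lean
universe u

-- A path type valued in a universe (rather than a proof-irrelevant proposition),
-- so that paths between elements are themselves data that can be compared.
inductive Path {A : Type u} : A → A → Type u where
  | refl (a : A) : Path a a

-- One dimension up: a 2-cell is a path between two parallel paths;
-- iterating this stacking gives the higher-dimensional hierarchy.
def TwoCell {A : Type u} {a b : A} (p q : Path a b) : Type u :=
  Path p q
```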

Thesis Committee:
Robert Harper (Chair)
Frank Pfenning
Jeremy Avigad
Andrej Bauer (Univerza v Ljubljani)
Ulrik Buchholtz (Technische Universität Darmstadt)
Daniel R. Licata (Wesleyan University)

In this talk, we consider the problem of estimating the mean and covariance of a distribution from iid samples in ℝ^n in the presence of an eta fraction of malicious noise; this is in contrast to much recent work where the noise itself is assumed to come from a distribution of known type. We will give polynomial-time algorithms to compute estimates of the mean, the covariance, and the operator norm of the covariance matrix, and show that the dimensional dependence of the error is optimal up to an O(√(log n)) factor. This gives polynomial-time solutions to some of the questions studied in robust statistics. As one application, this immediately enables agnostic SVD.

This is joint work with Kevin Lai and Santosh Vempala.
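To make the contamination model concrete, here is a hedged toy sketch in Python (an illustration of the setting only, not the estimators from the talk): it contrasts the naive sample mean with one round of simple spectral outlier filtering under an eta fraction of adversarial points.

```python
# Toy illustration of mean estimation under an eta fraction of malicious noise.
import numpy as np

rng = np.random.default_rng(0)
n, d, eta = 1000, 20, 0.1

# Inliers: iid standard Gaussian samples; the true mean is the zero vector.
X = rng.standard_normal((n, d))
# Malicious noise: replace an eta fraction with points shifted far in one direction.
k = int(eta * n)
X[:k] = 10.0 + 0.1 * rng.standard_normal((k, d))

naive_mean = X.mean(axis=0)

# One filtering round: project onto the top eigenvector of the empirical covariance
# and drop the k samples with the most extreme projections.
centered = X - naive_mean
cov = centered.T @ centered / n
_, vecs = np.linalg.eigh(cov)
proj = centered @ vecs[:, -1]               # direction of largest variance
keep = np.argsort(np.abs(proj))[: n - k]    # discard the k largest deviations
filtered_mean = X[keep].mean(axis=0)

print("error of naive mean:   ", np.linalg.norm(naive_mean))
print("error of filtered mean:", np.linalg.norm(filtered_mean))
```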

Due to the rapid growth of data and ever-increasing model complexity, which often manifests itself in a large number of model parameters, many important machine learning problems today cannot be solved efficiently on a single machine. Distributed optimization and inference is becoming increasingly indispensable for solving large-scale machine learning problems in both academia and industry. However, obtaining an efficient distributed implementation of an algorithm is far from trivial. Both the intensive computational workloads and the volume of data communication demand careful design of distributed computing systems and distributed machine learning algorithms. In this thesis, we focus on the co-design of distributed computing systems and distributed optimization algorithms that are specialized for large machine learning problems.


In the first part, we propose two distributed computing frameworks: Parameter Server, a distributed machine learning framework that features efficient data communication between machines, and MXNet, a multi-language library that aims to simplify the development of deep neural network algorithms. We have witnessed wide adoption of the two proposed systems over the past two years. They have enabled, and will continue to enable, more people to harness the power of distributed computing to design efficient large-scale machine learning applications.
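The following is a minimal sketch of the parameter-server communication pattern described above (hypothetical toy Python; the class and function names are illustrative and are not the Parameter Server or MXNet APIs). Workers push gradients computed on local data shards to a server holding the shared model, and pull back updated weights.

```python
# Toy parameter-server pattern: push gradients, pull weights.
import numpy as np

class ToyParameterServer:
    def __init__(self, dim, lr=0.1):
        self.weights = np.zeros(dim)
        self.lr = lr

    def push(self, grad):
        # Apply a worker's gradient to the shared parameters.
        self.weights -= self.lr * grad

    def pull(self):
        # Return the current shared parameters to a worker.
        return self.weights.copy()

def worker_step(server, X, y):
    # One SGD step on a worker's local shard (least-squares loss).
    w = server.pull()
    grad = X.T @ (X @ w - y) / len(y)
    server.push(grad)

rng = np.random.default_rng(0)
true_w = rng.standard_normal(5)
shards = []
for _ in range(4):                       # four workers, each with a local data shard
    X = rng.standard_normal((50, 5))
    shards.append((X, X @ true_w))

server = ToyParameterServer(dim=5)
for _ in range(100):
    for X, y in shards:                  # in a real system, workers run in parallel
        worker_step(server, X, y)

print("recovered weights:", np.round(server.pull(), 2))
print("true weights:     ", np.round(true_w, 2))
```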


In the second part, we examine a number of distributed optimization problems in machine learning, leveraging the two computing platforms. We present new methods to accelerate the training process, such as data partitioning with better locality properties, communication-friendly optimization methods, and more compact statistical models. We implement the new algorithms on the two systems and test them on large-scale real data sets. We successfully demonstrate that careful co-design of computing systems and learning algorithms can greatly accelerate large-scale distributed machine learning.

Thesis Committee:
David G. Andersen (Co-Chair)
Alexander J. Smola (Co-Chair)
Ruslan Salakhutdinov
Barnabas Poczos
Jeffrey Dean (Google, Inc.)

We consider the following basic problem: given an n-variate degree-d homogeneous polynomial f with real coefficients, compute a unit vector x ∈ ℝ^n that maximizes |f(x)|. Besides its fundamental nature, this problem arises in many diverse contexts ranging from tensor and operator norms to graph expansion to quantum information theory. The homogeneous degree-2 case is efficiently solvable as it corresponds to computing the spectral norm of an associated matrix, but the higher-degree case is NP-hard. In this work, we give multiplicative approximation algorithms for this problem. Our algorithms leverage the tractability of the degree-2 case, and output the best solution among a carefully constructed set of quadratic polynomials. They offer a trade-off between the approximation ratio and running time, which is governed by the number of quadratic problems we search over. Specifically, in n^{O(q)} time, we get an approximation factor of O_d((n/q)^{d/2 - 1}) for arbitrary polynomials, and O_d((n/q)^{d/4 - 1/2}) for polynomials with non-negative coefficients. The approximation guarantees are with respect to the optimum of the level-q SoS SDP relaxation of the problem. We also consider the case when f is random with independent ±1 coefficients, and prove that w.h.p. the level-q SoS solution gives a certificate within a factor Õ_d((n/q)^{d/4 - 1/2}) of the optimum. We complement our algorithmic results with some polynomially large integrality gaps for d levels of the SoS relaxation. To obtain our results, we develop general techniques which help analyze the approximation obtained by higher levels of the SoS hierarchy. We believe these techniques will also be useful in understanding polynomial optimization for other constrained settings.
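As a hedged illustration of the tractable degree-2 base case mentioned above (toy numpy code, not from the paper): maximizing |f(x)| = |xᵀAx| over unit vectors x reduces to finding the eigenvalue of largest absolute value of the symmetric coefficient matrix A, i.e. its spectral norm.

```python
# Degree-2 case: max_{|x|=1} |x^T A x| equals the spectral norm of symmetric A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2                     # symmetric coefficient matrix of a degree-2 form

eigvals, eigvecs = np.linalg.eigh(A)
i = np.argmax(np.abs(eigvals))
x_star = eigvecs[:, i]                # optimal unit vector

print("max_{|x|=1} |x^T A x| =", abs(eigvals[i]))
print("achieved by x*:        ", abs(x_star @ A @ x_star))
```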

Vijay Bhattiprolu is a third-year PhD student in CSD, advised by Venkatesan Guruswami and Gary Miller.

Joint work with Mrinalkanti Ghosh, Venkatesan Guruswami, Euiwoong Lee and Madhur Tulsiani.

Supported in part by Yahoo! Labs and the Simons Foundation.

