>> Publications


  1. Braess's Paradox in Wireless Networks: The Danger of Improved Technology
    DISC '13
    | arxiv
    (with Merav Parter)
    When comparing new wireless technologies, it is common to consider the effect that they have on the capacity of the network (defined as the maximum number of simultaneously satisfiable links). For example, it has been shown that giving receivers the ability to do interference cancellation, or allowing transmitters to use power control, never decreases the capacity and can in certain cases increase it by $\Omega(\log (\Delta \cdot P_{\max}))$, where $\Delta$ is the ratio of the longest link length to the smallest transmitter-receiver distance and $P_{\max}$ is the maximum transmission power. But there is no reason to expect the optimal capacity to be realized in practice, particularly since maximizing the capacity is known to be NP-hard. In reality, we would expect links to behave as self-interested agents, and thus it makes more sense to compare the values reached at game-theoretic notions of equilibria than the optimum values.

    In this paper we initiate this line of work by comparing various notions of equilibria (particularly Nash equilibria and no-regret behavior) when using a supposedly "better" technology. We show a version of Braess's Paradox for all of them: in certain networks, upgrading technology can actually make the equilibria worse, despite an increase in the capacity. We construct instances where this decrease is a constant factor for power control, interference cancellation, and improvements in the SINR threshold ($\beta$), and is $\Omega(\log \Delta)$ when power control is combined with interference cancellation. However, we show that these examples are basically tight: the decrease is at most $O(1)$ for power control, interference cancellation, and improved $\beta$, and is at most $O(\log \Delta)$ when power control is combined with interference cancellation.
  2. Packing Interdiction and Partial Covering Problems
    IPCO '13
    | LNCS
    (with Anupam Gupta)
    In the Packing Interdiction problem we are given a packing LP together with a separate interdiction cost for each LP variable and a global interdiction budget. Our goal is to harm the LP: which variables should we forbid the LP from using (subject to forbidding variables of total interdiction cost at most the budget) in order to minimize the value of the resulting LP? There is a long line of work on interdiction problems in graphs (interdicting the maximum flow, the shortest path, the minimum spanning tree, etc.), but we are the first to consider the problem of interdicting linear programs. The motivation for packing interdiction is matching interdiction, in which we are given a graph with edge weights and edge costs, and try to destroy edges of total cost at most some value B in order to minimize the maximum weight matching of the resulting graph. Zenklusen showed that this problem is NP-hard and gave a 4-approximation when all edge weights are unit. We obtain an O(1)-approximation to the general matching interdiction problem (without the unit weight assumption) as a corollary of our main result, an O(log q \cdot min{q, log k})-approximation to Packing Interdiction where q is the row-sparsity of the packing LP and k is the column-sparsity.
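
    As a concrete illustration of the objective (not of our algorithm), the toy brute-force search below enumerates every affordable set of variables to forbid and solves the residual packing LP for each; the instance and helper names are hypothetical and the approach is exponential, so treat it only as a way to make the definition precise.

      # Brute-force Packing Interdiction on a toy instance, assuming scipy is available;
      # forbidding a variable is modeled by fixing its upper bound to 0.
      from itertools import combinations
      import numpy as np
      from scipy.optimize import linprog

      def packing_lp_value(c, A, b, forbidden):
          bounds = [(0.0, 0.0) if j in forbidden else (0.0, None) for j in range(len(c))]
          res = linprog(-np.asarray(c, dtype=float), A_ub=A, b_ub=b,
                        bounds=bounds, method="highs")
          return -res.fun if res.success else float("inf")

      def best_interdiction(c, A, b, costs, budget):
          n = len(c)
          best = (packing_lp_value(c, A, b, set()), set())
          for k in range(1, n + 1):
              for S in combinations(range(n), k):
                  if sum(costs[j] for j in S) <= budget:
                      cand = (packing_lp_value(c, A, b, set(S)), set(S))
                      best = min(best, cand, key=lambda t: t[0])
          return best

      # max x1 + x2 subject to x1 <= 1, x2 <= 1; with unit interdiction costs and
      # budget 1, forbidding either variable drops the optimum from 2 to 1.
      print(best_interdiction(c=[1, 1], A=[[1, 0], [0, 1]], b=[1, 1],
                              costs=[1, 1], budget=1))
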
  3. Matroid Secretary for Regular and Decomposable Matroids
    SODA '13
    | arxiv
    (with Guy Kortsarz)
    In the matroid secretary problem we are given a stream of elements and asked to choose a set of elements that maximizes the total value of the set, subject to being an independent set of a matroid given in advance. The difficulty comes from the assumption that decisions are irrevocable: if we choose to accept an element when it is presented by the stream then we can never get rid of it, and if we choose not to accept it then we cannot later add it. Babaioff, Immorlica, and Kleinberg [SODA 2007] introduced this problem, gave O(1)-competitive algorithms for certain classes of matroids, and conjectured that every matroid admits an O(1)-competitive algorithm. However, most matroids that are known to admit an O(1)-competitive algorithm can be easily represented using graphs (e.g. graphic and transversal matroids). In particular, there is very little known about F-representable matroids (the class of matroids that can be represented as elements of a vector space over a field F), which are one of the foundational matroid classes. Moreover, most of the known techniques are as dependent on graph theory as they are on matroid theory. We go beyond graphs by giving an O(1)-competitive algorithm for regular matroids (the class of matroids that are representable over every field), and use techniques that are matroid-theoretic rather than graph-theoretic. We use the regular matroid decomposition theorem of Seymour to decompose any regular matroid into matroids which are either graphic, cographic, or isomorphic to R_{10}, and then show how to combine algorithms for these basic classes into an algorithm for regular matroids. This allows us to generalize beyond regular matroids to any class of matroids that admits such a decomposition into classes for which we already have good algorithms. In particular, we give an O(1)-competitive algorithm for the class of max-flow min-cut matroids.
  4. Everywhere-Sparse Spanners via Dense Subgraphs
    FOCS '12
    (with Eden Chlamtac and Robert Krauthgamer)
    Recent significant progress in constructing graph spanners that are sparse (small number of edges) or light (low total weight) has skipped everywhere-sparse spanners (small maximum degree). Similar to other network design problems, this maximum degree objective has turned out to be challenging despite its mathematical simplicity and usefulness in applications. In the Lowest Degree 2-Spanner (LD2S) problem, the goal is to compute a 2-spanner of an input graph so as to minimize the maximum degree. We design a polynomial-time algorithm achieving approximation factor Õ(Δ^(3-2√2)) ≈ Õ(Δ^(0.172)), where Δ is the maximum degree of the input graph. The previous Õ(Δ^(1/4))-approximation was proved nearly two decades ago by Kortsarz and Peleg [SODA 1994, SICOMP 1998].

    Our main conceptual contribution is establishing a formal connection between LD2S and a variant of the Densest k-Subgraph (DkS) problem. Specifically, we design for both problems strong relaxations based on Linear Programming (LP) hierarchies, and show that "faithful" randomized rounding of the DkS-variant can be used to round LD2S solutions. Our notion of faithfulness intuitively means that all vertices and edges are chosen with probability proportional to their LP value, but the precise formulation is more subtle.

    Unfortunately, the best algorithms known for DkS use the Sherali-Adams LP hierarchy in a non-faithful way [Bhaskara, Charikar, Chlamtac, Feige, and Vijayaraghavan, STOC 2010]. Our main technical contribution is to overcome this shortcoming, while still matching the gap that arises in random graphs by planting a subgraph with the same log-density.

  5. IBGP and Constrained Connectivity
    APPROX '12
    | LNCS | arxiv
    (with Gordon Wilfong)
    We initiate the theoretical study of the problem of minimizing the size of an IBGP overlay in an Autonomous System (AS) in the Internet subject to a natural notion of correctness derived from the standard "hot-potato" routing rules. For both natural versions of the problem (where we measure the size of an overlay by either the number of edges or the maximum degree) we prove that it is NP-hard to approximate to a factor better than Ω(log n) and provide approximation algorithms with ratio Õ(√(n)). In addition, we give a slightly worse Õ(n^(2/3))-approximation based on primal-dual techniques that has the virtue of being both fast and good in practice, which we show via simulations on the actual topologies of five large Autonomous Systems.

    The main technique we use is a reduction to a new connectivity-based network design problem that we call Constrained Connectivity. In this problem we are given a graph G=(V,E), and for every pair of vertices u,v ∈ V we are given a set S(u,v) ⊆ V called the safe set of the pair. The goal is to find the smallest subgraph H of G in which every pair of vertices u,v is connected by a path contained in S(u,v). We show that the IBGP problem can be reduced to the special case of Constrained Connectivity where G = K_n and safe sets are defined geometrically based on the IGP distances in the AS. We also believe that Constrained Connectivity is an interesting problem in its own right, so we provide stronger hardness results (2^(log^(1-ε) n)-hardness of approximation) and integrality gaps (n^(1/3 - ε)) for the general case. On the positive side, we show that Constrained Connectivity turns out to be much simpler for some interesting special cases other than iBGP: when safe sets are symmetric and hierarchical, we give a polynomial-time algorithm that computes an optimal solution.
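
    To make the Constrained Connectivity requirement concrete, here is a small verifier (a sketch assuming networkx, and assuming, as the definition suggests, that u and v belong to their own safe set S(u,v)): it checks that a candidate subgraph H connects every pair by a path that stays inside the pair's safe set.

      # Verify the Constrained Connectivity condition for a candidate subgraph H.
      import networkx as nx

      def satisfies_constrained_connectivity(H, safe_sets):
          """safe_sets maps a pair (u, v) to the safe set S(u, v) of allowed vertices."""
          for (u, v), S in safe_sets.items():
              restricted = H.subgraph(S)   # only vertices in S(u, v) may be used
              if u not in restricted or v not in restricted or not nx.has_path(restricted, u, v):
                  return False
          return True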

  6. Label Cover Instances with Large Girth and the Hardness of Approximating Basic k-Spanner
    ICALP '12 (Track A)
    | LNCS | arxiv
    (with Guy Kortsarz and Ran Raz)
    We study the well-known Label Cover problem under the additional requirement that problem instances have large girth. We show that if the girth is required to be at least k, the problem is roughly 2^{(\log^{1-\epsilon} n)/k}-hard to approximate for all constant \epsilon > 0. A similar theorem was claimed by Elkin and Peleg [ICALP 2000], but their proof was later found to have a fundamental error. Thus for the problem of Label Cover with large girth we give the first non-trivial lower bound. We use the new proof to show inapproximability for the basic k-spanner problem, which is both the simplest problem in graph spanners and one of the few for which super-logarithmic hardness was not known. Assuming NP \not\subseteq BPTIME(2^{polylog(n)}), we show (roughly) that for every k \geq 3 and every constant \epsilon > 0 it is hard to approximate the basic k-spanner problem within a factor better than 2^{(\log^{1-\epsilon} n) / k}. A similar hardness for basic k-spanner was claimed by Elkin and Peleg [ICALP 2000], but the error in their analysis of Label Cover made this proof fail as well. For the basic k-spanner problem we improve the previous best lower bound of \Omega((\log n)/k) by Kortsarz [1998]. Our main technique is subsampling the edges of 2-query PCPs, which allows us to reduce the degree of a PCP to be essentially equal to the soundness desired. This turns out to be enough to basically guarantee large girth.
  7. Efficient Computation of Distance Sketches in Distributed Networks
    SPAA '12
    (with Atish Das Sarma and Gopal Pandurangan)
    Distance computation (e.g., computing shortest paths) is one of the most fundamental primitives used in communication networks. The cost of effectively and accurately computing pairwise network distances can become prohibitive in large-scale networks such as the Internet and Peer-to-Peer (P2P) networks. To meet the need for very efficient distance computation at such scales, approximation techniques for numerous variants of this question have recently received significant attention in the literature. Several different areas of theoretical research have emerged centered around this problem, such as metric embeddings, distance labelings, spanners, and distance oracles. The goal is to preprocess the graph and store a small amount of information such that whenever a query for any pairwise distance is issued, the distance can be well approximated (i.e., with small stretch) very quickly in an online fashion. Specifically, the pre-processing (usually) involves storing a small sketch with each node, such that at query time only the sketches of the concerned nodes need to be looked up to compute the approximate distance.

    Techniques derived from metric embeddings have been considered extensively by the networking community, usually under the name of network coordinate systems. On the other hand, while the computation of distance oracles has received considerable attention in the context of web graphs and social networks, there has been little work towards similar algorithms within the networking community. In this paper, we present the first theoretical study of distance sketches derived from distance oracles in a distributed network. We first present a fast distributed algorithm for computing approximate distance sketches, based on a distributed implementation of the distance oracle scheme of [Thorup-Zwick, JACM 2005]. We also show how to modify this basic construction to achieve different tradeoffs between the number of pairs for which the distance estimate is accurate, the size of the sketches, and the time and message complexity necessary to compute them. These tradeoffs can then be combined to give an efficient construction of small sketches with provable average-case as well as worst-case performance. Our algorithms use only small-sized messages and hence are suitable for bandwidth-constrained networks, and can be used in various networking applications such as topology discovery and construction, token management, load balancing, monitoring overlays, and several other problems in distributed algorithms.
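
    As a simplified illustration of the sketch-and-query pattern described above (a generic landmark scheme, not the actual Thorup-Zwick construction or our distributed implementation), each node stores its distance to a small random landmark set, and a query combines the two sketches; by the triangle inequality the estimate never underestimates the true distance.

      # A toy centralized landmark sketch, assuming networkx and a connected graph.
      import random
      import networkx as nx

      def build_sketches(G, num_landmarks, seed=0):
          random.seed(seed)
          landmarks = random.sample(list(G.nodes), num_landmarks)
          sketch = {u: {} for u in G.nodes}          # sketch[u][w] = dist(u, w)
          for w in landmarks:
              for u, d in nx.single_source_dijkstra_path_length(G, w).items():
                  sketch[u][w] = d
          return sketch

      def query(sketch, u, v):
          # Best detour through a common landmark; an upper bound on dist(u, v).
          common = sketch[u].keys() & sketch[v].keys()
          return min(sketch[u][w] + sketch[v][w] for w in common)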

  8. Fault-Tolerant Spanners: Better and Simpler
    PODC '11
    (with Robert Krauthgamer)
    A natural requirement of many distributed structures is fault-tolerance: after some failures, whatever remains from the structure should still be effective for whatever remains from the network. In this paper we examine spanners of general graphs that are tolerant to vertex failures, and significantly improve their dependence on the number of faults r, for all stretch bounds.

    For stretch k ≥ 3 we design a simple transformation that converts every k-spanner construction with at most f(n) edges into an r-fault-tolerant k-spanner construction with at most O(r^3 log n) ⋅ f(2n/r) edges. Applying this to standard greedy spanner constructions gives r-fault-tolerant k-spanners with Õ(r^2 n^(1+(2/(k+1)))) edges. The previous construction by Chechik, Langberg, Peleg, and Roditty [STOC 2009] depends similarly on n but exponentially on r (approximately like k^r).
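
    One way to read the transformation is as a sample-and-union template; the sketch below uses an illustrative sampling probability of 1/r and an illustrative repetition count of O(r^3 log n), chosen only to match the stated edge bound, and takes the underlying k-spanner construction as a black-box argument, so treat it as a rough outline rather than the paper's algorithm.

      # Rough sample-and-union outline, assuming networkx; spanner_fn(H, k) is a
      # hypothetical black box returning an ordinary (non-fault-tolerant) k-spanner of H.
      import math, random
      import networkx as nx

      def fault_tolerant_spanner(G, k, r, spanner_fn, c=3.0, seed=0):
          random.seed(seed)
          n = G.number_of_nodes()
          rounds = int(c * r ** 3 * math.log(n + 1)) + 1   # illustrative constant c
          H = nx.Graph()
          H.add_nodes_from(G.nodes)
          for _ in range(rounds):
              # keep each vertex independently with probability ~1/r, so each round's
              # sample has roughly n/r vertices and contributes about f(2n/r) edges
              sample = [v for v in G.nodes if random.random() < 1.0 / max(r, 1)]
              H.add_edges_from(spanner_fn(G.subgraph(sample), k).edges)
          return H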

    For the case k=2 and unit-length edges, an O(r log n)-approximation algorithm is known from recent work of Dinitz and Krauthgamer [STOC 2011], where several spanner results are obtained using a common approach of rounding a natural flow-based linear programming relaxation. Here we use a different (stronger) LP relaxation and improve the approximation ratio to O(log n), which is, notably, independent of the number of faults r. We further strengthen this bound in terms of the maximum degree by using the Lovász Local Lemma.

    Finally, we show that most of our constructions are inherently local by designing equivalent distributed algorithms in the LOCAL model of distributed computation.

  9. Directed Spanners via Flow-Based Linear Programs
    STOC '11
    (with Robert Krauthgamer)
    We examine directed spanners through flow-based linear programming relaxations. We design an Õ(n^(2/3))-approximation algorithm for the directed k-spanner problem that works for all k ≥ 1, which is the first sublinear approximation for arbitrary edge-lengths. Even in the more restricted setting of unit edge-lengths, our algorithm improves over the previous Õ(n^(1-1/k))-approximation of Bhattacharyya et al. when k ≥ 4. For the special case of k=3 we design a different algorithm achieving an Õ(n^(1/2))-approximation, improving the previous Õ(n^(2/3)). Both of our algorithms easily extend to the fault-tolerant setting, which has recently attracted attention but not from an approximation viewpoint. We also prove a nearly matching integrality gap of Ω(n^(1/3 - ε)) for any constant ε > 0.

    A virtue of all our algorithms is that they are relatively simple. Technically, we introduce a new yet natural flow-based relaxation, and show how to approximately solve it even when its size is not polynomial. The main challenge is to design a rounding scheme that "coordinates" the choices of flow-paths between the many demand pairs while using few edges overall. We achieve this, roughly speaking, by randomization at the level of vertices.

  10. Distributed Algorithms for Approximating Wireless Network Capacity
    INFOCOM '10
    doi | pdf
    In this paper we consider the problem of maximizing wireless network capacity (a.k.a. one-shot scheduling) in both the protocol and physical models. We give the first distributed algorithms with provable guarantees in the physical model, and show how they can be generalized to more complicated metrics and settings in which the physical assumptions are slightly violated. We also give the first algorithms in the protocol model that do not assume transmitters can coordinate with their neighbors in the interference graph, so every transmitter chooses whether to broadcast based purely on local events. Our techniques draw heavily from algorithmic game theory and machine learning theory, even though our goal is a distributed algorithm. Indeed, our main results allow every transmitter to run any algorithm it wants, so long as its algorithm has a learning-theoretic property known as no-regret in a game-theoretic setting.
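
    Since the guarantee above only needs each transmitter to run some no-regret procedure, here is a generic sketch of one standard such rule (multiplicative weights / Hedge) over the two actions a transmitter has; the payoff function is a placeholder, not the SINR-based payoff analyzed in the paper.

      # Generic Hedge (multiplicative weights) rule; payoff(t, a) in [0, 1] is a
      # placeholder for whatever reward the environment hands out at round t.
      import math, random

      def hedge(num_rounds, payoff, eta=0.1, actions=("transmit", "idle")):
          weights = {a: 1.0 for a in actions}
          history = []
          for t in range(num_rounds):
              total = sum(weights.values())
              probs = [weights[a] / total for a in actions]
              choice = random.choices(actions, weights=probs)[0]
              history.append(choice)
              # full-information update: every action is reweighted by its payoff
              for a in actions:
                  weights[a] *= math.exp(eta * payoff(t, a))
          return history
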
  11. Graphical Representations of Clutters
    Ars Combinatoria (2010)
    (with Jonah Gold, Thomas Sharkey, and Lorenzo Traldi)
    We discuss the use of K-terminal networks to represent arbitrary clutters. A given clutter has many different representations, and there does not seem to be any set of simple transformations that can be used to transform one representation of a clutter into any other. We observe that for t ≥ 2 the class of clutters that can be represented using no more than t terminals is closed under minors, and has infinitely many forbidden minors.
  12. Maximizing Capacity in Arbitrary Wireless Networks in the SINR Model: Complexity and Game Theory
    INFOCOM '09
    doi | pdf
    (with Matthew Andrews)
    In this paper we consider the problem of maximizing the number of supported connections in arbitrary wireless networks where a transmission is supported if and only if the signal-to-interference-plus-noise ratio at the receiver is greater than some threshold. The aim is to choose transmission powers for each connection so as to maximize the number of connections for which this threshold is met.

    We believe that analyzing this problem is important both in its own right and because it arises as a subproblem in many other areas of wireless networking. We study the complexity of the problem and also present some game-theoretic results regarding the capacity achieved by completely distributed algorithms. We also find the problem intriguing because it involves both continuous aspects (i.e., choosing the transmission powers) and discrete aspects (i.e., which connections should be supported). Our results are:

    —We show that maximizing the number of supported connections is NP-hard, even when there is no background noise. This is in contrast to the problem of determining whether a given set of connections is feasible, which can be solved via linear programming (see the sketch after this list).

    —We present a number of approximation algorithms for the problem. All of these approximation algorithms run in polynomial time and have an approximation ratio that is independent of the number of connections.

    —We examine a completely distributed algorithm and analyze it as a game in which a connection receives a positive payoff if it is successful and a negative payoff if it is unsuccessful while transmitting with nonzero power. We show that in this game there is not necessarily a pure Nash equilibrium, but that if such an equilibrium does exist, the corresponding price of anarchy is independent of the number of connections. We also show that a mixed Nash equilibrium corresponds to a probabilistic transmission strategy, and that in this case such an equilibrium always exists and has a price of anarchy that is independent of the number of connections.
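
    As noted in the first bullet, feasibility of a fixed set of connections reduces to a linear program, since each SINR constraint is linear in the powers; a minimal sketch (assuming scipy, and writing G[i][j] for the gain from transmitter j to receiver i):

      # LP feasibility check for a fixed set of links in the SINR model.
      import numpy as np
      from scipy.optimize import linprog

      def sinr_feasible(G, beta, noise, p_max):
          # Link i is supported iff G[i][i]*P[i] >= beta*(sum_{j!=i} G[i][j]*P[j] + noise),
          # which rearranges into a linear constraint A @ P <= b.
          G = np.asarray(G, dtype=float)
          n = G.shape[0]
          A = beta * G
          np.fill_diagonal(A, -np.diag(G))
          b = np.full(n, -beta * noise)
          res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                        bounds=[(0.0, p_max)] * n, method="highs")
          return res.success

      # Two links with strong direct gains and weak cross-gains are feasible:
      print(sinr_feasible([[1.0, 0.01], [0.01, 1.0]], beta=2.0, noise=0.01, p_max=1.0))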

  13. Secretary Problems: Weights and Discounts
    SODA '09
    (with Moshe Babaioff, Anupam Gupta, Nicole Immorlica, and Kunal Talwar)
    The classical secretary problem is the problem of selecting, online, an element (a "secretary") with maximum value in a randomly ordered sequence. The difficulty lies in the fact that an element must be either selected or discarded upon its arrival, and this decision is irrevocable. Constant-competitive algorithms are known for the classical secretary problem (see, e.g., the survey of Freeman) and several variants. We study the following two extensions of the secretary problem:

    —In the discounted secretary problem, there is a time-dependent "discount" factor d(t), and the benefit derived from selecting an element/secretary e at time t is d(t)⋅ v(e). For this problem with arbitrary (not necessarily decreasing) functions d(t), we show a constant-competitive algorithm when the expected optimum is known in advance. With no prior knowledge, we exhibit a lower bound of Ω(log n/log log n), and give a nearly-matching O(log n)-competitive algorithm.

    —In the weighted secretary problem, up to K secretaries can be selected; when a secretary is selected (s)he must be irrevocably assigned to one of K positions, with position k having weight w(k), and assigning object/secretary e to position k has benefit w(k) ⋅ v(e). The goal is to select secretaries and assign them to positions to maximize the total benefit. We give constant-competitive algorithms for this problem.

    Most of these results can also be extended to the matroid secretary case (Babaioff et al.) for a large family of matroids with a constant-factor loss, and an O(log rank) loss for general matroids. These results are based on a reduction from various matroids to partition matroids, which presents a unified approach to many of the upper bounds of Babaioff et al. These problems have connections to online mechanism design (see, e.g., Hajiaghayi et al.). All our algorithms are monotone, and hence lead to truthful mechanisms for the corresponding online auction problems.
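
    For reference, a minimal sketch of the classical 1/e threshold rule that these variants generalize (the textbook algorithm, not the algorithms of this paper): observe roughly the first n/e elements, then accept the first element that beats everything seen so far.

      # Classical secretary rule plus a quick empirical check of the ~1/e success rate.
      import math, random

      def classical_secretary(values):
          n = len(values)
          cutoff = max(1, int(n / math.e))
          best_seen = max(values[:cutoff])
          for v in values[cutoff:]:
              if v > best_seen:
                  return v              # irrevocable acceptance
          return values[-1]             # forced to take the last element

      random.seed(0)
      vals = list(range(1000))
      wins = sum(classical_secretary(random.sample(vals, len(vals))) == max(vals)
                 for _ in range(2000))
      print(wins / 2000)                # close to 1/e ~ 0.37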

  14. Online, Dynamic, and Distributed Embeddings of Approximate Ultrametrics
    DISC '08
    doi | pdf
    The theoretical computer science community has traditionally used embeddings of finite metrics as a tool in designing approximation algorithms. Recently, however, there has been considerable interest in using metric embeddings in the context of networks to allow network nodes to have more knowledge of the pairwise distances between other nodes in the network. There has also been evidence that natural network metrics like latency and bandwidth have some nice structure, and in particular come close to satisfying an ε-three point condition or an ε-four point condition. This empirical observation has motivated the study of these special metrics, including strong results about embeddings into trees and ultrametrics. Unfortunately all of the current embeddings require complete knowledge about the network up front, and so are less useful in real networks which change frequently. We give the first metric embeddings which have both low distortion and require only small changes in the structure of the embedding when the network changes. In particular, we give an embedding of semimetrics satisfying an ε-three point condition into ultrametrics with distortion (1 + ε)^(log n + 4) and the property that any new node requires only O(n^(1/3)) amortized edge swaps, where we use the number of edge swaps as a measure of “structural change”. This notion of structural change naturally leads to small update messages in a distributed implementation in which every node has a copy of the embedding. The natural offline embedding has only (1 + ε)^(log n) distortion but can require Ω(n) amortized edge swaps per node addition. This online embedding also leads to a natural dynamic algorithm that can handle node removals as well as insertions.
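
    For concreteness, one common way to state the ε-three point condition is as a relaxed ultrametric inequality, d(x,z) ≤ (1+ε)·max{d(x,y), d(y,z)} for all triples; the brute-force checker below uses that formulation, which should be treated as an assumption rather than the paper's exact definition.

      # Check the (assumed) eps-three point condition on a finite point set.
      from itertools import permutations

      def satisfies_eps_three_point(points, d, eps):
          return all(d(x, z) <= (1.0 + eps) * max(d(x, y), d(y, z))
                     for x, y, z in permutations(points, 3))
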
  15. Compact Routing with Slack
    PODC '07
    Given a weighted graph G=(V,E), a compact routing scheme is a distributed algorithm for forwarding packets from any source to any destination. The fundamental tradeoff is between the space used at each node and the stretch of the total route, measured by the multiplicative factor between the actual distance traveled and the length of the shortest possible route. We extend the normal definition with a slack parameter ε, which allows us to ignore up to εn^2 of the shortest pairwise routes and give a guarantee on the remaining ones. For constant ε we give constant stretch, polylogarithmic space schemes in the name-dependent model and in the designer-port name-independent model, and give a lower bound that proves that such schemes do not exist in the fixed-port name-independent model. In the name-dependent model we also give a gracefully degrading scheme which works simultaneously for all ε > 0 and guarantees constant average stretch with polylog space.
  16. Spanners with Slack
    ESA '06
    doi | pdf
    (with T-H. Hubert Chan and Anupam Gupta)
    Given a metric (V,d), a spanner is a sparse graph whose shortest-path metric approximates the distance d to within a small multiplicative distortion. In this paper, we study the problem of spanners with slack: e.g., can we find sparse spanners where we are allowed to incur an arbitrarily large distortion on a small constant fraction of the distances, but are then required to incur only a constant (independent of n) distortion on the remaining distances? We answer this question in the affirmative, thus complementing similar recent results on embeddings with slack into ℓ_p spaces. For instance, we show that if we ignore an ε fraction of the distances, we can get spanners with O(n) edges and O(log(1/ε)) distortion for the remaining distances.

    We also show how to obtain sparse and low-weight spanners with slack from existing constructions of conventional spanners, and these techniques allow us to also obtain the best known results for distance oracles and distance labelings with slack. This paper complements similar results obtained in recent research on slack embeddings into normed metric spaces.
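
    As background for the "existing constructions of conventional spanners" mentioned above, here is the standard greedy t-spanner construction for a finite metric (a textbook sketch assuming networkx, not this paper's slack construction): scan pairs by nondecreasing distance and add an edge only when the current graph does not already give stretch t.

      # Textbook greedy t-spanner for a finite metric given by a distance function.
      import networkx as nx

      def greedy_spanner(points, dist, t):
          H = nx.Graph()
          H.add_nodes_from(points)
          pairs = sorted(((dist(u, v), u, v) for i, u in enumerate(points)
                          for v in points[i + 1:]), key=lambda p: p[0])
          for d_uv, u, v in pairs:
              try:
                  cur = nx.dijkstra_path_length(H, u, v)
              except nx.NetworkXNoPath:
                  cur = float("inf")
              if cur > t * d_uv:
                  H.add_edge(u, v, weight=d_uv)
          return H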

  17. Full rank tilings of finite abelian groups
    SIAM Journal on Discrete Mathematics (2006)
    doi | pdf
    A tiling of a finite abelian group G is a pair (V,A) of subsets of G such that 0 is in both V and A and every g ∈ G can be uniquely written as g=v+a with v ∈ V and a ∈ A. Tilings are a special case of normed factorizations, a type of factorization by subsets that was introduced by Hajós [Časopis Pěst. Mat. Fys., 74 (1949), pp. 157-162]. A tiling is said to be full rank if both V and A generate G. Cohen, Litsyn, Vardy, and Zémor [SIAM J. Discrete Math., 9 (1996), pp. 393-412] proved that any tiling of Z_2^n can be decomposed into full rank and trivial tilings. We generalize this decomposition from Z_2^n to all finite abelian groups. We also show how to generate larger full rank tilings from smaller ones, and give two sufficient conditions for a group to admit a full rank tiling, showing that many groups do admit them. In particular, we prove that if p ≥ 5 is a prime and n ≥ 4, then Z_p^n admits a full rank tiling. This bound on n is tight for 5 ≤ p ≤ 11, and is conjectured to be tight for all primes p.
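
    The definitions above are easy to check by brute force on small groups; the sketch below works over G = Z_m^n (a toy setting chosen for concreteness) and tests both the unique-representation property and the full rank condition.

      # Brute-force tiling and full-rank checks over Z_m^n, with elements as tuples.
      def add(x, y, m):
          return tuple((a + b) % m for a, b in zip(x, y))

      def is_tiling(V, A, m, n):
          zero = (0,) * n
          if zero not in V or zero not in A:
              return False
          sums = [add(v, a, m) for v in V for a in A]
          return len(sums) == m ** n and len(set(sums)) == m ** n

      def generates(S, m, n):
          span = {(0,) * n}                       # additive closure of S
          grew = True
          while grew:
              grew = False
              for x in list(span):
                  for s in S:
                      y = add(x, s, m)
                      if y not in span:
                          span.add(y)
                          grew = True
          return len(span) == m ** n

      def is_full_rank_tiling(V, A, m, n):
          return is_tiling(V, A, m, n) and generates(V, m, n) and generates(A, m, n)

      # In Z_4: ({0,1}, {0,2}) is a tiling, but not full rank since {0,2} only
      # generates the subgroup {0,2}.
      print(is_tiling({(0,), (1,)}, {(0,), (2,)}, m=4, n=1))            # True
      print(is_full_rank_tiling({(0,), (1,)}, {(0,), (2,)}, m=4, n=1))  # False
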
  18. Enumeration of Balanced Tournament Designs on 10 points
    Journal of Combinatorial Mathematics and Combinatorial Computing (2005)
    (with Jeffrey Dinitz)
    We enumerate the balanced tournament designs on 10 points (BTD(5)) and find that there are exactly 30,220,557 nonisomorphic designs. We also find that there are exactly two nonisomorphic partitioned BTD(5)'s and 8,081,114 factored BTD(5)'s on 10 points. We enumerate other classes of balanced tournament designs on 10 points and give examples of some of the more interesting ones. In 1988 Corriveau enumerated the nonisomorphic BTD(4)'s, finding that there are 47 of them. This paper enumerates the next case and provides another good example of the combinatorial explosion phenomenon.