I am a fifth-year PhD candidate at Carnegie Mellon University’s Computer Science Theory Group.

I am broadly interested in algorithms, with a focus on algorithms under uncertainty. Topics include online, dynamic and distributed algorithms.

I am fortunate to be advised by Bernhard Haeupler.

Prior to coming to CMU, I completed an MSc at the Technion, where I was fortunate to be advised by
Nir Ailon,
Seffi Naor and
Hadas Shachnai.

GHC 7001, Carnegie Mellon University


Distributed graph algorithms that separately optimize for either the number of
rounds used or the total number of messages sent have been studied extensively. However,
algorithms simultaneously efficient with respect to both measures have been elusive.
For example, only very recently was it shown that for Minimum Spanning Tree
(MST), an optimal message and round complexity is achievable (up to polylog terms)
by a single algorithm in the CONGEST model of communication.

In this paper we provide algorithms that are simultaneously round- and message-optimal
for a number of well-studied distributed optimization problems. Our main
result is such a distributed algorithm for the fundamental primitive of computing simple
functions over each part of a graph partition. From this algorithm we derive round- and
message-optimal algorithms for multiple problems, including MST, Approximate Min-Cut
and Approximate Single Source Shortest Paths, among others. On general graphs
all of our algorithms achieve worst-case optimal Õ(D + √n) round complexity and Õ(m)
message complexity. Furthermore, our algorithms require an optimal Õ(D) rounds and
Õ(n) messages on planar, genus-bounded, treewidth-bounded and pathwidth-bounded
graphs.

We study the classic Bin Packing problem in
a fully-dynamic setting, where new items can arrive and old items may depart.
We want algorithms with low asymptotic competitive
ratio *while repacking items sparingly* between updates. Formally, each item
i has a *movement cost* c_i ≥ 0, and we want to use at most
α·OPT bins and incur movement cost at most
γ·c_i upon the arrival or departure of item i, either in the worst case or in an amortized sense, for
α and γ as small as possible. We call γ the *recourse*
of the algorithm.
This is motivated by cloud storage applications,
where fully-dynamic Bin Packing models the problem of data
backup to minimize the number of disks used, as well as communication
incurred in moving file backups between disks.
Since the set of files changes over time, we could
recompute a solution periodically
from scratch, but this would give a high number of disk rewrites, incurring
a high energy cost and possible wear and tear of the disks. In this work, we
present optimal
tradeoffs between number of bins used and number of items repacked, as well as
natural extensions of the latter measure.
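
To make the model concrete, here is a toy sketch of the naive recompute-from-scratch baseline discussed above: after every update it repacks everything with first-fit-decreasing and charges c_i for every surviving item whose bin changes. This is the baseline such work improves on, not the paper's algorithm; first-fit-decreasing and the exact charging rule are my own illustrative choices.

```python
def first_fit_decreasing(items):
    """items: dict id -> size in (0, 1]. Returns dict id -> bin index."""
    bins = []  # remaining capacity of each unit-capacity bin
    assignment = {}
    for item, size in sorted(items.items(), key=lambda kv: -kv[1]):
        for b, free in enumerate(bins):
            if size <= free + 1e-9:
                bins[b] -= size
                assignment[item] = b
                break
        else:  # no open bin fits: open a new one
            bins.append(1.0 - size)
            assignment[item] = len(bins) - 1
    return assignment

class RecomputeFromScratch:
    """Naive fully-dynamic baseline: repack everything after each update."""

    def __init__(self):
        self.items = {}       # id -> size
        self.cost = {}        # id -> movement cost c_i
        self.assignment = {}  # id -> bin index
        self.total_movement = 0.0

    def _repack(self):
        new = first_fit_decreasing(self.items)
        # Charge c_i for each surviving item whose bin changed (a modeling
        # choice; newly arrived items are placed for free here).
        moved = [i for i in new if i in self.assignment and self.assignment[i] != new[i]]
        self.total_movement += sum(self.cost[i] for i in moved)
        self.assignment = new

    def insert(self, i, size, c):
        self.items[i], self.cost[i] = size, c
        self._repack()

    def delete(self, i):
        del self.items[i]
        self._repack()
```

Tracking `total_movement` this way makes the tradeoff visible: the baseline uses few bins but may pay movement cost on nearly every update, which is exactly the recourse the paper's algorithms keep small.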

Conference Paper ICALP 2018

We present a simple randomized reduction from fully-dynamic integral matching algorithms to fully-dynamic “approximately-maximal” fractional matching algorithms. Applying this reduction to the recent fractional matching algorithm of Bhattacharya, Henzinger, and Nanongkai (SODA 2017), we obtain a novel result for the integral problem. Specifically, our main result is a randomized fully-dynamic (2+ɛ)-approximate integral matching algorithm with small polylog worst-case update time. For the (2+ɛ)-approximation regime, only a fully-dynamic fractional (2+ɛ)-matching algorithm with worst-case polylog update time was previously known, due to Bhattacharya et al. (SODA 2017). Ours is the first algorithm to maintain approximate matchings with worst-case update time better than polynomial, for any constant approximation ratio. As a consequence, we also obtain the first constant-approximate maximum weight matching algorithm with worst-case polylogarithmic update time.

In this paper we study the classic online matching problem,
introduced in the seminal work of Karp, Vazirani
and Vazirani (STOC 1990), in regular graphs. For such
graphs, an optimal deterministic algorithm as well as
efficient algorithms under stochastic input assumptions
were known. In this work, we present a novel randomized
algorithm with competitive ratio tending to one
on this family of graphs, under *adversarial* arrival order.
Our main contribution is a novel algorithm which
achieves competitive ratio 1 - O(√(log d)/√d)
in expectation on d-regular graphs. In contrast, we show that
all previously-studied online algorithms have competitive
ratio strictly bounded away from one. Moreover, we
show the convergence rate of our algorithm’s competitive
ratio to one is nearly tight, as no algorithm achieves
competitive ratio better than 1 - O(1/√d). Finally, we
show that our algorithm yields a similar competitive ratio
with high probability, as well as guaranteeing each
vertex a probability of being matched tending to *one*.
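
For intuition, here is a minimal sketch of one natural previously-studied baseline, RANDOM, which matches each arriving vertex to a uniformly random free neighbor. This is not the paper's algorithm, and the graph encoding and names are illustrative.

```python
import random

def online_random_matching(n, neighbors, arrival_order):
    """The RANDOM baseline: match each arriving online vertex to a
    uniformly random free offline neighbor.

    neighbors[v] lists the offline neighbors of online vertex v."""
    match = {}           # offline vertex -> online vertex
    matched_online = {}  # online vertex  -> offline vertex
    for v in arrival_order:
        free = [u for u in neighbors[v] if u not in match]
        if free:
            u = random.choice(free)
            match[u] = v
            matched_online[v] = u
    return matched_online

# A d-regular bipartite "circulant" graph: online vertex v neighbors
# offline vertices v, v+1, ..., v+d-1 (mod n).
n, d = 20, 3
neighbors = [[(v + j) % n for j in range(d)] for v in range(n)]
matching = online_random_matching(n, neighbors, list(range(n)))
```

Any greedy-style algorithm leaves no edge with two free endpoints, so the matching is maximal and has size at least n/2; the abstract's point is that such baselines stay a constant factor below the optimum n on d-regular graphs, while the new algorithm's ratio tends to one as d grows.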

We present a faster distributed broadcasting primitive for
the classical radio network model.

The most basic distributed radio network broadcasting primitive, called Decay, dates back to the
PODC '87 result of Bar-Yehuda, Goldreich, and Itai. In any radio network with some informed source
nodes, running Decay for O(d log n + log^2 n) rounds informs all nodes at most d hops away from a
source, with high probability. Since 1987 this primitive has been the most important building block
for implementing many other functionalities in radio networks. The only improvements to this
decades-old algorithm are slight variations due to [Czumaj, Rytter; FOCS '03] and [Kowalski, Pelc;
PODC '03], which achieve the same functionality in O(d log(n/d) + log^2 n) rounds. To obtain a speedup
from this, d, and thus also the network diameter, needs to be near-linear, i.e., larger than n^(1-ε).

Our new distributed primitive spreads messages for d hops in
O(d (log n log log n)/(log d) + log^{O(1)} n) rounds with high probability.
This improves over Decay for any super-polylogarithmic d = log^{ω(1)} n and achieves
a near-optimal O(d log log n) running time for d = n^ε.
This also makes progress on an open question of Peleg.
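
The classical Decay primitive itself is easy to simulate in a toy radio-network model, where a node receives a message only in steps where exactly one of its neighbors transmits. The sketch below is of the 1987 primitive, not of the new one, and all parameter choices are illustrative.

```python
import random

def decay_broadcast(adj, sources, phases, steps_per_phase):
    """Toy simulation of the Decay primitive of Bar-Yehuda, Goldreich, and
    Itai: in each phase every informed node transmits, then halts for the
    rest of the phase with probability 1/2 after each step. A node becomes
    informed when exactly one of its neighbors transmits (no collision
    detection)."""
    informed = set(sources)
    for _ in range(phases):
        active = set(informed)  # nodes still transmitting in this phase
        newly = set()
        for _ in range(steps_per_phase):
            for v in adj:
                if v not in informed:
                    transmitting_neighbors = [u for u in adj[v] if u in active]
                    if len(transmitting_neighbors) == 1:  # collision-free
                        newly.add(v)
            # Each transmitter independently stays active with prob. 1/2.
            active = {u for u in active if random.random() < 0.5}
        informed |= newly
    return informed

# Example: broadcast along a 6-node path from node 0.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
reached = decay_broadcast(path, [0], phases=10, steps_per_phase=3)
```

On a path the frontier advances at least one hop per phase (the frontier node always has exactly one informed neighbor, so its first reception is collision-free), which matches the O(d log n + log^2 n) flavor of the guarantee.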

Conference Paper EC 2015. **Invited to TEAC Special Issue for EC 2015**

Motivated by Internet targeted advertising, we address several ad allocation problems. Prior work
has established these problems admit no randomized online algorithm better than (1-1/e)-competitive
([Karp et al. 1990; Mehta et al. 2007]), yet simple heuristics have been observed to perform much
better in practice. We explain this phenomenon by studying a generalization of the bounded-degree
inputs considered by [Buchbinder et al. 2007], graphs which we call (k,d)-bounded. In such graphs
the maximal degree on the online side is at most d and the minimal degree on the offline side is at
least k. We prove that for such graphs, these problems' natural greedy algorithms attain competitive
ratio 1-(d-1)/(k+d-1), tending to one as d/k tends to zero. We prove this bound is tight for these
algorithms.

Next, we develop deterministic primal-dual algorithms for the above problems achieving competitive
ratio 1-(1-1/d)^k > 1-1/e^{k/d}, i.e., exponentially smaller loss as a function of k/d, and strictly better
than 1-1/e whenever k ≥ d. We complement our lower bounds with matching upper bounds for the vertex-
weighted problem. Finally, we use our deterministic algorithms to prove by dual-fitting that simple
randomized algorithms achieve the same bounds in expectation. Our algorithms and analysis differ
from previous ad allocation algorithms, which largely scale bids based on the spent fraction of
their bidder's budget, whereas we scale bids according to the number of times the bidder could have
spent as much as her current bid. Our algorithms differ from previous online primal-dual algorithms,
as they do not maintain dual feasibility, but only primal-to-dual ratio, and only attain dual
feasibility upon termination. We believe our techniques could find applications to other well-
behaved online packing problems.
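
As a quick numerical sanity check of the two bounds in this abstract (purely illustrative arithmetic; function names are mine):

```python
import math

def greedy_ratio(k, d):
    """Greedy's competitive ratio on (k,d)-bounded graphs: 1-(d-1)/(k+d-1)."""
    return 1 - (d - 1) / (k + d - 1)

def primal_dual_ratio(k, d):
    """The primal-dual algorithms' ratio: 1-(1-1/d)^k > 1-1/e^{k/d}."""
    return 1 - (1 - 1 / d) ** k

# The primal-dual bound dominates 1 - e^{-k/d} pointwise, and beats the
# classical 1 - 1/e barrier as soon as k >= d.
for k, d in [(5, 10), (10, 10), (40, 10)]:
    assert primal_dual_ratio(k, d) > 1 - math.exp(-k / d)
assert primal_dual_ratio(10, 10) > 1 - 1 / math.e
```

Both ratios tend to one as k/d grows, but the primal-dual bound's loss shrinks exponentially in k/d, versus only linearly in d/k for greedy.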

Conference Paper WWW (Companion Volume) 2015

The majority of Web email is known to be generated by machines even when one excludes spam. Many
machine-generated email messages such as invoices or travel itineraries are critical to users.
Recent research studies establish that causality relations between certain types of machine-
generated email messages exist and can be mined. These relations link a given
message to a past message that gave rise to its creation. For example, a shipment notification
message can often be linked to a past online purchase message. Instead of studying how an incoming
message can be linked to the past, we propose here to focus on predicting future email arrival as
implied by causality relations. Such a prediction method has several potential applications, ranging
from improved ad targeting in up-sell scenarios to reducing false positives in spam detection.

We introduce a novel approach for predicting which types of machine-generated email messages,
represented by so-called "email templates", a user should receive in future time windows. Our
prediction approach relies on (1) statistically inferring causality relations between email
templates, (2) building a generative model that explains the inbox of each user using those
causality relations, and (3) combining those results to predict which email templates are likely to
appear in future time frames. We present preliminary experimental results and some data insights
obtained by analyzing several million inboxes of Yahoo Mail users, who voluntarily opted-in for such
research.

In many computational and economic models of multi-agent interaction, each participant repeatedly "best-responds" to the others' actions. Game theory research on the prominent "best-response dynamics" model typically relies on the premise that the interaction between agents is somehow synchronized. However, in many real-life settings, e.g., internet protocols and large-scale markets, the interaction between participants is asynchronous. We tackle the following important questions: (1) When are best-response dynamics guaranteed to converge to an equilibrium even under asynchrony? (2) What is the (computational and communication) complexity of verifying guaranteed convergence? We show that, in general, verifying guaranteed convergence is intractable. In fact, our main negative result establishes that this task is undecidable. We exhibit, in contrast, positive results for several environments of interest, including complete, computationally-tractable, characterizations of convergent systems. We discuss the algorithmic implications of our results, which extend beyond best-response dynamics to applications such as asynchronous Boolean circuits.
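
A toy illustration of the dynamics studied here (my own minimal sketch, not the paper's verification machinery): under *simultaneous* updates, a game with dominant strategies converges, while a coordination game started from a mismatched profile oscillates forever, showing how convergence hinges on the scheduling model.

```python
def best_response_dynamics(payoffs, strategies, start, max_rounds=100):
    """Synchronous best-response dynamics for a two-player game.

    payoffs[p][(s0, s1)] is player p's payoff; returns the profile if a
    fixed point (a pure Nash equilibrium) is reached, else None."""
    profile = list(start)
    for _ in range(max_rounds):
        new = []
        for p in (0, 1):
            def payoff(s, p=p):
                prof = list(profile)
                prof[p] = s
                return payoffs[p][tuple(prof)]
            new.append(max(strategies[p], key=payoff))
        if new == profile:            # nobody wants to deviate
            return tuple(profile)
        profile = new
    return None                        # no convergence within max_rounds

# Prisoner's dilemma: both players have a dominant strategy ('D'), so the
# dynamics converge from any starting profile.
pd = [
    {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1},
    {('C', 'C'): 3, ('C', 'D'): 5, ('D', 'C'): 0, ('D', 'D'): 1},
]
assert best_response_dynamics(pd, [['C', 'D'], ['C', 'D']], ('C', 'C')) == ('D', 'D')

# A coordination game from a mismatched profile oscillates forever under
# simultaneous updates, despite having two pure equilibria.
coord = [{(a, b): int(a == b) for a in 'AB' for b in 'AB'} for _ in range(2)]
assert best_response_dynamics(coord, [['A', 'B'], ['A', 'B']], ('A', 'B')) is None
```

The paper's question is the harder, asynchronous analogue: deciding whether convergence is guaranteed under *every* activation schedule, which the abstract shows is undecidable in general.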

Journal Paper Discrete Mathematics & Theoretical Computer Science 2011

Given a graph G=(V,E) and a weight function w: E → R, a coloring of the vertices of G induced by w is defined by χ_w(v) = ∑_{e∋v} w(e) for all v ∈ V. In this paper, we show that determining whether a particular graph has a weighting of the edges from {1,2} that induces a proper vertex coloring is NP-complete.
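
To illustrate the definition (a brute-force toy, exponential in the number of edges, consistent with the NP-completeness; all names are mine):

```python
from itertools import product

def induced_coloring(n, edges, weights):
    """chi_w(v) = sum of w(e) over edges e incident to v."""
    chi = [0] * n
    for (u, v), w in zip(edges, weights):
        chi[u] += w
        chi[v] += w
    return chi

def has_proper_12_weighting(n, edges):
    """Does some edge weighting from {1,2} induce a proper vertex coloring,
    i.e., chi_w(u) != chi_w(v) for every edge (u,v)?"""
    for weights in product((1, 2), repeat=len(edges)):
        chi = induced_coloring(n, edges, weights)
        if all(chi[u] != chi[v] for u, v in edges):
            return True
    return False

# The path 0-1-2 admits such a weighting (the middle vertex always gets a
# strictly larger sum), while the triangle does not: properness would force
# the three edge weights to be pairwise distinct, impossible with {1,2}.
assert has_proper_12_weighting(3, [(0, 1), (1, 2)])
assert not has_proper_12_weighting(3, [(0, 1), (1, 2), (0, 2)])
```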

"Education's purpose is to replace an empty mind with an open one."

- Malcolm Forbes.

I have had the pleasure of teaching various courses, both at the Technion and CMU.

__At Carnegie Mellon:__

**Probability and Computing** (15-359/659):

Spring 2015

__At the Technion:__

**Data Structures 1** (234218):

**Orthoepy** /ôrˈTHōəpē/

1. The correct or accepted pronunciation of words.

2. The study of correct or accepted pronunciation.

"How do you pronounce 'Wajc'?", you ask?

**Answer: The second syllable of the words e-vites and invites**.

Using the International Phonetic Alphabet, Wajc would be spelled [vajts].

Still unsure? Check out this recording of Kira Radinsky:

Here's a subset of a Polish transcription table to explain how those letters are supposed to represent those sounds.

| Polish | English sound | Example |
| --- | --- | --- |
| Ww | v | as in "vat" |
| Aa | o | as in "hot" |
| Jj | y | as in "yes" |
| Cc | ts | as in "bits" |

And before you ask: no, I don't speak Polish. Here's a relevant meme by Krzysztof Onak: