Complete Positivity in Adversarial Information Theory
April 28, 2021 (Zoom - See email or contact organizers for link)

Abstract: Given a distribution P_{W_1, W_2, W_3}, when is it possible to construct an *arbitrarily long* sequence of random variables X_1, X_2, ..., X_M such that P_{X_i, X_j, X_k} = P_{W_1, W_2, W_3} for all 1 <= i < j < k <= M (assuming all random variables take values in the same finite alphabet)? We show that this is possible *if and only if* P_{W_1, W_2, W_3} is "completely positive", i.e., (W_1, W_2, W_3) is a mixture of i.i.d. random variables.
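The "if" direction of this characterization is constructive and can be made concrete with a short sketch (the function name and interface below are my own, purely illustrative): if the triple is a mixture of i.i.d. variables, draw the mixing component once, then sample the whole sequence i.i.d. from that component. Every triple (X_i, X_j, X_k) then has the same joint law, for any sequence length M.

```python
import random

def sample_sequence(M, mixture, seed=None):
    """Sample X_1, ..., X_M from a mixture of i.i.d. random variables.

    mixture: list of (weight, dist) pairs, where dist maps each alphabet
    symbol to its probability under that mixture component.
    """
    rng = random.Random(seed)
    # draw the mixing component ONCE for the entire sequence...
    weights = [w for w, _ in mixture]
    theta = rng.choices(range(len(mixture)), weights=weights)[0]
    dist = mixture[theta][1]
    symbols, probs = list(dist.keys()), list(dist.values())
    # ...then draw all M symbols i.i.d. from that component
    return [rng.choices(symbols, weights=probs)[0] for _ in range(M)]
```

Because the component is fixed before any symbol is drawn, the resulting sequence is exchangeable, and the marginal of any three coordinates equals the original mixture-of-i.i.d. distribution. A mixture of two point masses, for instance, yields a sequence that is constant on each realization but not i.i.d. overall.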

Our motivation comes from a seemingly disjoint subject: coding theory. It is known that a high-dimensional (multiple) packing is equivalent to a uniquely (list) decodable code for an adversarial channel. For instance, a uniquely (list) decodable error-correcting code is a (multiple) packing in {0,1}^n. We consider fairly general channels, i.e., packing fairly general shapes (with multiplicity), a setting that encodes a surprisingly large family of interesting problems, including list-decoding, list-recovery, perfect hashing, typewriter channels, Z-channels, etc. (A more delicate version of) the above theorem can be used to pin down the exact threshold below which a positive rate is achievable, i.e., below which an exponential-size (multiple) packing exists.
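The packing/decodability equivalence can be illustrated in the simplest binary Hamming case with a brute-force check (a toy sketch with made-up function names, not the general shapes considered in the talk): radius-e Hamming balls around codewords are pairwise disjoint exactly when every pair of codewords is at distance at least 2e + 1, i.e., the code uniquely corrects e errors; allowing each point to be covered by up to L balls gives a multiplicity-L packing, i.e., list-decodability with list size L.

```python
from itertools import combinations, product

def hamming(x, y):
    """Hamming distance between two equal-length binary strings."""
    return sum(a != b for a, b in zip(x, y))

def is_unique_packing(code, e):
    # radius-e balls are pairwise disjoint iff every pair of codewords
    # is at distance >= 2e + 1, i.e. the code uniquely corrects e errors
    return all(hamming(c1, c2) >= 2 * e + 1
               for c1, c2 in combinations(code, 2))

def is_multiple_packing(code, e, L, n):
    # multiplicity-L packing: no point of {0,1}^n lies in more than L
    # radius-e balls, which is e-error list-decodability with list size L
    for bits in product("01", repeat=n):
        x = "".join(bits)
        if sum(hamming(x, c) <= e for c in code) > L:
            return False
    return True
```

For example, the length-3 repetition code {000, 111} is a unique (L = 1) packing with radius 1, while {000, 011} is not, since the point 001 lies in both radius-1 balls; the latter is, however, a multiplicity-2 packing.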

It turns out that the power of the above theorem goes beyond understanding threshold phenomena in coding theory. If time permits, I will discuss applications to other problems.

Based on joint work with Amitalok J. Budkuley (IIT Kharagpur) and Sidharth Jaggi (University of Bristol, CUHK).