At its core, coding theory studies how many elements of a (finite) vector space one can pack subject to the constraint that no two elements are too close. Typically, the notion of closeness is that of Hamming distance, that is, the distance between two vectors is the number of coordinates on which they differ. In a *rank-metric code*, codewords are matrices over a finite field and the distance between codewords is the rank of their difference. A *linear* rank-metric code is a subspace of matrices (over the field to which the matrix entries belong) such that every non-zero matrix in the subspace has large rank. Rank-metric codes have found numerous applications in random network coding, as well as magnetic recording, public-key cryptography, and space-time coding.
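To make the rank metric concrete, here is a small illustrative sketch (my own example, not from the abstract): the rank distance between two binary matrices is the rank, over GF(2), of their entrywise difference, which can be computed by Gaussian elimination.

```python
def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2), via row reduction with XOR arithmetic."""
    M = [row[:] for row in M]  # work on a copy
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        # find a pivot row with a 1 in column c
        pivot = next((r for r in range(rank, rows) if M[r][c]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        # eliminate column c from all other rows (addition over GF(2) is XOR)
        for r in range(rows):
            if r != rank and M[r][c]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def rank_distance(A, B):
    """Rank-metric distance: rank of the difference A - B (XOR over GF(2))."""
    return rank_gf2([[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)])

A = [[1, 0], [0, 1]]
B = [[1, 1], [0, 1]]
print(rank_distance(A, B))  # difference is [[0,1],[0,0]], so the distance is 1
```

Note that these two matrices differ in one entry, so their Hamming distance (viewing each matrix as a vector of entries) is also 1; in general the rank distance is at most the Hamming distance, since each differing entry can raise the rank of the difference by at most one.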

The typical algorithmic task for a code is that of *unique-decoding*: given a received word that may have been corrupted, the decoder should output the closest codeword. *List-decoding* relaxes this notion, allowing decoding beyond half the minimum distance of the code at the cost of returning a (hopefully small) list of candidate codewords. List-decoding has proved to be a highly fruitful avenue of study in the Hamming case, and recently there has also been a great deal of interest in the list-decodability of rank-metric codes.
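The distinction can be seen on a toy example (my own, not from the talk): with a brute-force decoder in the Hamming metric, a received word beyond the unique-decoding radius may be equidistant from several codewords, and a list-decoder simply returns all of them.

```python
from itertools import product

def hamming(u, v):
    """Hamming distance: number of coordinates on which u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def list_decode(code, received, radius):
    """Brute-force list-decoding: all codewords within the given radius."""
    return [c for c in code if hamming(c, received) <= radius]

# A tiny binary code with minimum distance 4; unique decoding
# is only guaranteed up to radius floor((4-1)/2) = 1.
code = [(0, 0, 0, 0), (1, 1, 1, 1)]
received = (1, 1, 0, 0)  # two corrupted positions, whichever codeword was sent

print(list_decode(code, received, 1))  # [] -- beyond the unique-decoding radius
print(list_decode(code, received, 2))  # both codewords: a list of size 2
```

The point of the list-decoding literature is that for good codes, radii well beyond half the minimum distance still yield small lists, and efficient decoders can avoid this brute-force search.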

This work concerns the list-decodability of *random linear* rank-metric codes, and establishes a trade-off between list-size and gap to optimal decoding radius that is similar to what is known (and is straightforward to establish) for completely random rank-metric codes. Almost all known constructions of rank-metric codes are linear, and random code ensembles achieve the best known trade-offs, so it is of interest to understand the performance of random linear (rank-metric) codes.

Based on joint work with Venkatesan Guruswami.