Error-correcting codes provide a systematic way to add redundancy to data so that errors that occur during transmission can be tolerated. In addition to their obvious applications in communications, codes have found numerous applications in complexity theory and cryptography, and are by now indispensable items in the toolkit of theoretical computer scientists. The basic principle in the design of codes is to ensure that distinct messages get encoded into codewords that are far apart (e.g., differing in many coordinates), so that one cannot be confused with another even in the presence of some errors. The "distance" of a code quantifies the minimum separation between distinct codewords. In addition to the combinatorial challenge of finding and explicitly specifying such configurations with good distance properties, the fundamental algorithmic challenge in coding theory and practice is to efficiently recover the transmitted codeword from a noisy received word. In a nutshell, the game here is to construct codes with low redundancy together with fast algorithms to correct large amounts of noise.
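As a deliberately toy illustration of the distance principle, the following Python sketch implements the classical 3-fold repetition code; the example and its function names are our own illustrative choices, not constructions from this work. A code of distance d lets a nearest-codeword decoder correct any pattern of fewer than d/2 errors, here one error per block of three.

    # Toy illustration: the 3-fold repetition code has distance d = 3,
    # so nearest-codeword decoding tolerates floor((d-1)/2) = 1 error
    # per block. Real codes achieve far better redundancy/error trade-offs.

    def hamming_distance(x, y):
        """Number of coordinates in which two equal-length words differ."""
        return sum(a != b for a, b in zip(x, y))

    def encode(message_bits):
        """Repeat each bit three times (rate 1/3, distance 3)."""
        return [b for bit in message_bits for b in (bit, bit, bit)]

    def decode(received):
        """Majority vote in each block of three corrects one flipped bit."""
        return [int(sum(received[i:i + 3]) >= 2)
                for i in range(0, len(received), 3)]

    msg = [1, 0, 1]
    word = encode(msg)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
    word[4] ^= 1                # the channel flips one symbol
    assert decode(msg := word) == [1, 0, 1]  # majority vote recovers the message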
Our work in coding theory has broadly been in two directions. The first concerns asymptotic efficiency: ideally, we would like linear time algorithms for both encoding and error-correction. We have recently given constructions of codes with a near-optimal trade-off between redundancy and error-correction capability, together with linear time encoding and decoding algorithms. Expanders, a class of sparse yet highly connected graphs, play a crucial role in these constructions.
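The paragraph above does not spell out the mechanism, but one classical way expanders enable linear time decoding is the "flip" algorithm of Sipser and Spielman for expander codes: repeatedly flip any bit that violates more parity checks than it satisfies. The Python sketch below runs this rule on a small illustrative parity-check matrix (the matrix and the use of numpy are our assumptions for the demo, not the codes referred to above; with a genuinely expanding check graph the same rule provably corrects a constant fraction of errors in linear time).

    import numpy as np

    # Hedged sketch of the Sipser-Spielman "flip" decoder. H is a toy
    # parity-check matrix (that of the [7,4] Hamming code), standing in
    # for the adjacency structure of an expander graph.
    H = np.array([[1, 1, 0, 1, 0, 0, 0],
                  [1, 0, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 0, 1, 0],
                  [1, 1, 1, 0, 0, 0, 1]], dtype=int)

    def flip_decode(word, H, max_rounds=100):
        w = word.copy()
        deg = H.sum(axis=0)              # checks incident on each bit
        for _ in range(max_rounds):
            syndrome = H @ w % 2         # 1 marks an unsatisfied check
            if not syndrome.any():
                return w                 # all parity checks satisfied
            unsat = H.T @ syndrome       # unsatisfied checks per bit
            excess = 2 * unsat - deg     # (unsatisfied) - (satisfied)
            j = int(np.argmax(excess))
            if excess[j] <= 0:
                return None              # flip rule is stuck: failure
            w[j] ^= 1                    # flip the worst offender
        return None

    word = np.zeros(7, dtype=int)        # the all-zeros codeword
    word[2] ^= 1                         # one adversarial bit flip
    assert (flip_decode(word, H) == 0).all()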
The second, more extensive, direction of work has been on "list decoding". Traditionally, decoding algorithms were required to always output a unique answer, which limited them to correcting fewer than d/2 errors, where d is the distance of the code. However, if we relax this requirement and instead allow the decoder to output a small number of candidate answers, error recovery well beyond the d/2 "barrier" becomes possible. Moreover, in several settings, especially in complexity-theoretic applications, having a small set of candidates suffices and in fact exactly fits the bill (and in any case a list decoder is better than a decoder that simply gives up and reports a decoding failure when it finds no codeword within radius d/2 of the received word).
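To make the relaxation concrete, here is a brute-force Python sketch; the tiny code and radii are illustrative assumptions, and real list decoders never enumerate all codewords. At radius d/2 the ball around a received word can contain two codewords, so a unique decoder must give up, while a list decoder usefully outputs both candidates.

    # Brute-force list decoding on a toy code: the 4-fold repetition
    # code {0000, 1111} has distance d = 4. A received word at distance
    # 2 = d/2 from both codewords defeats unique decoding, but a list
    # decoder simply returns both candidates.

    def hamming_distance(x, y):
        return sum(a != b for a, b in zip(x, y))

    codewords = [(0, 0, 0, 0), (1, 1, 1, 1)]

    def list_decode(received, radius):
        """Return every codeword within the given Hamming radius."""
        return [c for c in codewords
                if hamming_distance(c, received) <= radius]

    received = (0, 0, 1, 1)
    print(list_decode(received, 1))   # []  -- unique decoding gives up
    print(list_decode(received, 2))   # both codewords: a short, useful list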
However, for a long time no polynomial time list decoding algorithms (beyond the d/2 radius) were known for useful codes. This changed a few years back with the discovery of polynomial time list decoding algorithms for certain algebraic codes. Beyond serving as the core algorithmic results in the area, these discoveries renewed interest in questions on the exact power of list decoding, and in particular on the trade-off between the number of errors that can be list decoded and the amount of redundancy needed in the code. While several good results are known, much progress remains to be made on both the combinatorial and algorithmic sides of this and related questions.
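One concrete instance of this trade-off, quoted here for context rather than taken from the text above: for Reed-Solomon codes of rate R, unique decoding corrects a (1-R)/2 fraction of errors, while the Guruswami-Sudan list decoding algorithm handles a 1 - sqrt(R) fraction in polynomial time. A short numeric comparison in Python:

    import math

    # Error-correction radius (as a fraction of the block length) for
    # rate-R Reed-Solomon codes: unique decoding versus Guruswami-Sudan
    # list decoding. These are standard bounds, quoted for illustration.
    for R in (0.1, 0.25, 0.5, 0.75):
        unique_radius = (1 - R) / 2
        list_radius = 1 - math.sqrt(R)
        print(f"rate {R:.2f}: unique {unique_radius:.3f}, "
              f"list {list_radius:.3f}")

    # At rate 1/4, list decoding reaches error fraction 0.500 versus
    # 0.375 for unique decoding; the advantage grows as the rate drops.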
Recently we were able to construct codes with linear time list decoding algorithms: this result departs from the algebraic nature of previously known list-decodable codes and thus gives us a new avenue in the search for good codes. There has been much interplay between codes and other pseudorandom objects like expanders and extractors, and much more remains to be gained from these interconnections.
In addition to adversarial errors, one can also consider other noise models (e.g., where a certain fraction of the codeword symbols are erased); these raise the same questions, but with new trade-offs among the coding parameters.
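For the erasure model in particular, a code of distance d can recover any d - 1 erased symbols, since the decoder knows which positions are missing. The Python sketch below (the single-parity-check code is our illustrative assumption) shows the simplest case: distance 2, hence recovery from one erasure.

    # Hedged sketch for the erasure channel: the positions of the lost
    # symbols are known, so recovery amounts to solving for the missing
    # values. This toy code appends one parity bit, giving distance 2
    # and thus tolerance of a single erasure.

    def encode(bits):
        return bits + [sum(bits) % 2]       # append XOR of message bits

    def decode(received):
        """Recover at most one erased position (marked None)."""
        erased = [i for i, b in enumerate(received) if b is None]
        if len(erased) > 1:
            raise ValueError("more erasures than the distance allows")
        if erased:
            # Every codeword has even parity, so the erased symbol is
            # the XOR of all surviving symbols.
            received[erased[0]] = sum(b for b in received
                                      if b is not None) % 2
        return received[:-1]                # strip the parity symbol

    word = encode([1, 0, 1])                # [1, 0, 1, 0]
    word[1] = None                          # channel erases one symbol
    assert decode(word) == [1, 0, 1]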