## Tuesday, Apr 06, 2021

**Time**: 12:00 - 1:00 PM ET
**Relevant Paper(s)**:

**Abstract**: Invariant feature learning has become a popular alternative to Empirical Risk Minimization as practitioners recognize the need to ignore features that may be misleading at test time in order to improve out-of-distribution generalization. Early results in this area leverage variation across environments to provably identify the features that are directly causal with respect to the target variable. More recent work attempts to apply this technique to deep learning, frequently with no formal guarantees of an algorithm's ability to recover the correct features. Most notably, the seminal work introducing Invariant Risk Minimization (IRM) gave only a loose bound for the linear setting and no results for the non-linear setting; despite this, a large number of variations have been suggested. In this talk, I'll introduce a formal latent variable model which encodes the primary assumptions made by these works. I'll then give the first characterization of the optimal solution to the IRM objective, deriving the exact number of environments needed for the solution to generalize in the linear case. Finally, I'll present the first analysis of IRM when the observed data is a non-linear function of the latent variables: in particular, we show that IRM can fail catastrophically when the test distribution is even moderately different from the training distribution, which is exactly the problem IRM was intended to solve. These results extend readily to all recent variations on IRM, demonstrating that these works on invariant feature learning fundamentally do not improve over standard ERM. This talk is based on work with Pradeep Ravikumar and Andrej Risteski, to appear at ICLR 2021.
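For readers unfamiliar with the objective discussed above, the following is a minimal sketch of the commonly used practical relaxation of IRM (the "IRMv1" penalty): the average per-environment risk plus a penalty on the gradient of each environment's risk with respect to a fixed scalar classifier. The data, featurizer, and loss here are all illustrative assumptions, not the talk's actual setup.

```python
# Hedged sketch of the IRMv1-style objective with squared loss and a scalar
# classifier w, evaluated at w = 1. Synthetic data; names are illustrative.
import numpy as np

def irm_penalty(phi, y):
    """Squared norm of dR_e/dw at w = 1, where
    R_e(w) = mean((w * phi - y)^2) is the risk in environment e."""
    grad = 2.0 * np.mean((phi - y) * phi)  # analytic gradient at w = 1
    return grad ** 2

rng = np.random.default_rng(0)

# Two training "environments" that differ only in their noise level,
# standing in for variation across environments.
envs = []
for noise in (0.1, 1.0):
    x = rng.normal(size=200)      # featurizer output phi(x) = x (identity)
    y = x + noise * rng.normal(size=200)
    envs.append((x, y))

lam = 1.0  # penalty weight; a hyperparameter in practice
risk = np.mean([np.mean((phi - y) ** 2) for phi, y in envs])
penalty = sum(irm_penalty(phi, y) for phi, y in envs)
objective = risk + lam * penalty
```

In a real model, `phi` would be a learned featurizer and the gradient would be taken by automatic differentiation; the talk's results concern what minimizing this kind of objective can and cannot guarantee.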

**Bio**: **Elan Rosenfeld** is a PhD student in the Machine Learning Department at CMU, advised by Andrej Risteski and Pradeep Ravikumar. He is interested in the theoretical foundations of machine learning, with a particular focus on robust learning, representation learning, and out-of-distribution generalization. Elan completed his undergraduate degrees in Computer Science and Statistics & Machine Learning at CMU, where his senior thesis on human-usable password schemas was advised by Manuel Blum and Santosh Vempala.