Fast and Sloppy - Scaling up Linear Models

Abstract: In this talk I present an overview of a range of methods designed to scale up linear models, both in terms of model complexity and in terms of their ability to process large amounts of data. The first aspect is addressed by hashing feature vectors, both for prediction and for matrix factorization. The second aspect can be dealt with by parallelizing stochastic gradient descent optimization procedures; I will present an algorithm suitable for multicore parallelism.

Biography: Dr. Alex Smola is Principal Researcher at Yahoo! Research, Santa Clara, and Adjunct Professor at the Australian National University. Prior to that, until 2008, he was Senior Principal Researcher and Program Leader of the Statistical Machine Learning Program at NICTA. He received his Diplom in Physics from the University of Technology in Munich and his doctoral degree in Computer Science from the University of Technology in Berlin. He has worked at AT&T Research, the Fraunhofer Institute, the Australian National University, NICTA, and Yahoo!. His research interests are nonparametric methods for estimation, in particular kernel methods and exponential families; this includes Support Vector Machines, Gaussian processes, and conditional random fields. He is currently working on large-scale methods for document analysis and representation, such as nonparametric Bayesian models. He has organized workshops at NIPS, EUROCOLT, and ICML, as well as five Machine Learning Summer Schools. Moreover, he has served on the senior program committees of COLT, ICML, NIPS, and AAAI. He has written one book and edited four books.
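
The abstract mentions hashing feature vectors but, being an announcement, does not spell the technique out. The sketch below is a generic illustration of the hashing trick for a linear model, not the specific method presented in the talk; the bucket count, hash function (CRC32), and function names are illustrative assumptions.

```python
import zlib

import numpy as np


def hashed_features(tokens, num_buckets=2**18):
    """Map a list of feature names into a fixed-size vector via the
    hashing trick; a second hash gives a random sign, which reduces
    the bias introduced by collisions."""
    x = np.zeros(num_buckets)
    for tok in tokens:
        data = tok.encode("utf-8")
        idx = zlib.crc32(data) % num_buckets                     # bucket index
        sign = 1.0 if zlib.crc32(b"sign:" + data) % 2 == 0 else -1.0
        x[idx] += sign
    return x


def predict(weights, tokens):
    """Linear prediction computed directly on the hashed representation."""
    return float(weights @ hashed_features(tokens, num_buckets=weights.shape[0]))


# Example: no dictionary mapping tokens to indices is ever needed,
# so the memory footprint is fixed regardless of vocabulary size.
w = np.zeros(2**18)
score = predict(w, ["word:scaling", "word:linear", "bigram:linear_models"])
```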
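The abstract also mentions parallelizing stochastic gradient descent for multicore machines. The following is a minimal sketch of one such scheme, lock-free asynchronous SGD on a shared weight vector, offered only to illustrate the general idea rather than the algorithm presented in the talk; note that in CPython the GIL limits real speedup, so this shows the update pattern, not a tuned implementation.

```python
import threading

import numpy as np


def parallel_sgd(X, y, num_threads=4, epochs=5, lr=0.01):
    """Asynchronous SGD for a least-squares linear model: each thread
    sweeps its own shard of the data and updates the shared weight
    vector without any locking."""
    n, d = X.shape
    w = np.zeros(d)  # shared parameters, updated concurrently by all threads

    def worker(rows):
        for _ in range(epochs):
            for i in rows:
                xi, yi = X[i], y[i]
                grad = (xi @ w - yi) * xi   # gradient of 0.5 * (x.w - y)^2
                w[:] -= lr * grad           # unsynchronized in-place update

    shards = np.array_split(np.random.permutation(n), num_threads)
    threads = [threading.Thread(target=worker, args=(rows,)) for rows in shards]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w


# Example on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = X @ rng.normal(size=20)
w_hat = parallel_sgd(X, y)
```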