NIPS 2008 Workshop Announcement

Parallel Implementations of Learning Algorithms:
What Have You Done For Me Lately?

December 13, 2008

Interest in parallel hardware approaches, including multicore processors, specialized hardware, and multimachine architectures, has recently increased as researchers have sought to scale learning algorithms up to large, complex models and large datasets. In this workshop, a panel of invited speakers will present results of investigations into hardware approaches for accelerating a number of different learning and simulation algorithms. Additional contributions will be presented in poster spotlights and a poster session at the end of the one-day workshop.

Our intent is to provide a broad survey of the space of hardware approaches in order to capture the current state of activity in this venerable domain of study. Approaches to be covered include silicon, FPGA, and supercomputer architectures, for applications such as Bayesian network models of large and complex domains, simulations of cortex and other brain structures, and large-scale probabilistic algorithms.

Potential participants include researchers interested in accelerating their algorithms to handle large datasets, and systems designers providing such hardware solutions. The oral presentations will include plenty of time for questions and discussion, and the poster session at the end of the workshop will afford further opportunities for interaction among workshop participants.

Workshop Organizing Committee:

  • Robert Thibadeau, Seagate Research
  • Dan Hammerstrom, Portland State University
  • David Touretzky, Carnegie Mellon University
  • Tom Mitchell, Carnegie Mellon University

Final Program

Morning Session

7:30 AM   Introduction and overview
7:40 AM   Robert Thibadeau, Seagate Research
          When (And Why) Storage Devices Become Computers
8:10 AM   Michael Arnold, Salk Institute
          Multi-Scale Modeling in Neuroscience
8:40 AM   Kenneth Rice, Clemson University
          A Neocortex-Inspired Cognitive Model on the Cray XD1
9:10 AM   Coffee break
9:30 AM   Dan Hammerstrom, Portland State University
          Nanoelectronics: The Original Positronic Brain?
10:00 AM  Clement Farabet, Cyril Poulet, and Yann LeCun, New York University; Jefferson Y. Han, Perceptive Pixel, Inc.
          CNP: An FPGA-based Processor for Convolutional Networks
10:30 AM  Ski break

Afternoon Session

3:30 PM   David Andersen, Carnegie Mellon University
          Using a Fast Array of Wimpy Nodes
4:00 PM   Rajat Raina and Andrew Ng, Stanford University
          Learning Large Deep Belief Networks using Graphics Processors
4:30 PM   Daniel R. Coates, Portland State University; Craig Rasmussen and Garret T. Kenyon, Los Alamos National Laboratory
          A Bird's-Eye View of PetaVision, the World's First Petaflop/s Neural Simulation
5:00 PM   Coffee break
5:20 PM   Poster spotlights (4 minutes each):

Brian Tanner, University of Alberta
Reinforcement Learning Recordbook <RL@Home>

Michiel D'Haene, Benjamin Schrauwen, and Dirk Stroobandt, Ghent University
Efficient, Scalable, and Parallel Event-Driven Simulation Techniques for Complex Spiking Neuron Models

Ning-Yi Xu, Jing Yan, Rui Gao, Xiongfei Cai, Zenglin Xia, and Feng-Hsiung Hsu, Microsoft Research Asia
FPGA-based Accelerators for "Learning to Rank" in Web Search Engines

Hans Peter Graf, Srihari Cadambi, Igor Durdanovic, Venkata Jakkula, Murugan Sankardadass, Eric Cosatto, and Srimat Chakradhar, NEC Laboratories America
An FPGA-based Massively Parallel Hardware Accelerator for SVM and CN

5:40 PM   General discussion
6:00 PM   Poster session
6:30 PM   Adjourn