Neural Information Processing Systems
1994 Post Conference Workshops

December 2-3
Radisson Resort, Vail, Colorado


Friday


Novel Control Techniques from Biological Inspiration

Organizers:

Richard D. Braatz (rdb@beethoven.che.caltech.edu)
James S. Schwaber (schwaber@eslrx7.es.dupont.com)
David Touretzky (dst@CS.CMU.EDU) 
Thomas F. Enders   
K. P. Unnikrishnan (unni@neuro.cs.gmr.com)

Intended Audience:

Those interested in novel control techniques inspired by neurobiology.

Description:

The well-known control theoretician Roger Brockett recently stated that profound advances in control theory may be achieved by developing a theory of control that sheds significant light on the neuroanatomy of at least one animal. The development of such a theory is clearly very challenging, and many important problems remain unsolved. The objective of this workshop is to review some recent progress on developing novel control techniques inspired by the study of biological control systems, intermixed with ample time for discussion. Issues for discussion may include, but are not limited to: 1) is our current understanding of biological systems sufficient that reverse-engineering their attributes of robustness, reliability, and nonlinear functional behavior is now practical? 2) just how novel are the control techniques described by the presenting authors? 3) what is the future potential?
Panel participants: Martin Pottmann, DuPont
                    Babatunde A. Ogunnaike, DuPont
                    James Keeler, MCC, Austin
                    Michael A. Henson, Louisiana State University
                    Gerald Dreyfus, ESPCI, Paris
                    Francis J. Doyle, Purdue
A complementary workshop: "Open and Closed Problems in Neural Network Robotics" organized by Marcus Mitchell will be held on Saturday.

The presenters represent three research groups with active research in the area of bio-control. The workshop aims to be thought-provoking, with a substantial amount of time reserved for discussion. A detailed schedule of the workshop follows:

7:30      Dave Touretzky and A. David Redish summarize their
                cognitive neuroscience theory of rodent navigation with
                implications for hippocampal function, and its
                implementation on a mobile robot.

8:00      discussion period for Touretzky/Redish presentation

8:15      Thomas F. Enders and collaborators summarize their
                research efforts in using neural networks in the
                development of techniques for the scheduling, control,
                and on-line optimization of batch fermentation
                processes (e.g., the alcoholic fermentation with yeast).

8:45      discussion period for Enders et al. presentation

9:00      panel/general discussion       

4:30      James S. Schwaber, Richard D. Braatz, Francis J. Doyle,
                Michael A. Henson, Martin Pottmann, and Babatunde
                A. Ogunnaike summarize their research efforts in
                developing novel process control techniques via
                inspiration from the cardiorespiratory reflexes.

5:00      discussion period for Schwaber et al. presentation

5:15      other workshop attendees present their work       

6:00      panel/general discussion

Machine Learning Approaches in Computational Molecular Biology

Organizers:

Pierre Baldi (pfbaldi@juliet.caltech.edu)
Soren Brunak (brunak@cbs.dth.dk)

Intended Audience:

Researchers interested in the application of neural and other statistical methods to problems in molecular biology. A wealth of protein and DNA primary sequences is being generated by genome and other sequencing projects. Computational tools are increasingly needed to process this massive amount of data: to organise, compare and classify sequences, to detect weak patterns and similarities, to find and parse coding regions, to predict structure and function, and to reconstruct evolutionary trees. Sequence analysis problems have been tackled with classical statistical techniques, but also using artificial neural networks. Another trend in recent years has been the casting of DNA and protein sequence problems in terms of formal languages, using probabilistic automata, Hidden Markov Models and stochastic context-free grammars. Machine learning techniques appear to be a promising approach in this area.
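
As a toy illustration of the probabilistic-automaton view (a minimal sketch with invented states and probabilities, not drawn from any of the talks below), the following Python fragment scores a DNA fragment under a two-state Hidden Markov Model using the standard forward recursion:

    import numpy as np

    # Hypothetical two-state HMM over DNA symbols:
    # state 0 is AT-rich, state 1 is GC-rich (all numbers invented).
    symbols = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
    init  = np.array([0.5, 0.5])                # initial state distribution
    trans = np.array([[0.9, 0.1],               # P(next state | current state)
                      [0.1, 0.9]])
    emit  = np.array([[0.4, 0.1, 0.1, 0.4],     # P(symbol | state 0)
                      [0.1, 0.4, 0.4, 0.1]])    # P(symbol | state 1)

    def log_likelihood(seq):
        """Forward algorithm: log P(seq | model), summed over all state paths."""
        alpha = init * emit[:, symbols[seq[0]]]
        loglik = np.log(alpha.sum())
        alpha /= alpha.sum()                    # rescale to avoid underflow
        for s in seq[1:]:
            alpha = (alpha @ trans) * emit[:, symbols[s]]
            loglik += np.log(alpha.sum())
            alpha /= alpha.sum()
        return loglik

    print(log_likelihood("ATATATGCGCGC"))

In practice the transition and emission probabilities are fitted to sequence data (e.g., by Baum-Welch/EM), which is the sense in which such models are a machine learning approach.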

This workshop will concentrate on the presentation and discussion of the most recent results on the application of machine learning approaches to problems in computational molecular biology. Emphasis will be both on methodological issues and biological relevance.

The morning session will be devoted to DNA/RNA problems; the afternoon session will be devoted to protein problems.

7:30 Pierre Baldi
``Hidden Markov Models of Human Genes''

8:00 Soren Brunak
``Construction of Low Similarity Data Sets of Sequences
with Functional Sites for Prediction Purposes''

8:30 Tim Hunkapiller
TBA

9:00 Anders Krogh
``Predicting Protein Secondary Structure with Structured Networks''
 
4:30 Paul Stolorz
``Links between statistical physics and dynamic programming:
applications to computational molecular biology''

5:00 Gary Stormo
``Neural Networks for the Identification of Functional
Domains Common to Multiple Sequences''

5:30 Niels Tolstrup
``Neural Network Model of the Genetic Code''

6:00 Discussion

Novelty Detection and Adaptive System Monitoring

Organizers:

Thomas Petsche (petsche@scr.siemens.com)
Stephen J. Hanson (jose@learning.siemens.com)
Mark Gluck (gluck@pavlov.rutgers.edu)

Intended Audience:

Researchers and practitioners with an interest in failure detection and prediction.

Description:

Unexpected failure of a machine or system can have severe and expensive consequences. One of the most infamous examples is the sudden failure of military helicopter rotor gearboxes, which leads to a complete loss of the helicopter and all aboard. There are many similar, more mundane examples. The unexpected failure of a motor in a paper mill causes a loss of the product in production as well as lost production time while the motor is replaced. A computer or network overload, due to normal traffic or a virus invasion, can lead to a system crash that causes loss of data and downtime.

In these examples and others, it can be cost effective to ``monitor'' the system of interest and signal an operator when the monitored conditions indicate an imminent failure. This is analogous to periodically glancing at the fuel gauge in your car to make sure you do not run out of gas.

An adaptive system monitor, therefore, is an adaptive algorithm that estimates the condition of the system from a set of periodic measurements. This task is typically complicated by the fact that the measurements are complex and high dimensional. Adaptation is necessary since the measurements will depend on the peculiarities of the system being monitored and its environment.

This workshop will focus on the use of novelty detection for the problem of system monitoring. A novelty detector is a device or algorithm which is trained on a set of examples and learns to recognize or reproduce those examples. Any new example that is significantly different from the training set is identified as ``novel'' because it is unlike any example in the training set.
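
As a minimal sketch of the idea (a nearest-neighbor variant with invented data and threshold, not any presenter's algorithm), a new measurement can be flagged as novel when its distance to the nearest training example exceeds a threshold calibrated on the training set itself:

    import numpy as np

    rng = np.random.default_rng(0)
    normal_ops = rng.normal(0.0, 1.0, size=(500, 8))    # measurements from healthy operation

    def novelty_score(x, train):
        """Distance to the nearest training example; large means unlike anything seen."""
        return np.min(np.linalg.norm(train - x, axis=1))

    # Calibrate a threshold from the training data itself:
    # each point's distance to its nearest *other* training point.
    d = [np.partition(np.linalg.norm(normal_ops - x, axis=1), 1)[1] for x in normal_ops]
    threshold = np.percentile(d, 99)

    x_new = rng.normal(3.0, 1.0, size=8)                # a reading from a degraded machine
    print("novel" if novelty_score(x_new, normal_ops) > threshold else "normal")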

The purpose of the discussion is to bring together researchers working on different real world monitoring tasks and those working on novelty detection algorithms in order to hasten the development of broadly applicable adaptive monitoring algorithms.

We expect presentations on several application areas involving a variety of novelty detection algorithms:

7:30-9:00
Helicopter gearbox monitoring: Four presentations by Robert R. Kolesar
(ONR), Kourosh Danai (U Mass), Peter Kazlas (U Colorado, Boulder) and
Mark Gluck (Rutgers).

4:30-6:00
Engine and electric motor monitoring: Three presentations by Ken Marko
(Ford), Scott Smith (Boeing), and Thomas Petsche (Siemens).

Recognizing novelty in classification tasks: Two presentations by
Germano Vasconcelos (University of Kent) and Dimitrios Bairaktaris
(University of Stirling).

(This list of speakers is preliminary and subject to change.)

Anthropomorphic Speech Signal Processing

Organizers:

Hynek Hermansky (hynek@eeap.ogi.edu)
Misha Pavel (pavel@eeap.ogi.edu)

Intended Audience:

Practitioners in speech recognition, researchers interested in the form and role of end-organ models.

Description:

Biologically faithful front-ends for speech and image tasks have seemed an attractive alternative to more traditional engineering representations - if we choose neural paradigms for recognition, then why not look to biology for representation? With the availability of silicon implementations at reasonable cost, this enterprise would be expected to flourish.

Instead, more traditional representations often remain more effective. This workshop addresses why biologically faithful front-ends do not couple well to current neural-based recognizers. We will discuss biological front ends, and alternatives, particularly representations based on traditional engineering practice but modified to include what is known about human perception. We will also consider the circumstances in which biological front ends do offer an advantage, and explore what directions recognition technology must take to make better use of these models.

The Workshop will be oriented towards extensive discussions. Several potential participants have interests in presenting short talks to stimulate the discussions, among them:

7:30	Jont Allen (Bell Laboratories, Murray Hill)
"Speech Recognition with Human Face"

8:00	Andreas Andreou (Johns Hopkins University)
"Analog Auditory Models"

8:30	Malcolm Slaney (Interval Research):
"Correlograms"

9:00 Discussion


4:30	Nelson Morgan (International Computer Science Institute and U C Berkeley):
"Current Research in Stochastic Perceptual Auditory-event-based Models (SPAM) "

5:00	Chalapathy Neti	(IBM Watson Center):
"Neuromorphic speech processing for speech recognition in noisy environments."

Computational Role of Lateral Connections in the Cortex

Organizer:

Joseph Sirosh (sirosh@cs.utexas.edu)

Intended Audience:

Those interested in the computational significance of lateral connection patterns in the cortex.

Description:

Substantial recent evidence indicates that intracortical connections develop in an activity-dependent manner much like the afferent connections to the cortex. For example, the pattern of long-range lateral connections is closely coupled to the pattern of feature detectors in the visual cortex, and can be altered by strabismus and visual deprivation. Several possible functions have been suggested for the lateral connections. They may (1) modulate receptive field properties in a context-dependent manner and mediate perceptual filling in, (2) mediate adult cortical plasticity such as dynamic receptive fields, (3) store associative information such as Gestalt rules, (4) act as the substrate for stimulus-dependent synchronization and feature binding, and (5) form the locus of perceptual learning in the primary visual cortex.

The workshop will focus on collating the open questions and hypotheses about the functional role of intracortical connectivity, and formulating an agenda for computational and analytical modeling. How do patterned lateral connections form and develop? What do the patterns of lateral connectivity tell us about information stored in the cortex? How could associative information in the lateral connections be expressed during cortical processing? How could lateral connections mediate learning processes in the cortex? What is their role in cortical plasticity? What types of neural network models are best suited for addressing such questions?


7:30--7:55: Gary Blasdel        "Correlation between patterns of activity and
                                lateral connectivity in primate visual cortex"

7:55--8:20: Terrence Sejnowski  "Physiological Effects of Intrinsic Horizontal
                                Connections in Visual Cortex"

8:20--8:45: Jack Cowan          "Geometric Visual Hallucinations and Lateral
                                Cortical Connections"

8:45--9:10: Dawei Dong          "ADD: Associative Dynamical Decorrelation---
                                A Quantitative Theory of Self-Organization
                                and Function of Visual Cortex"

9:10--9:30: ***DISCUSSION***


4:30--4:55: Shimon Edelman:     "Computational models of 3D object 
                                representation in the visual cortex, and the 
                                possible role of lateral connections"
4:55--5:20: Jonathan Marshall   "Do lateral connections help stabilize 
                                perception during occlusion events?"
5:20--5:45: DeLiang Wang        "Lateral connections and coherent oscillations"

5:45--6:10: Joseph Sirosh       "Cooperative self-organization of lateral 
                                connections and feature detectors in the 
                                visual cortex"

6:10--6:30: ***DISCUSSION***


Friday / Saturday


Unsupervised Learning Rules and Visual Processing

Organizers:

Lei Xu (lxu@cs.cuhk.hk)
Zhaoping Li (zhaoping@cs.ust.hk)
Laiwan Chan (lwchan@cs.cuhk.hk)

Intended Audience:

Those interested in unsupervised learning algorithms and their relevance to visual processing.

Description:

There are three major types of unsupervised learning rules: the competitive learning or vector quantization type, the information-preserving or Principal Component Analysis (PCA) type, and the self-organizing topological map type. All of them are closely related to visual processing. For instance, they are used to interpret the development of orientation-selective and other feature-selective cells, as well as the development of cortical retinotopic maps such as ocular dominance and orientation columns.
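
For concreteness, here is a minimal sketch (with synthetic data) of one rule of the PCA type, Oja's single-unit learning rule, which converges to the first principal component of its input distribution:

    import numpy as np

    rng = np.random.default_rng(0)
    # Zero-mean inputs whose leading principal axis is (1, 1)/sqrt(2).
    A = np.array([[2.0, 1.5], [1.5, 2.0]])
    X = rng.normal(size=(5000, 2)) @ A

    w = rng.normal(size=2)
    eta = 0.005
    for x in X:
        y = w @ x
        w += eta * y * (x - y * w)    # Oja's rule: Hebbian growth plus implicit normalization

    print(w / np.linalg.norm(w))      # approximately the leading eigenvector, (1, 1)/sqrt(2)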

The development of the study of learning and the understanding of visual processing facilitate each other. In recent years, a number of advances have been made in both areas. For instance, in the area of unsupervised learning, (1) numerous algorithms for competitive learning, PCA learning, and self-organizing maps have been proposed; (2) several new theories and principles, such as maximum coherence, minimum description length, finite mixtures with EM learning, statistical physics, Bayesian theory, exploratory projection pursuit, and local PCA, have been developed; (3) theories for unifying various unsupervised learning rules (e.g., multisets modeling learning theory) have been explored. In the area of visual processing, more knowledge is being gathered experimentally about how visual development can be preserved or altered by neural activities, neural transmitters/receptors, and the visual environment, providing the bases and constraints for various learning rules and motivating new learning rule studies. In addition, there has been more theoretical understanding of the dependence of visual processing units on the visual input environment, supporting the rationale for unsupervised learning.

The purpose of this workshop is twofold: (1) to summarize the advances in unsupervised learning and to discuss whether these advances can help the investigation of the visual processing system; (2) to screen the current results on visual processing and to check whether they can motivate or provide hints for developing unsupervised learning theories. The targeted participants are researchers working in either or both of the areas of learning and the study of visual processing.

Friday
Lei Xu (chair)

7:30  "Time-Domain Solutions of Oja's Equations", 

John Wyatt and Ibrahim Elfadel (MIT)

7:50  "Kmeans Performs Newton Optimization", 

Leon Bottou (Neuristique Paris) and Yoshua Bengio (University of Montreal)

8:10  "Multisets Modeling Learning: An Unified Framework  for
 Unsupervised Learning", 

Lei Xu (The Chinese University of Hong Kong and Peking University)

8:30  "Information Theory Motivation For Projection Pursuit", 

Nathan Intrator (Tel-Aviv University)

8:50 "The Helmholtz Machine", 

Peter Dayan  (University of Toronto)

9:00 Discussion

Zhaoping Li (chair)

4:30 "Predictability Minimization And Visual Processing",  

Juergen Schmidhuber (Technische Universitaet Muenchen)

4:50 "Non-linear, Non-gaussian Information Maximisation: Why It's More Useful", 

Tony Bell (Salk Institute)

5:10 "Understanding The Visual Cortical Coding From Visual Input Statistics",
Zhaoping Li (Hong Kong University of Science and Technology)

5:30 "Formation Of Orientation And Ocular Dominance In  

Macaque Striate Cortex", Klaus Obermayer (Universitaet Bielefel)

5:50 "Putative Functional Roles Of Self-organized Lateral Connectivity In The Primary Visual  
Cortex",  Joseph Sirosh (University of Texas at Austin)

6:00 Discussion


Saturday

Laiwan Chan (chair)

7:30 "Density Estimation with a Hybrid of Neural Networks and Gaussian 

Mixtures",
Yoshua Bengio (University of Montreal)

7:50 "Learning Object Models through Domain-Specific Distance Measures"
Eric Mjolsness (UCSD) and Steve Gold (Yale University) 


8:10 "Auto-associative Learning of On-line Handwriting Using Recurrent
Neural Networks", Dit-Yan Yeung (Hong Kong University of Science and
Technology)

8:30 "Training Mixtures of Gaussians with Deficient Data", 

Volker Tresp (Siemens AG, Central Research)

8:50 "A Fast Method for Activating Competitive Self-Organizing Neural-Networks", 

George F. Harpur and Richard W. Prager (Cambridge University)

9:10 Discussion


Lei Xu (chair)

4:30  "Neuromodulatory Mechanisms For Regulation Of Cortical 

Self-organization", Michael E. Hasselmo (Harvard University)

4:50  "Learning To Cluster Visual Scenes With Contextual Modulation", 

Sue Becker (McMaster University)

5:10 "Invisibility in Vision: Occlusion, Motion, Grouping, and
Self-Organization" Jonathan A. Marshall (University of North Carolina
at Chapel Hill)

5:30 "A Comparative Study on Receptive Filters by PCA Learning and Gabor
Functions",  Irwin King and Lei Xu (The Chinese University of Hong Kong)


5:50 "Detection of Visual Feature Locations with a Growing Neural Gas Network"
Bernd Fritzke (Ruhr-Universitaet Bochum)

6:10 Discussion


Statistical and Neural Network Approaches to Natural Language Processing

Organizer:

Gary Cottrell (gary@cs.ucsd.edu)

Description:

Recently there has been a great deal of activity in the Computational Linguistics community in applying statistical techniques to large text corpora. These techniques have been used for word sense disambiguation, tagging of lexical items by their syntactic class, and for extracting frequent parse trees for faster parsing. At the same time, there has been a recognition among psycholinguists that statistical properties of sentences play an important role in the way that people process certain constructions.

Neural network models of natural language processing have mainly focused in recent years on lower-level processes, including learning of past tense constructions, pronunciation, and reading, although some approaches to parsing and learning of grammars have been attempted, with mixed results. In fact, the best results for larger grammars appear to have been achieved by hybrid approaches, while inductive learning techniques have been most successful on small, restricted grammars.

FRIDAY MORNING:

Introductions

  7:30 AM  Mitch Marcus: "Statistical approaches to NLP" 
	
  8:00 AM  Gary Cottrell: "Neural net approaches to NLP"

Learning FSAs and PDAs

  8:30 AM  Lee Giles "Learning a class of large finite state
           machines with a recurrent network"
  8:50 AM  Sreerupa Das "Differentiable symbol processing and an
           application to language induction"

Machine translation

  9:10 AM  Patrick Juola and James Martin: "Extraction of Transfer
           Functions through Psycholinguistic Principles"

FRIDAY AFTERNOON:

Parsing

  4:30 PM  George Berg "Single Network Approaches to Connectionist Parsing"
  4:50 PM  Ajay Jain, "PARSEC: Let Your Network do the Walking, but Tell
           it Where to Go."
  5:10 PM  Stan Kwasny: "Training SRNs to Learn Syntax"
  5:30 PM  Risto Miikkulainen "Parsing with modular networks"

Discussion
 


  5:50 PM - 6:30 PM The assembled crew

SATURDAY MORNING

Word sense disambiguation/discovery/large text corpora

  7:30 AM  Hinrich Schuetze: "Unsupervised word sense disambiguation
           for improved text retrieval"
  7:50 AM  David Yarowsky "A comparison of word sense disambiguation
           algorithms"
  8:10 AM  Kevin Lund & Curt Burgess: "A model of high-dimensional 

           semantics from lexical co-occurrence"
  8:30 AM  Eric Brill "Statistical language processing: What are numbers
           good for?"

Discussion
 


  8:50 AM - 9:30 AM The assembled crew

SATURDAY AFTERNOON

Psycholinguistic modeling

   4:30 PM  Michael Gasser "Modular networks for language acquisition:
            Why and how"
   4:50 PM  David Plaut "Learning arbitrary and quasi-regular mappings
            in word reading with attractor networks"
   5:10 PM  Mark St. John "Practice makes perfect: The key role of
            construction frequency in sentence comprehension"
   5:30 PM  Kim Plunkett (unconfirmed), "Learning the Arabic plural:
            The case for minority default mappings in connectionist nets."

Discussion
 


  5:50 PM - 6:30 PM The assembled crew

Neural Networks in Medicine

Organizer:

Paul E. Keller (pe_keller@pnl.gov)

Intended Audience:

People active or interested in applying neural networks in medicine.

Description:

Health care reform has become a major national focus. Among the many issues that have surfaced in the current health care debate, improving reliability and lowering cost are two areas where neural networks have the potential to be most beneficial.

The neural network approach to medical information processing offers many advantages.

The goal of this workshop is to investigate how neural networks can help improve the quality of health care and lower its cost. To accomplish this, the workshop will be a forum for researchers active in the field of medical applications of neural networks to present their research and to participate in panel discussions. The panel discussions will be an opportunity for dialog among the workshop participants. Topics to be presented include pap smear analysis, cancer diagnosis, cancer screening, biomagnetic/bioelectric signal processing, image segmentation, control of cardiac chaos, and prediction and control of glucose metabolism. Topics of discussion will likely include clinical testing, reduction of false-negatives, how automation can lower health care costs, and the process of receiving government approval for medical products and procedures that incorporate neural network technology.

Friday

7:30
Optimizing networks for Atlas guided segmentation of brain images
Anand Rangarajan, Yale University

8:00
Neural Net Analysis of Solitary Pulmonary Nodules
Armando Manduca, Mayo Foundation

8:30
Using Neural Networks for Semi-automated Pap Smear Screening
Laurie J. Mango, MD, and James M. Herriman, Neuromedical Systems, Inc.

9:00
Automated design of optical-morphological structuring elements for Pap smear screening
J. P. Sharpe, R. Narayanswamy, N. Sungar*, H. Duke, R. J. Stewart, L. McKeogh and K. M. Johnson, University of Colorado at Boulder and *California Polytechnic State University

4:30
Comparing the prediction accuracy of statistical models and artificial neural networks in breast cancer
Harry B. Burke, MD, David B. Rosen, Philip H. Goodman, MD*, New York Medical College and *University of Nevada School of Medicine

5:00
Diagnosis of hepatoma by committee
Bambang Parmanto and Paul Munro, University of Pittsburgh

5:30 Panel Discussion


Saturday

7:30
Neural Networks for Nonlinear Processing of Biomagnetic/Bioelectric Signals
Martin Schlang, Michael Haft, and Ralph Neuneier, Siemens AG - Corporate Research and Development

8:00
Neural Networks Distinguish Demented Patients from Elderly Controls Based on EEG Recordings
Beatrice A. Golomb, MD, and Andrew F. Leuchter, MD, UCLA Department of Medicine and UCLA Neuropsychiatric Institute

8:30
Normal and Abnormal EEG Classification using Neural Networks and other techniques
Ah Chung Tsoi, University of Queensland

9:00
Issues in Controlling Cardiac Chaos
Gary W. Flake, Siemens Corporate Research

4:30
Prediction and Control of the Glucose Metabolism of a Diabetic
Volker Tresp, John Moody* and Wolf-Rüdiger Delong, Siemens and *Oregon Graduate Institute

5:00
Experiences in using neural networks for detecting coronary artery disease
Georg Dorffner, Austrian Research Institute for Artificial Intelligence and Department of Medical Cybernetics and AI, University of Vienna



5:30 Panel Discussion

Advances in Recurrent Networks

Organizer:

Hava Siegelmann (iehava@ie.technion.ac.il)

Intended Audience:

Those enamored of, or frustrated with, recurrent nets.

Description:

Unlike feedforward (acyclic) networks, recurrent nets contain feedback loops and thus give rise to dynamical systems. Theoretically, recurrent networks are computationally very powerful. However, their dynamics introduce difficulties for learning and convergence.

This workshop will feature formal sessions, discussions, and a panel discussion aimed at understanding the dynamics, theoretical capabilities, and practical applicability of recurrent networks. The panel discussion will focus on future directions of recurrent network research.
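
To make the dynamical-systems point concrete, here is a minimal discrete-time recurrent network with invented weights; because the state feeds back on itself, a single input pulse produces an entire trajectory, which may decay, oscillate, or become chaotic depending on the recurrent weight matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # recurrent (feedback) weights
    w_in = rng.normal(size=n)                            # input weights

    h = np.zeros(n)                                      # network state
    for t in range(50):
        u = 1.0 if t == 0 else 0.0                       # a single input pulse at t = 0
        h = np.tanh(W @ h + w_in * u)                    # the state update is a nonlinear map
    print(h)   # with small W the pulse dies out; larger W can sustain oscillations or chaos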

Friday

Morning
Applications:  Mahesan Niranjan (chair)

Opening
Lee A. Feldkamp (Remarks on Time-Lagged RNN--Training and Applications)
Jerry Connor (bootstrap methods in time series prediction)
Paul Muller (Programmable Analog Neural Computer: Design and Performance)
Lee Shung (Learning with smoothing Regularization)
Manuel Samuelides (application: design of neuro-filters) 

Gary Kuhn  (application of sensitivity analysis)
Morten With Pedersen (Training and Pruning)


Afternoon
Neural Architectures: Feldkamp (Chair)

Paolo Frasconi (Learning and rule embedding)
Lei Xu / Chan (Mixture models and the EM Algorithm)
Hava Siegelmann (Towards a Neural Language: symbolic to analog)

General discussion


Saturday

Morning
Dynamics and Biology based: Baldi (Chair)

Pierre Baldi (Trajectory Learning Using Shallow Hierarchies of Oscillators.)
Mahesan Niranjan (Stacking Multiple RNN models of the Vocal Tract)
Kenji Doya (Problems concerning bifurcations of network dynamics)
Hugo de Garis (The CAM-Brain Project: Evolution of a Billion Neuron Brain)
Dawei Dong (associative dynamic decorrelation)

Afternoon
Fundamentals  Siegelmann (Chair)

Yoshua Bengio (on the problem with learning long-term dependencies)
Barak Pearlmutter (5-minute note on the alleged difficulty of learning over
	   long time scales)
Ricard Gavalda (on the Kolmogorov complexity of RNN)

Panel discussion

Saturday


Open and Closed Problems in Neural Network Robotics

Organizer:

Marcus Mitchell (marcus@hope.caltech.edu)

Description:

Many of the presumed tenets of neural computation -- nonlinearity, parallelism, adaptation, real-time performance -- suggest that a "neuromorphic" approach to robotics problems could succeed where previous approaches have failed. Further, the amazing motor performance of humans and animals provides additional arguments for the potential benefits of "a sideways look" towards neurobiology. Spurred on by these and other factors, researchers from a variety of backgrounds have produced almost 15 years of research intended to elaborate a biologically-inspired robotics. This workshop will ask the questions "What has been accomplished so far?" and "What is to be done next?"

For all the research attempts to apply neural network ideas to robotics, it is still difficult to get clear answers to questions like "Can you use a neural network to control a 6 d.o.f. arm?" or "Do reinforcement learning and dynamic programming methods get killed by the curse of dimensionality?" In addition, robotics is an area with a vast and intimidating "non-neural" literature which must be considered. The main goal of this workshop is to stimulate discussion about what problems have been successfully attacked and what the most important current open problems entail. A secondary goal of the workshop is to produce a short consensus list of problem descriptions and their status.

A complementary workshop, titled "Novel Control Techniques from Biological Inspiration", organized by Jim Schwaber et al., may be of interest to participants. None of the presentations in that session will be on robotics, and its main focus will be on nonlinear dynamical systems, e.g. in chemical processes and in neural systems. It is a one day workshop to be held Friday.

7:30 - 7:35     Opening Remarks
                Marcus Mitchell, Caltech

7:35 - 8:00     Why it's harder to control your robot than your arm:
                closed, open and irrelevant issues in inverse kinematics
                Dave Demers, UCSD

8:05 - 8:30     Open Problem:  Optimal Motor Hidden Units
                Terry Sanger, JPL

8:35 - 9:00     Neural Network Vision for Outdoor Robot Navigation
                Dean Pomerleau, CMU




4:30 - 4:55     Learning New Representations and Strategies
                Chris Atkeson, Georgia Tech

5:00 - 5:25     A Semi-Crisis for Neural Network Robotics:
                Formal Specification of Robot Learning Tasks
                Andrew Moore, CMU

5:30 - 6:30     Closing Discussion

Neural Network Architectures with Time Delay Connections for Nonlinear Signal Processing: Theory and Applications

Organizers:

Andrew D. Back (back@elec.uq.oz.au)
Eric A. Wan (ericwan@eeap.ogi.edu)

Intended Audience:

Researchers interested in the role of nonlinear feedforward structures that integrate elements of linear signal processing as an alternative to recurrent nets.

Description:

Nonlinear signal processing using neural network models is a topic of recent interest in various application areas. Recurrent networks offer a potentially rich and powerful modelling capability, though they may suffer from some problems in training. On the other hand, simpler networks which have an overall feedforward structure, but which draw more strongly on linear signal processing approaches, have been proposed. The resulting structures can be viewed as nonlinear generalizations of linear filters. This workshop is aimed at addressing issues surrounding networks which may be viewed in a nonlinear signal processing framework, focussing in particular on those which employ some form of time delay connections and generally limited recurrent connections. We intend to consolidate some of the recent theoretical and practical results, as well as addressing open issues.
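
As a sketch of the simplest such structure (invented weights, not any particular speaker's model), consider a single neuron whose input synapse is an FIR filter: a tapped delay line feeding a static nonlinearity, i.e., a linear filter generalized by a nonlinear output stage:

    import numpy as np

    def fir_neuron(x, w, b=0.0):
        """y[t] = tanh(sum_k w[k] * x[t-k] + b): an FIR filter through a nonlinearity."""
        y = np.zeros(len(x))
        for t in range(len(x)):
            window = x[max(0, t - len(w) + 1): t + 1][::-1]   # x[t], x[t-1], ...
            y[t] = np.tanh(np.dot(w[:len(window)], window) + b)
        return y

    x = np.sin(np.linspace(0, 8 * np.pi, 200))    # a toy input signal
    w = np.array([0.5, 0.3, 0.2, -0.1])           # time-delay connection weights (invented)
    print(fir_neuron(x, w)[:5])

Because the only memory is the finite delay line, such networks can be trained with feedforward methods while still modelling temporal structure.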

7:30-7:45
Opening Discussion - Andrew Back, University of Queensland

7:45-8:00
"Computational Capabilities of Local-Feedback Recurrent Networks"
Paolo Frasconi
University of Florence, Italy 


8:00-8:15
"Issues in Representation: Recurrent Networks as Sequential Machines"
C. Lee Giles and B.G. Horne
NEC Research Institute

8:15-8:30
"Properties of Recursive Memory Structures"
Jose C. Principe
University of Florida

8:30-8:45
"A Local Model Net Approach to Modeling Nonlinear Dynamic Systems"
Roderick Murray-Smith
MIT


8:45-9:15  Open forum: 5 minute presentations* by participants


9:15-9:30  Question Time and Discussion


4:30-4:45
"A Spatio-Temporal Approach to Visual Pattern Recognition"
Lokendra Shastri
ICSI

4:45-5:00
"The Performance of Recurrent Networks for Classifying Time-Varying Patterns"
Tina Burrows and Mahesan Niranjan
Cambridge University Engineering Department

5:00-5:15
"Nonlinear Infomax With Adaptive Time Delays"
Tony Bell
The Salk Institute

5:15-5:30
"The Sinc Tensor Product Network"
Jerome Soller
University of Utah

5:30-5:45
"Discriminating Between Mental Tasks Using a Variety of EEG Representations"
Chuck Anderson
Colorado State University


5:45-6:00  Open forum: 5 minute presentations* by participants


6:00-6:30  Question Time and Closing Discussion


* Free time slots for 5 min 'soap-box' talks will be available at the
  workshop.

Algorithms for High Dimensional Space: What Works and Why

Organizer:

Michael P. Perrone (mpp@watson.ibm.com)

Intended Audience:

The workshop is targeted at researchers interested in both theoretical and practical aspects of improving network performance.

Description:

The performance of certain regression algorithms is robust as the dimensionality of the data and parameter spaces is increased. Even in cases where the number of parameters is much larger than the number of data points, performance is often robust. The central question of the workshop will be: what makes these techniques robust in high dimensions?

High dimensional spaces have (asymptotic) properties that are nonintuitive when considered from the perspective of the two- and three-dimensional cases generally used for visual examples. Because of this, algorithm design in high dimensional spaces cannot always be done by simple analogy with low dimensional problems. For example, a radial basis network is intuitively appealing for a one-dimensional regression task; but it must be used with care in a 100-dimensional space, and it may not work at all in 1000. Thus familiarity with the nonintuitive properties of high dimensional spaces may lead to the development of better algorithms.
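
A quick numerical illustration of one such property (a sketch, not taken from the talks below): the fraction of a unit cube's volume lying inside its inscribed ball collapses toward zero as the dimension grows, so a "local" neighborhood covers almost none of the space:

    import numpy as np

    rng = np.random.default_rng(0)
    for d in (2, 10, 100):
        pts = rng.uniform(-0.5, 0.5, size=(100000, d))   # uniform samples in the unit cube
        inside = np.linalg.norm(pts, axis=1) <= 0.5      # inside the inscribed ball
        print(d, inside.mean())   # roughly 0.785 at d=2, 0.002 at d=10, 0.0 at d=100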

We will discuss the issues that surround successful nonlinear regression estimation in high dimensional spaces and what we can do to incorporate these techniques into other algorithms and apply them in real-world tasks. The workshop will cover topics including the Curse of Dimensionality, Projection Pursuit, techniques for dimensionality reduction, feature extraction techniques, statistical properties of high dimensional spaces, local methods and all of the tricks that go along with these techniques to make them work.




7:30  "Statistical Properties of High Dimensional Spaces"
      Michael Perrone (IBM T.J. Watson Research Center)

8:00  "Computational Learning and Statistical Prediction"
      Jerome Friedman (Stanford University)

8:30  "Discriminant Adaptive Nearest Neighbor Classification"
      Trevor Hastie and Rob Tibshirani (Stanford University)

9:00  "Local Methods in High Dimension: Are They Surprisingly Good But Miscalibrated?"
      David Rosen (New York Medical College)

Afternoon
---------

4:30  "Is There Anything Positive in High Dimensional Spaces?"
      Nathan Intrator (Tel Aviv University)

5:00  "Three Techniques for Dimension Reduction"
      John Moody (Oregon Graduate Institute)

5:30  "A Local Linear Algorithm for Fast Dimension Reduction"
      Nandakishore Kambhatla (Oregon Graduate Institute)

6:00  "Fuzzy Dimensionality Reduction"
      Yinghua Lin (Los Alamos National Lab)


Doing It Backwards: Neural Networks and the Solution of Inverse Problems

Organizer:

Chris M. Bishop (bishopc@helios.aston.ac.uk)

Intended Audience:

Researchers and practitioners in neural computing interested in inverse problems.

Description:

Many of the tasks for which neural networks are commonly used correspond to the solution of an `inverse' problem. Such tasks are characterized by the existence of a well-defined, deterministic `forward' problem which might, for instance, correspond to causality in a physical system. By contrast the inverse problem may be ill-posed, and may exhibit multiple solutions.

A wide range of different approaches have been developed to tackle inverse problems, and one of the main goals of the workshop is to contrast the way in which they address the underlying technical issues, and to identify key areas for future research. Ample time will be allowed for discussions.
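
As an illustrative sketch (in the spirit of, though not identical to, the gradient-descent inversion talk below; the forward model is invented), one can invert a deterministic forward model f by gradient descent on the squared error; because f is many-to-one, different starting points recover different, equally valid solutions:

    import numpy as np

    def f(x):                       # a deterministic, non-monotonic forward model
        return np.sin(x) + 0.5 * x

    def invert(y_target, x0, lr=0.1, steps=2000):
        """Find x with f(x) ~ y_target by gradient descent on 0.5*(f(x) - y)^2."""
        x = x0
        for _ in range(steps):
            err = f(x) - y_target
            x -= lr * err * (np.cos(x) + 0.5)   # chain rule: err * f'(x)
        return x

    for x0 in (0.0, 3.0, 5.5):       # three starts, three distinct valid inverses
        x = invert(1.5, x0)
        print(x0, "->", round(x, 3), "f(x) =", round(f(x), 3))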

7:30 "Welcome and overview" Chris Bishop (Aston)
7:35 "From ill-posed problems to all neural networks and beyond 

      through regularization" Tomaso Poggio / Federico Girosi (MIT) 

7:55 "Solving inverse problems using an EM approach to density 

      estimation" Zoubin Ghahramani (MIT) 

8:15 "Density estimation with periodic variables" Chris Bishop (Aston) 

8:35 "Doing it forwards, undoing it backwards: high-dimensional
      compression and expansion" Russell Beale (University of Birmingham) 

8:55 "Inversion of feed-forward networks by gradient descent"
      Alexander Linden (Berkeley)
9.15 Discussion

4:30 "An iterative inverse of a talking machine" Sid Fels (Toronto) 

4:50 "Diagnostic problem solving" Sungzoon Cho (Postech, S Korea) 

5:10 "Multiple Models in Inverse Filtering of the Vocal Tract" 

      M Niranjan (Cambridge) 

5:30 "Goal directed model inversion" Silvano Colombano (NASA Ames)
5:50 "Predicting element concentrations in the SSME exhaust plume"
      Kevin Whitaker (University of Alabama)  

6:10 Discussion

Locomotion, Central Pattern Generators, and Sensorimotor Control

Organizer:

Bard Ermentrout (bard@mthbard.math.pitt.edu)

Description:

This workshop addresses problems in locomotion, central pattern generation and sensorimotor control. Topics in central pattern generation include control of relative phases of individual oscillators controlling locomotive patterns, and network models of pattern generation based on intracellular recordings. Computational models of sensorimotor systems based on anatomically defined neural circuitry, as well as synthetic approaches to sensorimotor behavior (simulated artificial ``animals''), will be discussed. Models of vocalization, both in birds and in the development of human speech, provide a look at modeling higher-level function.
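
A minimal sketch of the phase-lag question (a generic chain of phase oscillators with invented parameters, not any presenter's model): descending coupling with a preferred lag locks the chain into a constant intersegmental phase difference, the traveling wave underlying swimming:

    import numpy as np

    n, dt = 10, 0.005                      # body segments and integration step
    omega = 2 * np.pi                      # intrinsic frequency (1 Hz, invented)
    K, psi = 5.0, 0.3                      # coupling gain and preferred intersegment lag
    theta = np.random.default_rng(0).uniform(0, 2 * np.pi, n)

    for _ in range(20000):                 # Euler-integrate the phase equations
        dtheta = np.full(n, omega)                              # every segment oscillates
        dtheta[1:] += K * np.sin(theta[:-1] - theta[1:] - psi)  # driven by anterior neighbor
        theta += dt * dtheta

    print(np.diff(theta))   # converges to -psi per segment: a head-to-tail traveling wave
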
Morning

``Control of Phase-lags in Central Pattern Generator''
Bard Ermentrout, Department of Mathematics
University of Pittsburgh, Pittsburgh, PA 15260
(bard@mthbard.math.pitt.edu)

``A Model for the Locomotor Network in Lampreys''
James T. Buchanan, Department of Biology, Marquette
University, Milwaukee, WI 53233
(6231buchanan@vmsf.csd.mu.edu)

``Artificial Life Approaches to Building Sensorimotor Systems''
Peter M. Todd, Department of Psychology, University of Denver
(ptodd@smith.rowland.org)

Afternoon

``How Can a Bird Sing a Song It Heard?''
Kenji Doya (in collaboration with Terry Sejnowski)
ATR Human Information Processing Research Laboratories
Howard Hughes Medical Institute
The Salk Institute (doya@salk.edu)

``Modeling Chemotaxis in the Nematode C. elegans'',
Shawn Lockery, University of Oregon and Steve Nowlan, Synaptics
(nowlan@synaptics.com)

``Sensorimotor Integration in a Computational Model of the
Articulatory and Phonetic Foundations of Childhood Phonology''
Kevin L. Markey, Department of Computer Science, University of
Colorado, Boulder (markey@tigger.cs.colorado.edu)



We have attempted to ensure that all information is correct, but we cannot guarantee it. Please send comments and corrections to:
L. Douglas Baker
Carnegie Mellon University
ldbapp+nips@cs.cmu.edu