To be published in the 'Handbook of Brain Research and Neural Networks', M.A. Arbib (Ed.), MIT Press, in preparation for 1995.

                     This is a draft (November 1993). Comments are very welcome.

                                           Neurosimulators


                                          Jacob M.J. Murre

                                         MRC APU, Cambridge





1. Introduction

Since 1985 more than a hundred research groups have developed some form of simulator
for neural networks. In this chapter, we will briefly review a selection of about 40 of
these. A neurosimulator could be described as 'a software package created for the specific
purpose of reducing the time and effort involved in solving a problem using neural
networks'. Apart from saving time, it can also increase the reliability of the simulations, if
these are based on standard neural network paradigms (e.g., variants of backpropagation).
As we shall discuss below, when developing a completely new paradigm the advantages of
using one of the existing neurosimulators are less straightforward. 
           A neural network simulation may be concerned with such widely diverging topics
as the effect of synaptic density on a single neuron or the convergence behavior of a large
artificial neural network for optimizing the layout of computer chips. Even though the
range of problems is extremely diverse, most simulations share a common structure. The
following sequence of subtasks occurs in many simulation problems: (1) Translation of the
problem into a neural network representation. This always involves deciding upon data
representation, data presentation scheme, and model architecture. It may also include the
selection of a set of data that adequately represents the domain of the problem. If this set
is too restricted, the learned behavior of the network will not generalize well. (2) Testing
the validity of the decisions made in phase 1. Are the data processed in the intended
manner? Is the network architecture well-constructed? Does the model roughly behave as
expected? (3) The actual running of the simulations. Depending on the size of the
network, the complexity of the algorithms, and on the size of the training data, this may
take up to several days or longer. Unsatisfactory results may take the user back to phase 2.
(4) Analyzing the results. How has the network achieved the results? How well does the
system generalize? How sensitive is the system to perturbations? (5) Extending the neural
network paradigm used. This phase occurs more often than not, especially if a completely
new problem is tackled. Special-purpose extensions to the paradigm may be necessary, or
a completely new paradigm may have to be developed. After phase 5, the user must again
return to phase 2. (6) Incorporating the completed model in applications, either as a stand-
alone program (e.g., for demonstrations) or as part of a larger system (e.g., a pattern
recognizer as part of a manufacturing system). Even pure research cannot always escape
this phase, for example, when one decides to interface a neural network with an expert
system or with a genetic algorithm. In all of these cases, it is crucial that the simulator
allows detailed and efficient communication between the various software modules used.
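The six phases can be made concrete with a deliberately minimal sketch. The following Python fragment illustrates phases 1-4 for a toy problem (learning logical OR with a single threshold unit); the task, the data, and all parameter values are invented for illustration only:

```python
import random

# Phase 1: translate the problem -- choose data, representation, architecture.
# Toy task: learn logical OR with a single threshold unit (perceptron).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = random.uniform(-0.5, 0.5)

def output(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0  # binary threshold unit

# Phase 2: test the decisions -- are the data processed in the intended manner?
assert all(output(x) in (0, 1) for x, _ in data)

# Phase 3: the actual running of the simulation (perceptron learning rule).
for epoch in range(100):
    for x, target in data:
        err = target - output(x)
        weights = [w + 0.1 * err * xi for w, xi in zip(weights, x)]
        bias += 0.1 * err

# Phase 4: analyze the results.
errors = sum(abs(target - output(x)) for x, target in data)
print("misclassifications:", errors)
```

Phases 5 and 6 would then correspond to, for example, replacing the learning rule with a new one and wrapping the trained network in a callable module for use by other software.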
           Each of the stages above can potentially benefit from the use of a simulation tool,
although in practice many neurosimulators concentrate on improving efficiency in only a
few of these phases. Phase 1 could, for example, be made easier by a pattern editor,
especially if it allows importing data from programs such as spreadsheets. It would also
help if the system allowed a variety of architectures to be specified (e.g., layers,
modules with different initial connections) and if it included some sort of simulation script
facility. A simulation script facility is of particular importance when simulating
complicated sequences of events such as are often found in modelling in artificial intelligence
and psychology. Phase 2 requires a highly developed interface that allows direct inspection
and manipulation of all important simulation parameters. For phase 3, sheer speed is the
most important factor. In the case of large-scale simulations, the use of parallel hardware is
indispensable. In phase 4, it is often necessary to do statistical analyses on the output
parameters, for example, to investigate the nature of the hidden-layer representations in a
backpropagation network. Various diagnostic facilities can greatly speed up this phase.
Extendibility (phase 5) is one of the most difficult problems to tackle in simulator design.
This problem is discussed in some detail in the last section. Incorporation into a larger
system (phase 6) can be achieved through the generation of source code that can be
compiled and linked to the other software, or through a set of communication routines that
enable the exchange of data with other programs.
A neurosimulator will be most efficient in practice if it aims to assist with each
of these stages. Because no simulator at present accomplishes this, it is worthwhile for the
user to evaluate the profile of the intended simulations and to decide which phases are likely to be most
prominent. The choice of simulator will, furthermore, be influenced by the needs of the
specific user group. We will review a selection of neurosimulators according to the
following main categories of users: introductory simulators for novices, simulators for
biological single-neuron simulations, systems available from academia, and a variety of
commercial systems developed mainly for business and industry. 
           The field of neurosimulators is moving very fast. In the past couple of years, I
have assembled references to about 100 neurosimulators. Many more are undoubtedly
being developed at this moment. It is impossible to discuss a significant portion of these
systems in any detail. I, therefore, ask the reader to keep in mind that the overview
presented below is not intended to be a complete consumer's guide to current
neurosimulators. The aim is merely to discuss important issues in the choice and design of
neurosimulators, and to give some hints as to which simulators may be suitable for which
purpose. A list of the e-mail addresses (or phone numbers) has been included in Table I.
More detailed information can be obtained from these sources.


2. Neurosimulators for novices

A novice in neural networks needs to develop a feeling for the behavior of neural
networks. Probably the best way to achieve this is by running a few networks on a
computer and watching the network parameters evolve on the screen. Neurosimulators are
an excellent way to rapidly explore the many different paradigms available, such as
backpropagation, Hopfield networks or Kohonen networks (self-organizing maps). Most of
the packages mentioned in the section on business and industry are well-suited for novices.
They are also, however, rather expensive, especially if the only objective is getting to
know the subject, rather than, say, trying to predict the stock market. Some low-priced
introduction packages have been developed to be used in conjunction with a book. Often these
are merely demonstration programs (Table I [1]), but others, such as the PDP simulator
(Table I [2]), are sufficiently powerful and easy to use to allow for more complicated
simulations to be carried out. Users more experienced in computer programming and
simulation might also try out some of the systems mentioned in the next section,
especially those that have a well-developed graphical interface.


3. Neurosimulators available from academia

It seems that by now every university-based neural network group has developed its own
neurosimulator. This is a particularly advantageous situation since almost all of these
systems are available for a nominal fee. Most of the neurosimulators developed in
academia are suited for users who already have some experience with simulation.
Documentation of these systems may - but need not - be less extensive than that of the more
expensive commercial packages. The number of supported computer platforms is often
limited (usually Unix), and user interaction and user support tend to be less well
developed than in the commercial systems. On the positive side, they are likely to include the
most recently developed methods in neural network research and source code is often
available. Examples from Table I [3-12] are: Mactivation (excellent graphics), GRADSIM
(many backpropagation variants), MUME (supports several computer platforms), SNNS
(supports many neural network paradigms, including ART1 and ART2), PlaNet,
NeuralShell, RCS, and GENESIS/XODUS (the latter is also well-suited for biological
modelling, see next section). Some groups base most user interaction with their simulator on
a neural network simulation language, for example, Maryland MIRRORS/II, SLONN, and
NSL. This is only a small selection of the available systems that seem to be most widely
used at the moment. Several promising neurosimulators are currently under development
(also see below), so that it is worthwhile to ask around whether any new systems are
being made available.
           For the experienced programmer, one option is to obtain one of the available neural
network libraries (Table I [13-16]). Developers of novel neural network paradigms often
provide a software library for a nominal fee to enable others to try out their methods. The
programs are usually written in portable C source code (see Blum, 1992, for an
introduction to programming in C++ for neural networks). These libraries differ from the
packages mentioned above in that the user is expected to develop most of the functionality
of the simulation by writing additional source code. The advantage of this approach is that
it is easy to alter and extend the algorithms. The disadvantage is usually that despite
support of the libraries, programming can still be time-consuming. Several of the more
extensive packages, however, can also be used by writing a program in some higher-level
neural network language. Because some of these languages are very similar to C or Basic,
for the experienced programmer the task of writing C source code to be used with
software libraries may not be too different from using a more sophisticated simulator.


4. Biological modelling

About ten neurosimulators have been developed for single-neuron modelling. These allow
exploration of single neurons and of small networks. Real neurons are much more
complex than the single-parameter nodes employed in most artificial neural networks (i.e.,
using a single activation value per node). In a real neuron, electrical activity spreads
gradually through the dendritic branches. Its computational characteristics depend on the
structure and electrochemical properties of these branches. To model this in biological
simulators a neuron is divided into a number of compartments. At all positions inside a
single compartment, the activity is assumed to be equal. A complete neuron is built up out
of many such isopotential compartments, like a stick-figure. In Table I, seven single-
neuron simulators are listed [17-23], which are all based on such compartmental
modelling. For a more thorough review of these simulators the reader is referred to De
Schutter (1992; also see Miller, 1990). In that review, the reader will find information
about what aspects of the biology can be handled by each simulator (ion channels, synapse
plasticity, etc.) as well as many other details. I shall here confine myself to mentioning a
few general aspects. 
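The principle of compartmental modelling can be sketched in a few lines of Python. In the example below (a passive two-compartment model with invented parameter values, not taken from any of the simulators listed here), each isopotential compartment obeys a simple current-balance equation, and neighbouring compartments exchange current in proportion to their voltage difference:

```python
# Passive two-compartment neuron, forward-Euler integration of
#   C dV_i/dt = -(V_i - E_leak)/R_m + (V_j - V_i)/R_a + I_inj
C = 1.0         # membrane capacitance (arbitrary units)
R_m = 10.0      # membrane (leak) resistance
R_a = 2.0       # axial resistance coupling the compartments
E_leak = -70.0  # leak (resting) potential in mV
dt = 0.01       # time step in ms

V = [-70.0, -70.0]           # compartment 0 (soma) and compartment 1 (dendrite)
for step in range(10000):    # 100 ms of simulated time
    I_inj = 1.0 if step < 5000 else 0.0   # current pulse into the soma only
    axial = (V[1] - V[0]) / R_a           # current from dendrite into soma
    dV0 = (-(V[0] - E_leak) / R_m + axial + I_inj) / C
    dV1 = (-(V[1] - E_leak) / R_m - axial) / C
    V = [V[0] + dt * dV0, V[1] + dt * dV1]

print("voltages after the pulse has decayed:", V)
```

The soma charges to a depolarized steady state during the pulse, and both compartments then relax back toward E_leak. A real simulator adds active ion channels, many more compartments, and stiffer (implicit) integration schemes, but the compartmental skeleton is the same.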
           SABER and SPICE are commercial simulators that can also model other dynamical
systems (SPICE is also available in a basic, non-commercial version). The user-interface
of simulators [17-18,20-21] is supported by graphics or dialogues, the others rely on
interaction through scripts or file input. GENESIS and NEMOSYS seem to offer the most
flexible interaction in phase 2 of the simulation process. GENESIS supports simulations
with large neural networks as well. It is, furthermore, the only software for biological
simulations that supports a parallel hardware platform. Because running networks of many
biological neurons takes a very large amount of computer time, parallel hardware is
hardly a luxury for this task. SABER and SPICE are also able to run networks of several neurons
but lack parallel support. A disadvantage in using [17-20] may be that documentation is
minimal for these systems (at least at the time of writing this chapter), although some
have on-line help [18,20] or include a tutorial [18-19]. Important for possible extensions is
whether source code is available, which is the case for five of the simulators [17-20,23].
Simulators [18-20,23] allow some user extensions in other ways as well. 


5. Business and industry

A large proportion of the problems in business and industry fall in a limited number of
categories, such as credit assessment, signal analysis, time-series prediction, pattern
recognition, and process control. Because the solutions can often be based on standard
neural network technology, neurosimulators may be particularly useful for these
applications. Most commercial packages have excellent documentation. Special training
courses are often available, and companies that market these systems can usually assist in
every stage of the solution process. Not surprisingly, user-interaction of commercial
systems tends to be geared better to the needs of someone working in business or industry.
NEURAL DESK, for example, allows a neural network to be run from within a spreadsheet
or other software package (e.g., Excel or Superbase), without having to leave the current
program. Many commercial systems offer pre-defined example problems that can be
amended to the problem at hand. 
                     In Table I, I have listed a few commercially available neurosimulators [24-37] that
will be briefly discussed here. NeuralWorks Professional II/Plus is a widely used system
that offers a range of built-in network paradigms, display modes, and monitor options. It
also includes possibilities for exchange with other popular programs (database and
spreadsheets). It is not possible to add new paradigms, but with a separate utility, the
Designer Pack, networks can be converted into C source code. This code may then be
altered further to suit the needs of the user. Ideally, extension with a new paradigm should
not lead to the loss of most of the functionality offered by the interface, as happens with
this system. A code-generator can be a necessity, but it does not fully solve the problem.
Systems comparable to NeuralWorks Professional II/Plus are HNC NeuroSoft and SAIC
ANSim (also see the review of these systems by Cohen, 1989). To extend the applicability
of NeuroSoft, HNC sells a network description language called AXON. SAIC provides
ANSpec, a network specification language based on concepts from concurrency. Though
the latter utility allows smooth implementation on parallel machines (see discussion
below), it may be difficult for someone unfamiliar with parallel processing. 
           Instead of taking the indirect route of code generation, some simulators include a
network specification language and a compiler. NEUNET is one example of this approach.
It is derived from a package for dynamical systems called DESIRE (Direct Executing
SImulation in REal time). A neural network simulation is programmed in a BASIC-like
language with support for operations with arrays, matrices and complex numbers. Like
BASIC, the code is interpreted during the testing phase (phase 2). For phase 3, a special
runtime compiler is called to generate efficient machine code. The result of this is that
execution can be fast, while extensibility is fully guaranteed. The disadvantage is that the
user has to do extensive programming to make a simulation work. Built-in support for
monitoring the network during a simulation is limited. Several other systems include higher-level
network languages. Mimetics' Mimenice is a modular system based on a higher-level, C-
like language called G. Mimetics also offers a utility to generate C-code. Mimenice
focusses on just three paradigms: backpropagation, learning vector quantization and
topological maps. The philosophy behind this is that these networks cover a large range of
useful applications, and users can always write their own learning schemes. 
           Three other companies share Mimetics' philosophy that only a few algorithms are
useful for industrial applications. Nestor is one of the oldest neural network companies. It
has been active in developing applications for industry since 1983. Their simulator is
called the Nestor NDS (Nestor Development System). It allows the user to build networks
using the learning algorithm developed by the founders of Nestor (this algorithm is
protected by patents). Another single-paradigm simulator is Brainmaker, which limits the
user to backpropagation. NEURAL DESK is a simulator that consists of three separate
tools that allow easy interaction with existing standard software such as spreadsheets. The
number of paradigms is limited to four.
           The interfaces of the above systems are mostly based on dialogues and edit-
compile cycles. MacBrain and Plexi are two systems that offer excellent graphics-based
direct manipulation of networks. Networks can be put on the screen by clicking and
drawing groups and by connecting these with higher-order connections. In a similar vein,
graphs can be attached to nodes, weights, etc., in order to monitor processing. Plexi's
range of paradigms can be extended by programming and recompiling (incrementally)
using Symbolics' object-oriented development environment. MacBrain can be extended in a
variety of ways, including importing external C routines. Neuristique's SN2.8 also offers
excellent graphical and statistical facilities. Its Lisp-based code can be extended or altered
for high-level modelling. Cognitron is a system developed primarily for Macintosh
computers, but with less emphasis on graphical interaction than Plexi or MacBrain. Like
SN2.8, new activation and learning rules can be added by programming in Lisp. The
system is also able to export Lisp code. Cognitron offers support for a variety of parallel
machines, including transputers. In order to handle paradigms that require sequenced
processing of layers like backpropagation, a special phasing option has been included that
allows the user to specify the order in which layers must be processed.
           ANNE is a simulator/library developed for Intel's Hypercube. It has now become
part of the User Libraries. Although it is mainly focussed on backpropagation, the
simulator can be extended quite easily. OWL is a good example of a commercially
available library (for C). 


6. Current problems and future developments

The tools available for neural network simulations are still falling short of the standards
achieved in more mature fields such as statistics or computer-aided design. The
neurosimulators mentioned above are most useful for prototyping small to mid-sized
standard applications. Especially novice users and those interested in applying neural
network technology to practical problems in business and industry can gain a lot from
using such a system. This was already concluded by Cohen in 1989 and little has changed
in the meantime. Neural network simulators certainly offer many possible advantages.
When applying standard algorithms, using a neurosimulator may save a lot of time.
Indeed, by enforcing a uniform representation that is accepted and used by many
researchers, neurosimulators may be instrumental in realizing important standards in neural
network technology. Users may prefer to use a neurosimulator in order to safeguard
themselves against making mistakes when writing software for standard algorithms. For
the connectionist researcher, however, the tools discussed so far still seem to fall short of
what is required. Of the roughly 300 researchers in European universities listed in the
DEANNA database, only 34 report ever using a neurosimulator tool. Why is it that more
than 85% still write their own simulation software?
           For research purposes it is necessary to have a neurosimulator that has (1) a highly
developed interface, (2) a scalable design (e.g., through parallel hardware), and that is (3)
extensible with new neural network paradigms. An excellent interface is necessary to
support simulation in the problem definition and testing phases. Scalability means that the
system is able to handle even very large networks. The support of suitable parallel
hardware will be necessary to achieve this. Extensibility implies that it is easy both to
alter the paradigms provided with a system and to develop completely new paradigms.
Unfortunately, the requirement of extensibility clashes with both the scalability
requirement and with the interface requirement. It appears to be very difficult to design a
system with a sophisticated interface in which new paradigms can be 'implanted' without
either major programming efforts (possibly involving recompiling the full source code of
the package) or loss of full interface support (e.g., your newly defined paradigm appears
on the screen as a white box and it is impossible to inspect what is going on inside).
Similarly, support of parallel hardware usually requires explicit programming of parallel
routines for each neural network defined, which can result in programs that are extremely
hard to debug (Murre, 1992, Appendix B2).
           One approach to solving the extensibility problem is by using a higher-level neural
network language (see above). For many applications this approach is attractive. If the
networks have complicated architectures, however, or if the data used have an obvious
visual representation, direct graphic-interaction may be more important. A good way to
gain an understanding of how a newly defined network paradigm behaves is by observing
it under a variety of circumstances on the screen. Monitoring and manipulation of essential
parameters, such as weights and activations, can be very helpful in this phase. Ideally, it
should be possible to move back and forth smoothly from graphical interaction to
specification in some higher-level network language. Current simulators that emphasize
graphical interaction, however, cannot be extended easily.
           One system that aims to solve the above three problems is GALATEA (Table I
[38]), under development at Mimetics (also supported by ESPRIT). It combines a neural
network environment with a higher level language called N (see Marcade, 1992; N is an
extension of C++). With a 'System Architecture Builder' the user can define new
networks. Newly defined networks are represented in such a way that they can be mapped
onto a variety of parallel hardware. The system will also include a silicon compiler for the
generation of specifications for application-specific integrated circuits (ASICs). A similar
approach to parallelism and extensibility is taken in the MetaNet system (Table I [39],
developed at Leiden University). MetaNet strongly emphasizes direct manipulation of
neural network objects (Murre, 1992, Appendix B5). Different users will have different
needs for interaction with their networks. These specific needs are served by 'Problem-
Oriented INteraction Tools'. A user can not only define a new network algorithm, but also
the way in which it appears and behaves on the screen. This allows each user to tailor the
system to specific tasks (i.e., using different POINTs of view) and to manage the level of
detail of the displays. As in GALATEA, new network paradigms can be defined without
loss of direct manipulation and graphical display. 
           Both GALATEA and MetaNet are based on a building-block philosophy, also called
modularity. This approach fits the design intuitions of most network developers. It also
leads to well-organized systems that promote re-use of existing modules. Modularity at the
interface level is usually mirrored by an object-oriented approach at the formal, internal
level of a simulator. A system that takes a fully object-oriented approach to simulation is
SESAME (Table I [40]), which is under development at the German National Research
Center for Computer Science. SESAME's design is more akin to that of a general simulator.
Building blocks can be defined in different classes: Mathematical, Utility (e.g., I/O
functions) and Graphical. An interesting feature of the system is that it allows
autoconfiguration of parameters via a constraint satisfaction process (Linden et al., 1993).
Parameters that are unknown are inferred in an iterative process of spreading information
throughout the structure of interlinked building blocks. In this way, erroneous designs or
missing information can also be detected. Other general simulators based on building-blocks
are Khoros and Simulink (MathWorks) (Table I [41-42]). XERION (Table I [43]) is another
example of a general simulator. On the one hand, the user may have to encode the specific
neural network algorithms used. But, on the other hand, a general simulator may offer
excellent support for managing the simulations. Especially in cases where a neural network
is tested on some simulated control tasks, such as a simulated robot arm (i.e., as opposed
to a real arm), a general simulator may offer advantages over a specialized neural network
simulator, because the control dynamics of the arm can be programmed in the same
system.
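The flavour of constraint-based autoconfiguration can be conveyed by a small sketch. This is only an illustration of the general idea, not SESAME's actual mechanism; the block names, sizes, and rules below are invented:

```python
# Building blocks linked into a structure; unknown parameters (None) are
# inferred by repeatedly applying local constraints until a fixed point.
blocks = {
    "input":  {"size": 8},
    "hidden": {"size": None},   # unknown, to be autoconfigured
    "output": {"size": None},   # unknown, to be autoconfigured
}
# Each constraint: (target, source, rule deriving target's size from source's).
constraints = [
    ("hidden", "input",  lambda s: s // 2),
    ("output", "hidden", lambda s: s // 2),
]

changed = True
while changed:                  # spread information through the structure
    changed = False
    for target, source, rule in constraints:
        src = blocks[source]["size"]
        if blocks[target]["size"] is None and src is not None:
            blocks[target]["size"] = rule(src)
            changed = True

# Any size left as None would signal missing or erroneous design information.
print({name: b["size"] for name, b in blocks.items()})
# -> {'input': 8, 'hidden': 4, 'output': 2}
```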
           Neural networks are often portrayed as massively parallel systems. In reality,
however, parallel implementation of neural networks is not as straightforward as implied
by this terminology. Even in the popular backpropagation algorithm, for example, the
different layers have to be processed in a specific order. In complicated architectures with
many layers or modules (e.g., for different subtasks) this processing order is a strong
constraint on achievable parallelism. The implication of this may seem to be that, in case
of newly defined neural networks, it is impossible to provide automatic support for
parallel hardware. This is not necessarily true, however, if in the definition of the neural
network paradigms the data dependencies are indicated (cf. dataflow approaches to
computing). In backpropagation, for example, processing a given layer depends on the
completion of processing the layers that feed into it. In other networks, however, all
neurons - and hence all layers - can be updated in parallel. From a paradigm definition
and a specified network architecture, a 'mapping component' of a neurosimulator may
derive an optimal parallel implementation for the specific network architecture on one of
the parallel machines supported (Marcade, 1992; Murre, 1992). It has so far proven to be
impossible to carry out automatic parallelization of computer algorithms in general. The
domain of neural networks, however, may be sufficiently constrained to make such an
undertaking feasible. Moreover, neural networks may themselves be used as techniques to
optimize the mapping. The resulting process is very nearly a neural bootstrap. A great
challenge to neurosimulator design is to support parallelism without burdening the user
with the cumbersome task of explicitly parallelizing neural network algorithms.
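The dataflow idea can be sketched as follows. Given only the 'feeds into' dependencies of an architecture (the layer names below are invented), a mapping component can group layers into waves such that all layers within a wave may be updated in parallel:

```python
# Layer dependency graph: each layer lists the layers that must complete
# before it can be processed (cf. the forward pass of backpropagation).
deps = {
    "input":   [],
    "hidden1": ["input"],
    "hidden2": ["input"],          # hidden1 and hidden2 are independent
    "output":  ["hidden1", "hidden2"],
}

def parallel_schedule(deps):
    """Group layers into waves; layers within a wave can run in parallel."""
    done, waves = set(), []
    while len(done) < len(deps):
        wave = sorted(layer for layer in deps
                      if layer not in done
                      and all(d in done for d in deps[layer]))
        if not wave:
            raise ValueError("cyclic dependency among layers")
        waves.append(wave)
        done.update(wave)
    return waves

print(parallel_schedule(deps))
# -> [['input'], ['hidden1', 'hidden2'], ['output']]
```

A network in which no layer depends on another collapses to a single wave (full parallelism), while a deep feedforward chain yields one wave per layer, making the ordering constraint on parallelism explicit.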


Acknowledgements
The author would like to thank the many researchers who provided invaluable information
when preparing this article. He is especially indebted to Erik De Schutter, Chuck Joyce, 
Steven Kleynenberg and Alexander Linden. This work is supported by the Medical
Research Council.

Author note
The author's address is MRC Applied Psychology Unit, 15 Chaucer Road, Cambridge
CB2 2EF, United Kingdom, e-mail: jaap.murre@mrc-apu.cam.ac.uk.


References

Textbook treatments

               Blum, A. (1992). Neural networks in C++. An object-oriented framework for building
           connectionist systems. New York: Wiley.

Review articles

               Cohen, H. (1989). How useful are current neural network software tools? Neural Network
           Review, 3:102-113.

                                   De Schutter, E. (1992). A consumer guide to neuronal modelling software. Trends in
           Neuroscience, 15:462-464.

               Miller, J.P. (1990). Computer modelling at the single-neuron level. Nature, 347:783-784.

Research contributions

               Linden, A., Sudbrak, Th., Tietz, Ch., and Weber, F. (1993). An object-oriented framework
           for the simulation of neural nets. In: C.L. Giles, S.J. Hanson, and J.D. Cowan
           (Eds.). Advances in neural information processing systems 5. San Mateo, CA:
           Morgan Kaufmann.

               Marcade, E. (1992). ESPRIT II Project 5293 - GALATEA User Manual (draft version).
           Mimetics, France.

               Murre, J.M.J. (1992). Learning and categorization in modular neural networks. Hemel
           Hempstead: Harvester Wheatsheaf (UK), and Hillsdale, NJ: Lawrence Erlbaum
           (USA).Table I. Some details of the neurosimulators mentioned in the text. The name of each
simulator is followed by: the developer or distributor, an e-mail address or phone/fax
number, major hardware platforms supported, indication of the paradigms supported
(several means that between two and seven paradigms are supported, many means that
more than seven are supported), and a price indication:

           *       free or nominal charge (+C means source code in C is included)
           $       $20-$200
           $$      $200-$2000
           $$$     $2000-$10000

Neurosimulators that accompany a book
[1] Simulator CORTEX I is best used in conjunction with the book by I. Aleksander and H.
Morton (1990). An introduction to neural computing. Chapman and Hall. The simulator
has to be bought separately and can also be used on its own. Several paradigms, $. A new
and more sophisticated version is currently being developed by Michael Reiss: Cortex III,
Unistat, UK, tel. +44 81-9641130, PC, many, $$$.
[2] The PDP Simulator included in the book by J.L. McClelland and D.E. Rumelhart
(1988). Explorations in parallel distributed processing. Cambridge, MA: MIT Press. Mac,
PC, Unix, several paradigms, $. A new, more general version of this system is being
developed by McClelland's group at Carnegie Mellon University. The new PDP Simulator
will be based on object-oriented design principles (C++).

Neurosimulators from the academia
[3] Mactivation, University of Colorado at Boulder, mikek@boulder.colorado.edu, Mac, *.
[4] GRADSIM, University of Toronto, watrous@ai.toronto.edu, Sun, VAX, others, many,
*+C.
[5] MUME 6.0, Sydney University, fax: +61 2/660-1228, Unix/Ultrix, PC, others, many,
*+C.
[6] SNNS (Stuttgart Neural Network Simulator), University of Stuttgart,
zell@informatik.uni-stuttgart.de, Unix/Ultrix/X-Windows, many, *+C.
[7] PlaNet, Yoshiro Miyata, University of Colorado at Boulder,
miyata@boulder.colorado.edu, Unix/X-Windows/SUN Tools, many, *.
[8] NeuralShell, Ohio State University, sca@dopey.eng.ohio-state.edu, Unix, many, *+C.
[9] RCS (Rochester Connectionist Simulator), University of Rochester, cs.rochester.edu,
Unix, Mac, *.
[10] MIRRORS/II, University of Maryland, mirrors@cs.umd.edu, Unix, *.
[11] SLONN, University of Southern California, alfredo@usc.edu, Unix, *+.
[12] NSL, University of Southern California, alfredo@usc.edu, Unix, *+.

Software libraries from the academia
[13] SOM_PAK and LVQ_PAK by Kohonen's group at the University of Helsinki,
som@cochlea.hut.fi for SOM_PAK and lvq@cochlea.hut.fi for LVQ_PAK, SOM_PAK:
self-organizing map, LVQ_PAK, learning vector quantization, *+C.
[14] CasCorl (Cascade Correlation Simulator in Lisp and C) and Quickprop by Scott
Fahlman, fahlman@cs.cmu.edu, *+C/Lisp.
[15] tlearn, University of California at San Diego, elman@ucsd.edu, Unix, PC, Mac,
backpropagation and variants, *+C.
[16] CALMLIB (library for Categorizing And Learning Module), Jacob Murre, MRC APU,
jaap.murre@mrc-apu.cam.ac.uk, CALM, *+C.

Single-neuron simulators
[17] AXONTREE, Y. Manor, Hebrew University, Jerusalem, idan@hujivms.bitnet, Unix/X-
Windows, *+C.
[18] GENESIS/XODUS, Jim Bower, Caltech, genesis@caltech.bitnet, Unix/X-Windows,
*+C.
[19] NEURON, Duke University Medical Center, hines@neuro.duke.edu, Unix, PC, others,
*+C.
[20] NEMOSYS, UC Berkeley, eeckman@mozart.llnl.gov, Unix/X-Windows, *+C.
[21] NODUS, Erik De Schutter, Caltech, erik@cns.caltech.edu, Mac, *.
[22] SABER, Analogy Inc., USA, tel: +1 503/626-9700, Unix, VMS, PC, others, $$$.
[23] SPICE, UC Berkeley, tel: +1 510/643-6687, VMS, Unix, Mac, PC, *+C.

Commercial neurosimulators
[24] NeuralWorks Professional II/Plus, NeuralWare, USA, tel: +1 412/741-5959, PC, Mac,
Sun, others, many algorithms, $$$.
[25] NeuroSoft, HNC, USA, tel: +1 619/546-8877, PC, SUN, many, $$.
[26] ANSIM, SAIC, USA, tel: +1 619/546-6290, PC, others, $$.
[27] DESIRE/NEUNET, Granino Korn, Industrial Consultants, USA, tel: +1 509/687-3390,
PC, Sun, $$.
[28] Mimenice, Mimetics, France, tel: +33 1-40910990, Sun/X-Windows, $$$.
[29] Nestor NDS, Nestor, USA, tel: +1 401/331-9640, PC, $$.
[30] Brainmaker, California Scientific Software, USA, tel: +1 800/284-8112, $$.
[31] NEURAL DESK, Neural Computer Sciences, UK, tel: +44 703-667775, PC/Windows,
several, $$.
[32] MacBrain, Neurix, USA, tel: +1 617/577-1202, Mac, many, $$.
[33] Plexi, Lucid, USA, fax: +1 415/329-8480, Symbolics, SUN, several, $$.
[34] SN2.8, Neuristique, France, xd@bop.neuristique.fr, Unix, many, $$$.
[35] Cognitron, Cognitive Software, USA, tel: +1 317/577-4158, Mac, several, $$.
[36] OWL, HyperLogic Corporation, USA, tel: +1 619/746-2765, C library, $$. 
[37] ANNE, Casey Bahr, George Mason University, king@gmuvax2.gmu.edu, Hypercube I
and II, backpropagation and others, part of Hypercube's User's Library.

New developments in neurosimulators (these systems will become available in the near future)
[38] GALATEA, Mimetics, France, tel: +33 1-40910990, Sun, dedicated hardware, many.
[39] MetaNet, Jaap Murre, MRC APU, UK, jaap.murre@mrc-apu.cam.ac.uk, PC, many.
[40] SESAME, Alexander Linden, GMD, Germany, al@nathan.gmd.de, Unix/X-Windows,
many.

General neurosimulators
[41] Khoros, New Mexico State University, e-mail?, hardware?, *+C.
[42] Simulink, MathWorks, USA, e-mail or phone?, hardware?, $$$?.
[43] XERION, University of Toronto, e-mail?, hardware?, *+C.