Newsgroups: comp.ai.neural-nets,comp.answers,news.answers
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!nntp.sei.cmu.edu!news.psc.edu!scramble.lm.com!news.ysu.edu!news.radio.cz!newsbreeder.radio.cz!news.radio.cz!CESspool!news.maxwell.syr.edu!cam-news-hub1.bbnplanet.com!news.bbnplanet.com!howland.erols.net!worldnet.att.net!news.mathworks.com!newsgate.duke.edu!interpath!news.interpath.net!news.interpath.net!sas!newshost.unx.sas.com!hotellng.unx.sas.com!saswss
From: saswss@unx.sas.com (Warren Sarle)
Subject: comp.ai.neural-nets FAQ, Part 7 of 7: Hardware
Originator: saswss@hotellng.unx.sas.com
Sender: news@unx.sas.com (Noter of Newsworthy Events)
Message-ID: <nn7.posting_857188838@hotellng.unx.sas.com>
Supersedes: <nn7.posting_854510442@hotellng.unx.sas.com>
Approved: news-answers-request@MIT.EDU
Date: Sat, 1 Mar 1997 04:00:39 GMT
Expires: Sat, 5 Apr 1997 04:00:38 GMT
X-Nntp-Posting-Host: hotellng.unx.sas.com
Reply-To: saswss@unx.sas.com (Warren Sarle)
Organization: SAS Institute Inc., Cary, NC, USA
Keywords: frequently asked questions, answers
Followup-To: comp.ai.neural-nets
Lines: 525
Xref: glinda.oz.cs.cmu.edu comp.ai.neural-nets:36400 comp.answers:24521 news.answers:8168

Archive-name: ai-faq/neural-nets/part7
Last-modified: 1997-02-03
URL: ftp://ftp.sas.com/pub/neural/FAQ7.html
Maintainer: saswss@unx.sas.com (Warren S. Sarle)

This is part 7 (of 7) of a monthly posting to the Usenet newsgroup
comp.ai.neural-nets. See part 1 of this posting for full information
on what it is all about.

========== Questions ========== 
********************************

Part 1: Introduction
Part 2: Learning
Part 3: Generalization
Part 4: Books, data, etc.
Part 5: Free software
Part 6: Commercial software
Part 7: Hardware, etc.

   Neural Network hardware?
   Unanswered FAQs

------------------------------------------------------------------------

Subject: Neural Network hardware?
=================================

Thomas Lindblad notes on 96-12-30: 

   The reactive tabu search algorithm has been implemented by a group
   in Trento, Italy. ISA and VME boards are available, and PCI boards
   will be soon. We tested the system with the IRIS and SATIMAGE data,
   and it did better than most other chips. 

   The Neuroclassifier is still available from Holland and is also the
   fastest neural network chip, with a transient time of less than 100 ns. 

   JPL is making another chip, and ARL in Washington, DC, is making yet
   another, so there are a few things going on ... 

Overview articles: 

 o Ienne, Paolo and Kuhn, Gary (1995), "Digital Systems for Neural
   Networks", in Papamichalis, P. and Kerwin, R., eds., Digital Signal
   Processing Technology, Critical Reviews Series CR57 Orlando, FL: SPIE
   Optical Engineering, pp 314-45, 
   ftp://mantraftp.epfl.ch/mantra/ienne.spie95.A4.ps.gz or 
   ftp://mantraftp.epfl.ch/mantra/ienne.spie95.US.ps.gz 
 o ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/neurhard.ps (1995) 
 o ftp://ftp.urc.tue.nl/pub/neural/hardware_general.ps.gz (1993) 

Various NN HW information can be found in the Web site 
http://www1.cern.ch/NeuralNets/nnwInHepHard.html (from people who really use
such stuff!). Several applications are described in 
http://www1.cern.ch/NeuralNets/nnwInHepExpt.html 

Further WWW pointers to NN Hardware:
http://msia02.msi.se/~lindsey/nnwAtm.html

Here is a short list of companies: 

1. HNC, INC.
++++++++++++

     HNC Inc.
     5930 Cornerstone Court West
     San Diego, CA 92121-3728

     619-546-8877  Phone
     619-452-6524  Fax
      HNC markets:
       o The Database Mining Workstation (DMW), a PC-based system that
         builds models of relationships and patterns in data.
       o The SIMD Numerical Array Processor (SNAP), an attached parallel
         array processor in a VME chassis with between 16 and 64 parallel
         floating-point processors. It provides between 640 MFLOPS and
         2.56 GFLOPS for neural network and signal processing
         applications. A Sun SPARCstation serves as the host. The SNAP
         won the 1993 IEEE Gordon Bell Prize for best price/performance
         among supercomputer-class systems.

2. SAIC (Science Applications International Corporation)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++

      10260 Campus Point Drive
      MS 71, San Diego
      CA 92121
      (619) 546 6148
      Fax: (619) 546 6736

3. Micro Devices
++++++++++++++++

      30 Skyline Drive
      Lake Mary
      FL 32746-6201
      (407) 333-4379
      Micro Devices makes the MD1220 'Neural Bit Slice'.
      Each of the products mentioned so far has a very different usage.
      Although the MD1220 sounds similar to Intel's product, the
      architectures are quite different.

4. Intel Corp
+++++++++++++

      2250 Mission College Blvd
      Santa Clara, Ca 95052-8125
      Attn ETANN, Mail Stop SC9-40
      (408) 765-9235
      Intel was making an experimental chip (which is no longer produced):
      80170NW - Electrically Trainable Analog Neural Network (ETANN)
      It has 64 'neurons' on it, almost fully internally connected,
      and the chip can be put in a hierarchical architecture to do 2 billion
      interconnects per second.
      Support software by
        California Scientific Software
        10141 Evening Star Dr #6
        Grass Valley, CA 95945-9051
        (916) 477-7481
      Their product is called 'BrainMaker'.

5. NeuralWare, Inc
++++++++++++++++++

      Penn Center West
      Bldg IV Suite 227
      Pittsburgh
      PA 15276
      They only sell software/simulator but for many platforms.

6. Tubb Research Limited
++++++++++++++++++++++++

      7a Lavant Street
      Petersfield
      Hampshire
      GU32 2EL
      United Kingdom
      Tel: +44 730 60256

7. Adaptive Solutions Inc
+++++++++++++++++++++++++

      1400 NW Compton Drive
      Suite 340
      Beaverton, OR 97006
      U. S. A.
      Tel: 503-690-1236;   FAX: 503-690-1249

8. NeuroDynamX, Inc.
++++++++++++++++++++

      P.O. Box 14
      Marion, OH  43301-0014
      Voice (614) 387-5074  Fax: (614) 382-4533
      Internet:  jwrogers@on-ramp.net

      InfoTech Software Engineering purchased the software and
      trademarks from NeuroDynamX, Inc. and, using the NeuroDynamX tradename,
      continues to publish the DynaMind, DynaMind Developer Pro and iDynaMind
      software packages. 

9. IC Tech, Inc.
++++++++++++++++

    *  The NRAM (Neural Retrieve Associative Memory) is available as a
       stand-alone chip or as a functional unit that can be embedded inside
       another chip, e.g., a digital signal processor or SRAM. The data
       storage procedure is compatible with conventional memories, i.e., a
       single presentation of the data is sufficient. Set-up and hold times
       are comparable with those of existing devices of similar technology
       dimensions. Data retrieval is where NRAM excels: when addressed,
       this content-addressable memory produces the one previously stored
       pattern that most closely matches the presented data sequence. If no
       matching pattern is found, no data is returned. This set of
       error-correction and smart retrieval tasks is accomplished without
       comparators, processors, or other external logic. The number of data
       bits is adjustable, and the optimized circuitry consumes little
       power. NRAM has many applications in rapid search of large
       databases, template matching, and associative recall.

     *  The NRAM development environment includes a PC card with an
        on-board NRAM chip and C++ source code to address the device.

       
        Contact:

        IC Tech, Inc.
        2157 University Park Dr.
        Okemos, MI 48864
        (517) 349-4544
        (517) 349-2559  (FAX)
        http://www.ic-tech.com
        ictech@ic-tech.com

Here is an incomplete overview of known neural computers, each with its
most recent known reference.

\subsection*{Digital}
\subsubsection{Special Computers}

{\bf AAP-2}
Takumi Watanabe, Yoshi Sugiyama, Toshio Kondo, and Yoshihiro Kitamura.
Neural network simulation on a massively parallel cellular array
processor: AAP-2.
In International Joint Conference on Neural Networks, 1989.

{\bf ANNA}
B.E.Boser, E.Sackinger, J.Bromley, Y.LeCun, and L.D.Jackel.\\
Hardware Requirements for Neural Network Pattern Classifiers.\\
In {\it IEEE Micro}, 12(1), pages 32-40, February 1992.

{\bf Analog Neural Computer}
Paul Mueller et al.
Design and performance of a prototype analog neural computer.
In Neurocomputing, 4(6):311-323, 1992.

{\bf APx -- Array Processor Accelerator}\\
F.Pazienti.\\
Neural networks simulation with array processors.
In {\it Advanced Computer Technology, Reliable Systems and Applications;
Proceedings of the 5th Annual Computer Conference}, pages 547-551.
IEEE Comput. Soc. Press, May 1991. ISBN: 0-8186-2141-9.

{\bf ASP -- Associative String Processor}\\
A.Krikelis.\\
A novel massively associative processing architecture for the
implementation of artificial neural networks.\\
In {\it 1991 International Conference on Acoustics, Speech and
Signal Processing}, volume 2, pages 1057-1060. IEEE Comput. Soc. Press,
May 1991.

{\bf BSP400}
Jan N.H. Heemskerk, Jacob M.J. Murre, Jaap Hoekstra, Leon H.J.G.
Kemna, and Patrick T.W. Hudson.
The BSP400: A modular neurocomputer assembled from 400 low-cost
microprocessors.
In International Conference on Artificial Neural Networks. Elsevier
Science, 1991.

{\bf BLAST}\\
J.G.Elias, M.D.Fisher, and C.M.Monemi.\\
A multiprocessor machine for large-scale neural network simulation.
In {\it IJCNN91-Seattle: International Joint Conference on Neural
Networks}, volume 1, pages 469-474. IEEE Comput. Soc. Press, July 1991.
ISBN: 0-7883-0164-1.

{\bf CNAPS Neurocomputer}\\
H.McCartor\\
Back Propagation Implementation on the Adaptive Solutions CNAPS
Neurocomputer.\\
In {\it Advances in Neural Information Processing Systems}, 3, 1991.

{\bf GENES~IV and MANTRA~I}\\
Paolo Ienne and  Marc A. Viredaz\\
{GENES~IV}: A Bit-Serial Processing Element for a Multi-Model
   Neural-Network Accelerator\\
Journal of {VLSI} Signal Processing, volume 9, no. 3, pages 257--273, 1995.

{\bf MA16 -- Neural Signal Processor}
U.Ramacher, J.Beichter, and N.Bruls.\\
Architecture of a general-purpose neural signal processor.\\
In {\it IJCNN91-Seattle: International Joint Conference on Neural
Networks}, volume 1, pages 443-446. IEEE Comput. Soc. Press, July 1991.
ISBN: 0-7083-0164-1.

{\bf Mindshape}
Jan N.H. Heemskerk, Jacob M.J. Murre, Arend Melissant, Mirko Pelgrom,
and Patrick T.W. Hudson.
Mindshape: a neurocomputer concept based on a fractal architecture.
In International Conference on Artificial Neural Networks. Elsevier
Science, 1992.

{\bf mod 2}
Michael L. Mumford, David K. Andes, and Lynn R. Kern.
The mod 2 neurocomputer system design.
In IEEE Transactions on Neural Networks, 3(3):423-433, 1992.

{\bf NERV}\\
R.Hauser, H.Horner, R. Maenner, and M.Makhaniok.\\
Architectural Considerations for NERV - a General Purpose Neural
Network Simulation System.\\
In {\it Workshop on Parallel Processing: Logic, Organization and
Technology -- WOPPLOT 89}, pages 183-195. Springer Verlag, March 1989.
ISBN: 3-5405-5027-5.

{\bf NP -- Neural Processor}\\
D.A.Orrey, D.J.Myers, and J.M.Vincent.\\
A high performance digital processor for implementing large artificial
neural networks.\\
In {\it Proceedings of the IEEE 1991 Custom Integrated Circuits
Conference}, pages 16.3/1-4. IEEE Comput. Soc. Press, May 1991.
ISBN: 0-7883-0015-7.

{\bf RAP -- Ring Array Processor }\\
N.Morgan, J.Beck, P.Kohn, J.Bilmes, E.Allman, and J.Beer.\\
The ring array processor: A multiprocessing peripheral for connectionist
applications. \\
In {\it Journal of Parallel and Distributed Computing}, pages
248-259, April 1992.

{\bf RENNS -- REconfigurable Neural Networks Server}\\
O.Landsverk, J.Greipsland, J.A.Mathisen, J.G.Solheim, and L.Utne.\\
RENNS - a Reconfigurable Computer System for Simulating Artificial
Neural Network Algorithms.\\
In {\it Parallel and Distributed Computing Systems, Proceedings of the
ISMM 5th International Conference}, pages 251-256. The International
Society for Mini and Microcomputers - ISMM, October 1992.
ISBN: 1-8808-4302-1.

{\bf SMART -- Sparse Matrix Adaptive and Recursive Transforms}\\
P.Bessiere, A.Chams, A.Guerin, J.Herault, C.Jutten, and J.C.Lawson.\\
From Hardware to Software: Designing a ``Neurostation''.\\
In {\it VLSI design of Neural Networks}, pages 311-335, June 1990.

{\bf SNAP -- Scalable Neurocomputer Array Processor}
E.Wojciechowski.\\
SNAP: A parallel processor for implementing real time neural networks.\\
In {\it Proceedings of the IEEE 1991 National Aerospace and Electronics
Conference; NAECON-91}, volume 2, pages 736-742. IEEE Comput.Soc.Press,
May 1991.

{\bf Toroidal Neural Network Processor}\\
S.Jones, K.Sammut, C.Nielsen, and J.Staunstrup.\\
Toroidal Neural Network: Architecture and Processor Granularity
Issues.\\
In {\it VLSI design of Neural Networks}, pages 229-254, June 1990.

{\bf SMART and SuperNode}
P. Bessi\`ere, A. Chams, and P. Chol.
MENTAL: A virtual machine approach to artificial neural networks
programming. In NERVES, ESPRIT B.R.A. project no 3049, 1991.


\subsubsection{Standard Computers}

{\bf EMMA-2}\\
R.Battiti, L.M.Briano, R.Cecinati, A.M.Colla, and P.Guido.\\
An application oriented development environment for Neural Net models on
multiprocessor Emma-2.\\
In {\it Silicon Architectures for Neural Nets; Proceedings for the IFIP
WG.10.5 Workshop}, pages 31-43. North Holland, November 1991.
ISBN: 0-4448-9113-7.

{\bf iPSC/860 Hypercube}\\
D.Jackson, and D.Hammerstrom\\
Distributing Back Propagation Networks Over the Intel iPSC/860
Hypercube\\
In {\it IJCNN91-Seattle: International Joint Conference on Neural
Networks}, volume 1, pages 569-574. IEEE Comput. Soc. Press, July 1991.
ISBN: 0-7083-0164-1.

{\bf SCAP -- Systolic/Cellular Array Processor}\\
Wei-Ling L., V.K.Prasanna, and K.W.Przytula.\\
Algorithmic Mapping of Neural Network Models onto Parallel SIMD
Machines.\\
In {\it IEEE Transactions on Computers}, 40(12), pages 1390-1401,
December 1991. ISSN: 0018-9340.

------------------------------------------------------------------------

Subject: Unanswered FAQs
========================

If you have good answers for any of these questions, please send them to the
FAQ maintainer at saswss@unx.sas.com. 

 o How many training cases do I need? 
 o How should I split the data into training and validation sets? 
 o What error functions can be used? 
 o What are some good constructive training algorithms? 
 o How can on-line/incremental training be done effectively? 
 o How can I invert a network? 
 o How can I select important input variables? 
 o How to handle missing data? 
 o Should NNs be used in safety-critical applications? 
 o My net won't learn! What should I do??? 
 o My net won't generalize! What should I do??? 

------------------------------------------------------------------------

That's all folks (End of the Neural Network FAQ).

Acknowledgements: Thanks to all the people who helped to get the stuff
                  above into the posting. I cannot name them all, because
                  I would make far too many errors then. :->

                  No?  Not good?  You want individual credit?
                  OK, OK. I'll try to name them all. But: no guarantee....

  THANKS FOR HELP TO:
(in alphabetical order of email addresses, I hope)

 o Steve Ward <71561.2370@CompuServe.COM> 
 o Allen Bonde <ab04@harvey.gte.com> 
 o Accel Infotech Spore Pte Ltd <accel@solomon.technet.sg> 
 o Ales Krajnc <akrajnc@fagg.uni-lj.si> 
 o Alexander Linden <al@jargon.gmd.de> 
 o Matthew David Aldous <aldous@mundil.cs.mu.OZ.AU> 
 o S.Taimi Ames <ames@reed.edu> 
 o Axel Mulder <amulder@move.kines.sfu.ca> 
 o anderson@atc.boeing.com 
 o Andy Gillanders <andy@grace.demon.co.uk> 
 o Davide Anguita <anguita@ICSI.Berkeley.EDU> 
 o Avraam Pouliakis <apou@leon.nrcps.ariadne-t.gr> 
 o Kim L. Blackwell <avrama@helix.nih.gov> 
 o Mohammad Bahrami <bahrami@cse.unsw.edu.au> 
 o Paul Bakker <bakker@cs.uq.oz.au> 
 o Stefan Bergdoll <bergdoll@zxd.basf-ag.de> 
 o Jamshed Bharucha <bharucha@casbs.Stanford.EDU> 
 o Carl M. Cook <biocomp@biocomp.seanet.com> 
 o Yijun Cai <caiy@mercury.cs.uregina.ca> 
 o L. Leon Campbell <campbell@brahms.udel.edu> 
 o Cindy Hitchcock <cindyh@vnet.ibm.com> 
 o Clare G. Gallagher <clare@mikuni2.mikuni.com> 
 o Craig Watson <craig@magi.ncsl.nist.gov> 
 o Yaron Danon <danony@goya.its.rpi.edu> 
 o David Ewing <dave@ndx.com> 
 o David DeMers <demers@cs.ucsd.edu> 
 o Denni Rognvaldsson <denni@thep.lu.se> 
 o Duane Highley <dhighley@ozarks.sgcl.lib.mo.us> 
 o Dick.Keene@Central.Sun.COM 
 o DJ Meyer <djm@partek.com> 
 o Donald Tveter <drt@mcs.com> 
 o Daniel Tauritz <dtauritz@wi.leidenuniv.nl> 
 o Wlodzislaw Duch <duch@phys.uni.torun.pl> 
 o E. Robert Tisdale <edwin@flamingo.cs.ucla.edu> 
 o Athanasios Episcopos <episcopo@fire.camp.clarkson.edu> 
 o Frank Schnorrenberg <fs0997@easttexas.tamu.edu> 
 o Gary Lawrence Murphy <garym@maya.isis.org> 
 o gaudiano@park.bu.edu 
 o Lee Giles <giles@research.nj.nec.com> 
 o Glen Clark <opto!glen@gatech.edu> 
 o Phil Goodman <goodman@unr.edu> 
 o guy@minster.york.ac.uk 
 o Horace A. Vallas, Jr. <hav@neosoft.com> 
 o Joerg Heitkoetter <heitkoet@lusty.informatik.uni-dortmund.de> 
 o Ralf Hohenstein <hohenst@math.uni-muenster.de> 
 o Gamze Erten <ictech@mcimail.com> 
 o Ed Rosenfeld <IER@aol.com> 
 o Franco Insana <INSANA@asri.edu> 
 o Janne Sinkkonen <janne@iki.fi> 
 o Javier Blasco-Alberto <jblasco@ideafix.cps.unizar.es> 
 o Jean-Denis Muller <jdmuller@vnet.ibm.com> 
 o Jeff Harpster <uu0979!jeff@uu9.psi.com> 
 o Jonathan Kamens <jik@MIT.Edu> 
 o J.J. Merelo <jmerelo@kal-el.ugr.es> 
 o Dr. Jacek Zurada <jmzura02@starbase.spd.louisville.edu> 
 o Jon Gunnar Solheim <jon@kongle.idt.unit.no> 
 o Josef Nelissen <jonas@beor.informatik.rwth-aachen.de> 
 o Joey Rogers <jrogers@buster.eng.ua.edu> 
 o Subhash Kak <kak@gate.ee.lsu.edu> 
 o Ken Karnofsky <karnofsky@mathworks.com> 
 o Kjetil.Noervaag@idt.unit.no 
 o Luke Koops <koops@gaul.csd.uwo.ca> 
 o Kurt Hornik <Kurt.Hornik@tuwien.ac.at> 
 o Thomas Lindblad <lindblad@kth.se> 
 o Clark Lindsey <lindsey@particle.kth.se> 
 o Lloyd Lubet <llubet@rt66.com> 
 o William Mackeown <mackeown@compsci.bristol.ac.uk> 
 o Maria Dolores Soriano Lopez <maria@vaire.imib.rwth-aachen.de> 
 o Mark Plumbley <mark@dcs.kcl.ac.uk> 
 o Peter Marvit <marvit@cattell.psych.upenn.edu> 
 o masud@worldbank.org 
 o Miguel A. Carreira-Perpinan<mcarreir@moises.ls.fi.upm.es> 
 o Yoshiro Miyata <miyata@sccs.chukyo-u.ac.jp> 
 o Madhav Moganti <mmogati@cs.umr.edu> 
 o Jyrki Alakuijala <more@ee.oulu.fi> 
 o Jean-Denis Muller <muller@bruyeres.cea.fr> 
 o Michael Reiss <m.reiss@kcl.ac.uk> 
 o mrs@kithrup.com 
 o Maciek Sitnik <msitnik@plearn.edu.pl> 
 o R. Steven Rainwater <ncc@ncc.jvnc.net> 
 o Nigel Dodd <nd@neural.win-uk.net> 
 o Barry Dunmall <neural@nts.sonnet.co.uk> 
 o Paolo Ienne <Paolo.Ienne@di.epfl.ch> 
 o Paul Keller <pe_keller@ccmail.pnl.gov> 
 o Pierre v.d. Laar <pierre@mbfys.kun.nl> 
 o Michael Plonski <plonski@aero.org> 
 o Lutz Prechelt <prechelt@ira.uka.de> [creator of FAQ] 
 o Richard Andrew Miles Outerbridge <ramo@uvphys.phys.uvic.ca> 
 o Rand Dixon <rdixon@passport.ca> 
 o Robin L. Getz <rgetz@esd.nsc.com> 
 o Richard Cornelius <richc@rsf.atd.ucar.edu> 
 o Rob Cunningham <rkc@xn.ll.mit.edu> 
 o Robert.Kocjancic@IJS.si 
 o Randall C. O'Reilly <ro2m@crab.psy.cmu.edu> 
 o Rutvik Desai <rudesai@cs.indiana.edu> 
 o Robert W. Means <rwmeans@hnc.com> 
 o Stefan Vogt <s_vogt@cis.umassd.edu> 
 o Osamu Saito <saito@nttica.ntt.jp> 
 o Warren S. Sarle <saswss@unx.sas.com> 
 o Scott Fahlman <sef+@cs.cmu.edu> 
 o <seibert@ll.mit.edu> 
 o Sheryl Cormicle <sherylc@umich.edu> 
 o Ted Stockwell <ted@aps1.spa.umn.edu> 
 o Stephanie Warrick <S.Warrick@cs.ucl.ac.uk> 
 o Serge Waterschoot <swater@minf.vub.ac.be> 
 o Thomas G. Dietterich <tgd@research.cs.orst.edu> 
 o Thomas.Vogel@cl.cam.ac.uk 
 o Ulrich Wendl <uli@unido.informatik.uni-dortmund.de> 
 o M. Verleysen <verleysen@dice.ucl.ac.be> 
 o VestaServ@aol.com 
 o Sherif Hashem <vg197@neutrino.pnl.gov> 
 o Matthew P Wiener <weemba@sagi.wistar.upenn.edu> 
 o Wesley Elsberry <welsberr@orca.tamu.edu> 
 o Dr. Steve G. Romaniuk <ZLXX69A@prodigy.com> 

The FAQ was created in June/July 1991 by Lutz Prechelt, who maintained
it until November 1995. Warren Sarle has maintained the FAQ since
December 1995. 


Bye

  Warren & Lutz

Previous part is part 6. 

Neural network FAQ / Warren S. Sarle, saswss@unx.sas.com

-- 

Warren S. Sarle       SAS Institute Inc.   The opinions expressed here
saswss@unx.sas.com    SAS Campus Drive     are mine and not necessarily
(919) 677-8000        Cary, NC 27513, USA  those of SAS Institute.
 *** Do not send me unsolicited commercial or political email! ***

