Newsgroups: comp.robotics
Path: brunix!news.Brown.EDU!noc.near.net!howland.reston.ans.net!spool.mu.edu!uwm.edu!cs.utexas.edu!utnut!torn!news.ccs.queensu.ca!steve
From: steve@cs.ubc.ca (Steve Gillen)
Subject: Re: Micro-coding Robots?
Message-ID: <CAvtw4.7IH@knot.ccs.queensu.ca>
Sender: news@knot.ccs.queensu.ca (Netnews control)
Organization: Department of Physics, Queen's University at Kingston
References: <22ooc0INNe10@uwm.edu>
Date: Wed, 28 Jul 1993 16:02:28 GMT
Lines: 106

In article <22ooc0INNe10@uwm.edu>, rick@ee.uwm.edu (Rick Miller) writes:
>The notion of 'microcoding' robotic behavior has been knocking around in
>my head since I began fidgeting with neural networks... even though those
>two topics have almost nothing to do with each other.
>
>The idea is that instead of putting a microprocessor on-board a robot, I'd
>simply put a *memory* chip in it.  A little support circuitry (a clock to
>latch the address lines, maybe some R-C delays, power MOSFETs for motors)
>would be sufficient, the memory chip should probably be an EEPROM.
>
>The 'address' lines would be used as inputs and the 'data' lines as outputs.
>Inputs would necessarily be binary, like micro-switch closures or maybe
>threshold-crossings or the differentials (change) of analog sensors, and
>some outputs could be fed back to inputs to implement counters, branching,
>and subsumption.  You could divide up the input/output lines any way you
>like, maybe even making some into a program counter or signifying the
>"state" of the machine...
>
>I originally concocted this idea to implement neural-net behavior.  Since
>digital neural-net computations are quite intensive, I figured I'd train
>the network simulation, then feed it all possible inputs while storing the
>outputs as data on the memory chip.  Thus, the chip would *behave* exactly
>as the neural-net... but instantaneously, without run-time computation.
>
>It then occurred to me that I didn't *have* to use a neural-net algorithm
>to "program" such a device.  I could do it with *any* algorithm which gives
>digital outputs for digital inputs.  The point is that the *computations*
>are already done, the results at run-time are practically instantaneous in
>comparison.
>
>The difficulties I see are:
>
>1.) Writing the 'program' to the chip:
>	Actually getting the data onto the chip could require more
>	hardware than the robot!  How much do EEPROM programmers cost?
>	How difficult would it be to program the memory right on the bot?
>
>2.) What kind of memory to use:
>	EEPROM would be nice, since you could turn the bot off and on.
>	A 4-MB SIMM would give lots of storage space.  What's the biggest
>	low-power static RAM chip nowadays anyway?
>
>	(Incidentally, I thought about using PLA's but it seems that they
>	 are a bit trickier to 'program' and may not be RE-programmable.
>	 Besides, a memory *guarantees* that you can do *all* input/output
>	 pairs, where a PLA may not have enough gates to do the job.)
>
>3.) Planning the interface-support circuitry:
>	What to include?  How many?  All inputs ('address' lines) would
>	necessarily be latched and clocked through CMOS to make sure they
>	hit the memory as clean binary signals and give it enough time to
>	resolve the proper outputs.
>		Counter(s)?
>		Delay(s)?
>		A/D and/or D/A converter(s)?
>		Differential amplifiers (to detect *changes* in outputs)
>		Integral amplifiers (for long-term summation)
>		A white noise source!  (Gotta have *some* randomness.)
>		Other stuff???
>
>As per my original notion, this type of robot would be an *ideal* test-bed
>for neural-net robotics...  no run-time computation, little analog circuitry,
>practically instantaneous action!  All that laborious computation would be
>done before the 'bot ever powers-up.
>
>Any thoughts or comments?  Anyone?
>
>RICK MILLER            <rick@ee.uwm.edu> Voice: +1 414 221 3403 FAX: -4744
>16203 WOODS            Send me a postcard, and I'll return another to you!
>53150-8615 USA         Sendu al mi bildkarton, kaj mi redonos alian al vi!

  This is my first time replying in this newsgroup, so forgive me if I
screw anything up here.
  I have had similar ideas for what I thought were simple applications
that shouldn't need the power of a microprocessor.  My trouble has always
come down to the support circuitry mushrooming as you try to add
features.  You can't always get chips in the bit widths you need, and
before you know it you have 10 chips, which is getting a bit more
involved.
  With the newest microcontroller chips you can now do almost anything
with just 1 or 2 chips (i.e. upgradable by software rather than by more
chips).  You can get single-chip controllers that attach to serial EEPROM
chips which generate their own programming voltages (i.e. they work like
regular memory chips, but they are slow to write to).  This way you could
include a synchronous serial line to the controller (some chips have one
built in) and download your net data reasonably painlessly, letting the
microcontroller dump it to the EEPROM.  I realize the speed is not as
great as a pure hardware solution, but if you can just read the net
output results from memory and put them out on the controller's I/O
ports, it shouldn't take that long, and how fast does it have to be
anyway?  Unless you're computing vision-type data (i.e. huge amounts of
data, fast), I would think this would be fast enough, and the hardware
could be small enough to use as subprocessors for each limb (like
Commander Data on Star Trek: TNG).
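  For what it's worth, the whole scheme sketches out in a few lines.
Here's a toy model (the stand-in "net", the bit assignments, and the
sizes are all invented for illustration) of the two halves: an offline
step that walks the trained net through every possible input pattern and
records the outputs, and the run-time step, which is nothing but a
single memory read:

```python
# Host side: "compile" a trained net into a lookup table.
N_INPUTS = 8  # number of address lines (binary sensor bits)

def net(inputs):
    """Toy stand-in for a trained network: any function from
    n input bits to m output bits will do.  Here: if the left
    bumper bit (bit 0) is set, drive the right motor (bit 1),
    otherwise drive the left motor (bit 0)."""
    return 0x02 if inputs & 0x01 else 0x01

# Enumerate every possible input pattern once, offline, and
# store the net's answer for each.  This is the data that
# would be burned into the (EE)PROM: 8 inputs -> 256 bytes.
table = bytes(net(addr) for addr in range(2 ** N_INPUTS))

# Robot side: the entire run-time "program" is one lookup.
def step(sensor_bits):
    return table[sensor_bits]  # address in, data out
```

Burn 'table' into the EEPROM and wire the sensors to the address pins,
and you get step() in hardware with no run-time computation at all.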
  Anyhow, let me know if I'm missing the boat on the requirements of
the application.  I love to read this stuff.  I've only been subscribed
to this newsgroup for 3 or 4 weeks now, but it's great, and I enjoy your
submissions, as they seem practical for homebrew (i.e. not
robotics-research-lab) techs.
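  P.S.  The feedback trick (wiring a few data lines back to the address
lines so the chip has state) simulates just as easily.  In this made-up
example, two output bits are latched back as the high address bits,
turning the memory into a 2-bit counter that advances while a 'go'
sensor bit is high:

```python
# Simulate a ROM whose two low output bits are latched back
# into the high address bits on each clock: the memory chip
# plus one latch behaves as a finite state machine.
def rom(addr):
    """Truth table: addr = (state << 1) | go.  Next state is
    state+1 (mod 4) while 'go' is high, else hold.  The bit
    layout here is invented for illustration."""
    state, go = addr >> 1, addr & 0x01
    return (state + 1) % 4 if go else state

state, history = 0, []
for go in (1, 1, 1, 0, 1):          # one sensor sample per clock
    state = rom((state << 1) | go)  # feed outputs back as address
    history.append(state)
# history counts 1, 2, 3, holds at 3 while go=0, then wraps to 0
```

One latch plus the memory chip, and you have a clocked state machine
with no processor anywhere.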

*------------------------------------------------------------------------*
*   Steve Gillen                *       One of these days I gotta        *
*   Electronics Technologist    *       buy me one of those witty        *
*   Queens University Physics   *       tag lines!                       *
*   Kingston, Ont. Canada       *                                        *
*   steve@phy-server.phy.queensu.ca                                      *
*------------------------------------------------------------------------*
