From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!uunet!comp.vuw.ac.nz!waikato.ac.nz!aukuni.ac.nz!kcbbs!nacjack!codewks!system Wed Oct 14 14:58:28 EDT 1992
Article 7194 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!uunet!comp.vuw.ac.nz!waikato.ac.nz!aukuni.ac.nz!kcbbs!nacjack!codewks!system
Newsgroups: comp.ai.philosophy
Subject: Re: Failing Algorithms (Was: Brain and Mind (Was: Logic and God))
Message-ID: <maecsB1w165w@CODEWKS.nacjack.gen.nz>
From: system@CODEWKS.nacjack.gen.nz (Wayne McDougall)
Date: Thu, 08 Oct 92 09:19:09 NZDST
Organization: The Code Works Limited, PO Box 10-155, Auckland, New Zealand
Lines: 55


In article <1992Oct5.022907.6131@meteor.wisc.edu> tobis@meteor.wisc.edu 
(Michael Tobis) writes:
>If I am an algorithm, and you remove neurons essential to the
>implementation of the algorithm, I don't know the answer to your
>question. Since I can't imagine that I am an algorithm, I can't imagine
>what answer you propose.
>

Consider then a "supervisory algorithm" (SA), which monitors the input 
and output of this algorithm. With a memory, the SA could "notice" that 
a particular class of question, which earlier it could answer (according 
to its memory), now produces no answer or a different answer.
With its memory, it could "test" its hypothesis that its algorithm has 
been "damaged". It could then initiate repairs, either from first 
principles (!) or by relearning (from a backup?). At the very least it 
could mark the output from that algorithm as questionable.
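
To make this concrete, here is a rough Python sketch of one possible 
shape for an SA. Every name in it is my own invention, and it is only 
an illustration, not a claim about how such a thing must be built:

import random

class SupervisoryAlgorithm:
    """Wraps a lower-level algorithm and watches its answers over time."""

    def __init__(self, algorithm):
        self.algorithm = algorithm   # the lower-level algorithm under watch
        self.memory = {}             # remembered question -> answer pairs

    def ask(self, question):
        answer = self.algorithm(question)
        if question in self.memory and self.memory[question] != answer:
            # A question we could once answer now yields a different
            # answer (or none): hypothesise damage, keep the remembered
            # answer for comparison, and mark the output as questionable.
            return answer, "QUESTIONABLE"
        self.memory[question] = answer
        return answer, "OK"

    def self_test(self, sample_size=10):
        """Re-ask a sample of remembered questions; return the failures."""
        sample = random.sample(list(self.memory),
                               min(sample_size, len(self.memory)))
        return [q for q in sample if self.algorithm(q) != self.memory[q]]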


Compare this with one of my self-appointed tasks after I've been 
concussed: recalling a sample of times tables (and observing the speed 
of my responses) - in effect, running the self_test sketched above. 
Notice that those in the early stages of Alzheimer's disease are AWARE 
of their loss of abilities. Note those who stop driving because they 
determine they no longer have the skills required to do so.

To extend the notion of the SA: it need not lead to an infinite 
regression. While an SA at each level would be desirable, there is no 
reason why a lower-level SA could not monitor those at higher levels, 
and SA peers could monitor each other. Compare this with the voting / 
polling among the redundant computers on the Space Shuttle.
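
Again only a sketch (the names are invented, and the Shuttle's actual 
redundancy management is far more involved than this): several peer 
instances of the same algorithm answer a question, the majority answer 
wins, and any dissenter is flagged as a candidate for repair:

from collections import Counter

def majority_vote(instances, question):
    # Ask every peer instance the same question.
    answers = [instance(question) for instance in instances]
    # The most common answer carries the vote.
    majority, _count = Counter(answers).most_common(1)[0]
    # Dissenting peers become candidates for repair or relearning.
    suspects = [i for i, a in enumerate(answers) if a != majority]
    return majority, suspects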


Now if you have a well-grounded system with suitable feedback, the SA 
can also test the validity of its sub-algorithms by "testing" the 
answers. Internal reasonableness tests can also operate. Note that this 
step would be beyond many human check-out operators, who will punch the 
wrong key on the till (missing neurons?) and try to bill me $100 for a 
packet of toothpaste.
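
Such a reasonableness test can be as simple as a range check. Another 
tiny sketch (the item and the price bounds are invented for 
illustration):

def reasonable_price(item, price, expected_ranges):
    # Flag any price outside the remembered range for the item.
    low, high = expected_ranges.get(item, (0.0, float("inf")))
    return low <= price <= high

expected_ranges = {"toothpaste": (1.00, 10.00)}
print(reasonable_price("toothpaste", 100.00, expected_ranges))  # False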

In summary: an SA can imagine it is executing a lower-level algorithm, 
and respond accordingly. A dense net of algorithms and SAs could become 
quite impressive IMHO.


-- 
  Wayne McDougall, BCNU
  This .sig unintentionally left blank.

Hello! I'm a .SIG Virus. Copy me and spread the fun.
  Scot Wilson, The man with One T.            u9044140@wraith.cs.uow.edu.au
    "Its Scot with 1 T because all the other four letter words were used."

Warning: This .signature may suddenly, and without prior notice, be cut shor
