From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!wupost!waikato.ac.nz!aukuni.ac.nz!kcbbs!nacjack!codewks!system Wed Oct 14 14:58:58 EDT 1992
Article 7241 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!wupost!waikato.ac.nz!aukuni.ac.nz!kcbbs!nacjack!codewks!system
Newsgroups: comp.ai.philosophy
Subject: Re: Speed of Algorithms (Was Re: Brain & Mind (Was Re: Logic & God))
Message-ID: <345JsB1w165w@CODEWKS.nacjack.gen.nz>
From: system@CODEWKS.nacjack.gen.nz (Wayne McDougall)
Date: Mon, 12 Oct 92 13:41:37 NZDST
Organization: The Code Works Limited, PO Box 10-155, Auckland, New Zealand
Lines: 47

Quote follows:
In article <1992Oct5.181741.7241@spss.com> markrose@spss.com (Mark 
Rosenfelder) writes:
>In article <1992Oct5.022907.6131@meteor.wisc.edu> tobis@meteor.wisc.edu 
>(Michael Tobis) writes:

>Let's try to put this in perspective.  In a truly astonishing mismatch of
>hardware to software, we have chosen to execute an enormously complicated
>AI program on the single-processor, 0.1-flops processor consisting of
>John Searle in a room.  Consider a single question and answer, which
>require perhaps a billion instructions and offer Searle steady employment
>for many years.

If consciousness is purely algorithmic, then surely the rate of
implementation of the algorithm doesn't matter. If there is more to it
(grounding, neurophysiology, quanta, or even something as yet
unidentified) then the rate may be significant, but on the hypothesis
that consciousness arises from formal symbol manipulations, it cannot.
END OF QUOTE

I would like to suggest that any algorithm with real-time I/O (which I 
suggest any respectable AI program would require) IS time-dependent. 
That is, an algorithm may fail (or produce "wrong results") depending on 
its speed of execution.

For a non-AI example, consider an algorithmic system monitoring a nuclear 
power plant.
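A toy sketch of the point (the scenario and numbers are mine, not from 
the thread): the same monitoring algorithm, fed the same stream of 
timestamped sensor readings, drops data when its per-reading processing 
time exceeds the arrival interval -- its correctness depends on its speed.

```python
def monitor(readings, step_cost_s):
    """Simulate a monitor that needs step_cost_s seconds per reading.
    Readings that arrive while it is still busy are simply lost."""
    missed = 0
    busy_until = 0.0
    for t, value in readings:          # (timestamp, value) pairs
        if t < busy_until:
            missed += 1                # arrived mid-computation: dropped
            continue
        busy_until = t + step_cost_s   # occupied processing this reading
    return missed

stream = [(i * 0.1, 0.0) for i in range(10)]  # one reading every 0.1 s
fast = monitor(stream, 0.01)   # faster than the input rate: nothing lost
slow = monitor(stream, 0.25)   # slower than the input rate: readings lost
```

Run fast enough, the algorithm sees everything; run it slowly (Searle in 
his room) and the world moves on between steps.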

For slightly more relevant systems, consider the effect of an AI system 
critically listening to music (sampling rates) or trying to comprehend 
human speech. I appreciate these problems are at a "technical" level. My 
question is whether it is possible to separate "technical" problems from 
the AI question, and whether there are "non-technical" problems. Perhaps 
some arise in practice, if not in theory.

I don't find it difficult to conceive of "step-like" processes that 
require a certain threshold of power for an algorithm to function at all.
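One such step-like threshold (my illustration, not from the thread) is 
the Nyquist sampling limit: below a hard speed threshold of twice the 
signal frequency, no algorithm can distinguish a tone from its alias, 
however cleverly it computes.

```python
import math

def sample(freq_hz, rate_hz, n=8):
    """Sample a sine wave of freq_hz at rate_hz samples per second,
    rounded so floating-point noise does not mask exact aliasing."""
    return [round(math.sin(2 * math.pi * freq_hz * i / rate_hz), 6)
            for i in range(n)]

# At an 8 Hz sampling rate the Nyquist limit is 4 Hz.  A 3 Hz tone is
# below the limit and yields samples distinct from a 1 Hz tone, but a
# 9 Hz tone aliases exactly onto 1 Hz: the samples are identical, and
# no downstream processing can tell the two signals apart.
distinct = sample(3, 8) != sample(1, 8)
aliased  = sample(9, 8) == sample(1, 8)
```

The failure is not gradual: above the threshold the information is 
there, below it the information is gone before the algorithm begins.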

-- 
  Wayne McDougall, BCNU
  This .sig unintentionally left blank.

Hello! I'm a .SIG Virus. Copy me and spread the fun.


