From newshub.ccs.yorku.ca!torn!utcsri!rpi!uwm.edu!ogicse!news.u.washington.edu!stein.u.washington.edu!forbis Wed Oct 14 14:58:11 EDT 1992
Article 7169 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!uwm.edu!ogicse!news.u.washington.edu!stein.u.washington.edu!forbis
From: forbis@stein.u.washington.edu (Gary Forbis)
Newsgroups: comp.ai.philosophy
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct8.202536.4796@u.washington.edu>
Date: 8 Oct 92 20:25:36 GMT
Article-I.D.: u.1992Oct8.202536.4796
References: <1992Oct5.022907.6131@meteor.wisc.edu> <1992Oct5.181741.7241@spss.com> <1992Oct8.174224.20547@meteor.wisc.edu>
Sender: news@u.washington.edu (USENET News System)
Organization: University of Washington, Seattle
Lines: 36

I have deleted much of the original text so as to focus on one assertion.

In article <1992Oct8.174224.20547@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
>In article <1992Oct5.181741.7241@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>>In article <1992Oct5.022907.6131@meteor.wisc.edu> tobis@meteor.wisc.edu 
>>(Michael Tobis) writes:
>
>>Let's try to put this in perspective.  In a truly astonishing mismatch of 
>>hardware to software, we have chosen to execute an enormously complicated 
>>AI program on the single-processor, 0.1-flops processor consisting of 
>>John Searle in a room.  Consider a single question and answer, which require
>>perhaps a billion instructions and offer Searle steady employment for many 
>>years.  
>
>If consciousness is purely algorithmic, then surely the rate of implementation
>of the algorithm doesn't matter. If there is more to it (grounding,
>neurophysiology, quanta, or even something as yet unidentified) then the
>rate may be significant, but on the hypothesis that consciousness arises
>from formal symbol manipulations, it cannot.
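
For a rough sense of scale: at 0.1 instructions per second, the billion
instructions above would take about 10^10 seconds, or roughly three
centuries, so the mismatch Mark describes is not a small one.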

I suspect that for consciousness to arise, there must be some synchronization
between the conscious entity and the thing it is conscious of.  That is to
say, it seems unlikely to me that an entity will be conscious of events that
cannot be presented to it within the natural time scale of its recognition.
Events may be so short that the entity perceives only spurious artifacts of
no consequence; events may be so long that the entity perceives only
stability.
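
To make the timing point concrete, here is a toy sketch in Python (entirely
my own construction; treating "recognition" as sampling at one fixed
interval is a gross simplification, not a claim about how brains work):

    import math

    def perceive(event_duration, recognition_interval, total_time=100.0):
        # Crude model: the observer samples the world once per
        # recognition_interval and reports whether a repeating event
        # of the given duration is "on" at that instant.
        samples = []
        t = 0.0
        while t < total_time:
            phase = math.fmod(t, 2.0 * event_duration)  # on for half its period
            samples.append(1 if phase < event_duration else 0)
            t += recognition_interval
        return samples

    # Events far shorter than the recognition interval come out as
    # apparently patternless flicker (spurious structure); events far
    # longer than the observation window look like unchanging stability.
    print(perceive(0.013, 1.0)[:20])
    print(perceive(500.0, 1.0)[:20])

The point is only that the same stream of events can look like patternless
noise or like a frozen constant, depending on the time scale at which it is
sampled.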

I would like to point out that computers are not themselves formal symbol
manipulators; what they do may be interpreted as formal symbol manipulation,
because they were designed specifically to support that interpretation.  I
think this recasts the question as "can entities whose behavior can be
interpreted as formal symbol manipulation be conscious?"  If the human brain
can be mapped onto and described in terms of formal symbol manipulation,
then I would think the answer is "yes".
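
As a toy illustration of what I mean by "interpreted as" (the mapping and
the rule below are my own invention, for illustration only):

    # Physically, the "machine" below only overwrites integers in a list.
    # Under the mapping SYMBOLS, the very same behavior can be read as a
    # formal rule: "in state q0, reading A, write B and move right."
    SYMBOLS = {0: 'blank', 1: 'A', 2: 'B'}   # our reading, not the machine's

    def step(tape, head, state):
        if state == 'q0' and head < len(tape) and tape[head] == 1:
            tape[head] = 2
            return tape, head + 1, 'q0'
        return tape, head, 'halt'

    tape, head, state = [1, 1, 0], 0, 'q0'
    while state != 'halt':
        tape, head, state = step(tape, head, state)

    print(tape)                              # [2, 2, 0]
    print([SYMBOLS[cell] for cell in tape])  # ['B', 'B', 'blank']

Nothing in the physics singles out that reading; we supply it, and the
hardware was built so that the reading stays consistent.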

--gary forbis@u.washington.edu
