From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!garrot.DMI.USherb.CA!uxa.ecn.bgu.edu!mp.cs.niu.edu!rickert Tue May 12 15:49:55 EDT 1992
Article 5507 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!garrot.DMI.USherb.CA!uxa.ecn.bgu.edu!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Message-ID: <1992May9.031854.17165@mp.cs.niu.edu>
Date: 9 May 92 03:18:54 GMT
References: <6648@skye.ed.ac.uk> <1992May4.181702.13708@mp.cs.niu.edu> <6691@skye.ed.ac.uk>
Organization: Northern Illinois University
Lines: 102

In article <6691@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <1992May4.181702.13708@mp.cs.niu.edu> rickert@mp.cs.niu.edu (Neil Rickert) writes:
>
>BTW we agree about some things (see the end), so I'm not sure
>why we're disagreeing about some other things.

  We are often disagreeing because you are often assuming, from my
criticism of the CR argument, that I am a proponent of symbolic
approaches to AI.  But I have never supported that approach.  I
criticize the CR because of its flaws.  I find it persuasive, though
not conclusive, as an argument against symbolic AI, but it is far
from persuasive as a proof of the impossibility of AI.

>>Prove me wrong by discussing what would constitute computer thought.  Maybe
>>the discussion will be enlightening.
>
>It isn't an issue on which I have anything I want to say.

  Since you have been challenging people on whether certain types of
computation involve thought when done in a computer, it is strange to
have you admit you have nothing to say on what constitutes computer
thought.  However, even saying that much is saying something.

>>I will comment in more detail.  If a computer were turned off, then on reboot
>>claimed to have been thinking all the time, including the time it was
>>turned off, and if there had been no infusion of external data (a disk
>>transfusion), I would probably treat this as confusion. [...]
>>                             On the other hand, if the computer
>>claimed to be thinking the whole time only in the sense that it was
>>completely unaware of the time gap, I would treat that as quite
>>unsurprising.
>
>So I take it the answer is that you would not believe it.
>
>That is, you have some better evidence than its behavior.

  There are often times, with people as well as with computers, that
you have better evidence than behavior.  Of course you should always
use the best evidence you have, perhaps retaining a little skepticism
at times as to whether the "best" evidence is really best.

>>  Would the computer's behavior show that it had been turned off?  Only in
>>the sense that it would not be aware of events that occurred while it was
>>turned off.  This need not be much different from a person who went into
>>a brief coma, then on recovery was unaware that there had been any
>>interruption of consciousness.
>
>So the answer is that its behavior would not show it had been
>turned off (instead of, say, just not paying attention -- why
>invoke something as drastic as a coma?).

 Somebody who was not paying attention is usually aware of not having
paid attention, and of perhaps having missed something.  However,
with a coma, or with turning off a computer, such awareness is
likely to be absent.

>>  Then let's stop the silly arguments, and wait till a computer passes
>>the TT.  Then let's look inside and see whether it was really faking or
>>not.
>
>I agree that we should look inside.  I would also say that we may
>not be in a position to reach conclusions about machine understanding
>until we have some programs to consider.
>
>But I don't think we have to give up all other attempts to answer the
>question in the meantime.  It's an interesting philosophical problem,
>or at least I think it is.

  It depends on whether it is done constructively or destructively.
If Searle's critique is treated as an argument against particular
symbolic approaches to AI, it is quite constructive, for it points
to where they fail (in handling semantics, for example).  If treated
as final proof that all attempts at AI are doomed to failure it is
destructive, for it merely diverts attention from the real issues of
understanding the human mind.

>>  My main reason for doubting that it would be faked is that to
>>successfully fake an extensive TT would require a computer program of
>>unimaginable combinatorial complexity, and I consider that unlikely in
>>the extreme.
>
>Maybe.  Or maybe following rules (as we're supposed to imagine
>in the Chinese Room) will count as "faking".  This may become
>clearer once we know more about programs, more about humans, etc.

 Much depends on what you mean by "following rules".  If the computer
directly manipulates Chinese characters according to rules, I am
somewhat skeptical that it will be successful.  Perhaps in saying
so I am just revealing my limited imagination.  If it were successful,
that might count as faking, but I would reserve judgement on that.

 But I am more interested in seeing the development of a program with
the right kind of learning, and then sending the machine off to
Chinese school.  In this case the computer might be moving data
around according to rules, but it would not be directly applying rules
to the Chinese characters.  Indeed there may be no way to distinguish,
from within the program, which types of data actually correspond to
the Chinese characters.  Although in a strictly formal sense, as
a TM, the computer is following rules, I find it hard to interpret
this as manipulating the Chinese characters according to rules.
Moreover, this is how you might have a good chance to come up with
machine understanding.
