From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Thu Jan 16 17:20:01 EST 1992
Article 2676 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2676 sci.philosophy.tech:1827 sci.logic:790
Newsgroups: comp.ai.philosophy,sci.philosophy.tech,sci.logic
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Penrose on Man vs. Machine
Message-ID: <1992Jan13.230532.26592@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Jan7.031553.24886@oracorp.com> <1992Jan7.105117.7193@husc3.harvard.edu> <1992Jan7.191853.17310@gpu.utcs.utoronto.ca> <5925@skye.ed.ac.uk> <1992Jan9.211337.14379@gpu.utcs.utoronto.ca> <5939@skye.ed.ac.uk>
Date: Mon, 13 Jan 1992 23:05:32 GMT

In article <5939@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>>>Actually, it would _waste_ a lot of time arguing about definitions
>>>of understanding.
>
>I stand by the claim that it will be a waste of time.  A tremendous
>waste of time.  Virtually every net debate about definitions confirms
>this, in my opinion.

This only shows that there is a lot of ambiguity about the term 
('understanding') and confirms, in my opinion, the need to decide on a definition
of sorts - something which could be used as a criterion.

<AP>:
>>I have to disagree. Understanding a language is an issue burdened
>>with too many irrelevant (for the present purpose) side issues.
>
>Not at all.  All that's required is the distinction between
>a language you understand and one you do not.  I happen not to
>understand Chinese.  I don't see much problem in deciding this.
>I don't have to go into subtleties.

But the point is not whether you know what you understand, but how you
decide whether someone (or something) else understands. Can you say
what criteria you use? Isn't this the place where knowing exactly what
understanding is would help?
>
>To me, all this stuff about the need to define "understand"
>amounts to little more than saying let's not even think about
>Searle's argument.  

Not at all. It rather comes from the realisation that Searle uses the term in 
a loose and inconsistent way (he applies different criteria to a person and to 
the CR).
...
>                ...And it's possible to ask someone to define
>terms forever without really getting anywhere.
>
So what? Discussing an issue which is not well-defined is an even faster way
of getting nowhere.

<AP>:
>>However, how does Searle know that the person inside does not understand
>>group theory? He has his friend ask the person questions to find out if
>>he/she gives correct answers. If the person gave the correct answers, would
>>Searle demand to open his/her head to see if there is understanding
>>there? And if he did look inside the person's head, would it help him to know
>>if there is understanding inside?
>
>This is, of course, just the approach I've been calling "behaviorist".
>What you're suggesting is that if the Room acts like it understands
>Chinese, then we ought to say it does.  But that's begging the
>question, as Searle has pointed out.
>
You mean the question of whether the CR understands? No, it is not begging the 
question; it is just using the only criterion available to decide. 
If you (or Searle) think that acting 'like it understands' is 
not enough, then say what is. Do you have a better method of determining 
whether someone understands (English, Chinese, group theory, or whatever)?
Whether you call this approach 'behaviorist' or something else contributes
nothing to the problem. 

>Indeed, what would you think if Searle found the person answering
>questions about group theory was actually being given the answers
>by someone else (via some radio link, say)?  
>
I do not see how this contributes to the discussion. This someone (or something)
else would just be part of a (distributed) CR, don't you think?

>Moreover, we do open people up, and thereby learn more about how they
>work; 

We do (though not me personally, thank God), but does it help us find out
whether they understand something? Again, I find this irrelevant to the 
discussion.

>    ...and we can certainly look at the workings of programs.  So
>we're not confined to looking at behavior of the sort tested in
>the Turing Test.
>
However, looking at the workings of the program could only help us decide
whether there is understanding there if we had a clear idea of what
understanding is.

<AP>:
>>In brief: Searle is using different criteria to determine whether the CR 
>>understands something (group theory, Chinese, or whatever) than the
>>ones he applies to a person (inside). The whole argument is from the
>>beginning stacked against the CR and hence is invalid. Only by using the
>>same criteria can we validly determine whether both systems (the
>>person and the CR) possess the same attribute (of understanding).
>
>We can use different criteria to determine whether there's oxygen
>in the Earth's atmosphere than we do to determine the same for
>Venus.  Is that supposed to be fatally flawed?
>
I am puzzled that you find this a good analogy. If the criterion for determining
the presence of oxygen on Venus were applied to Earth and gave positive results,
and we then turned around and said 'This is not enough', then you'd have a point.
However, before we apply any method to Venus, we have to be sure that it works
on Earth.
I have a better analogy for you:
You go for an exam to test your knowledge of Chinese, together with a colleague
who looks oriental. He is given a test which he answers correctly, and he passes.
You are given an identical test and answer correctly, but then the examiner says:
'You look Caucasian (I assume so for the sake of argument); I am not sure whether
you 'really' understand the spirit of the language.' What would you say?

>At some point, we may well have a test for understanding, that
>we regard as sufficient, that we can apply to both humans and
>machines.  However, the lack of this test does not show that
>any argument that machines cannot understand (merely by running
>the right program) must be wrong.
>
Of course not, but some may be flawed. To decide, we need to agree on the
criteria used to determine whether someone (or something) understands. 
Assume that on opening the CR Searle finds a lot of complicated hardware or
wetware. What then? What would he have to find to decide that the CR understands?
'Intentionality'? What does it look like? He opens the CR not knowing what he
is looking for. How can he find understanding if he does not know what it
looks like? You will probably say "But he knows what lack of understanding is".
Wrong! Since shuffling those symbols according to some rules produces the 
correct answers, maybe that's all there is to it! 
No? Then we are back to the question
'what is understanding?'. Can one 'understand' abstract group theory?  Can
a computer? What is the difference between these two 'understandings'? I stick
to this example to avoid the spurious problem of sensory input (which also comes
into play when we talk about understanding language). Mr. Zeleny claimed
that the notion of 'finitude' is inaccessible to machines. However, as I have
pointed out, there are programs (compilers) which can detect infinite loops
in, say, a Fortran program. Consequently, I do not see how to decide that 
computer understanding would be inferior to mine (even though I'd like to
think so). 
We do not know (understand) how our wetware works, and
still conclude that someone understands (Chinese, group theory, etc.). So the 
only sensible thing is, in my opinion, to use the same criteria in the case
of the CR.
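As an aside on the compiler remark above: the halting problem tells us no
program can detect *all* infinite loops, but compilers do flag the trivial
cases, such as a loop whose condition is a constant true and whose body
contains no way out. A minimal sketch of such a check (in Python rather than
Fortran, purely for illustration; the function name and the choice of "escape"
statements are my own assumptions, not anything from the discussion):

```python
import ast

def has_obvious_infinite_loop(source: str) -> bool:
    """Flag `while True:`-style loops (constant-true condition) whose
    body contains no break, return, or raise -- the kind of trivially
    infinite loop a compiler's static analysis can warn about.

    This is deliberately conservative: it cannot decide the general
    case (halting problem), only recognize an obvious pattern."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            test = node.test
            # Is the loop condition a constant that evaluates to true?
            if isinstance(test, ast.Constant) and bool(test.value):
                # Does anything inside the body let control escape?
                escapes = (ast.Break, ast.Return, ast.Raise)
                if not any(isinstance(n, escapes)
                           for stmt in node.body
                           for n in ast.walk(stmt)):
                    return True
    return False
```

For example, `has_obvious_infinite_loop("while True:\n    x = 1\n")` reports
the loop, while the same code with a `break` in the body is passed over. The
point stands either way: a purely mechanical symbol-shuffler can detect a
property like "this never terminates" in restricted cases.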

>-- jeff

Andrzej
-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca
