From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan 16 17:19:32 EST 1992
Article 2626 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2626 sci.philosophy.tech:1799 sci.logic:781
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech,sci.logic
Subject: Re: Penrose on Man vs. Machine
Message-ID: <5939@skye.ed.ac.uk>
Date: 10 Jan 92 17:15:29 GMT
References: <1992Jan7.031553.24886@oracorp.com> <1992Jan7.105117.7193@husc3.harvard.edu> <1992Jan7.191853.17310@gpu.utcs.utoronto.ca> <5925@skye.ed.ac.uk> <1992Jan9.211337.14379@gpu.utcs.utoronto.ca>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 83

In article <1992Jan9.211337.14379@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <5925@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>>In article <1992Jan7.191853.17310@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>
>>>It would save a lot of time and bandwidth if we first decided what is meant by
>>>'understanding'. Is there an unambiguous definition of the notion of
>....
>>
>>Actually, it would _waste_ a lot of time arguing about definitions
>>of understanding.

I stand by the claim that it will be a waste of time.  A tremendous
waste of time.  Virtually every net debate about definitions confirms
this, in my opinion.

But if you really want to go into this, you might start with
"intentionality", eg in Searle's book by that name.

>>>In my opinion much of the Chinese Room discussion falls into this category.
>>
>>One of the virtues of the Chinese Room is that it relies on our
>>ability to distinguish between languages we can understand and
>>ones we cannot, something we can do without much worry about how
>>"understand" in this sense is defined.
>
>I have to disagree. Understanding a language is an issue burdened
>with too many irrelevant (for the present purpose) side issues.

Not at all.  All that's required is the distinction between
a language you understand and one you do not.  I happen not to
understand Chinese.  I don't see much problem in deciding this.
I don't have to go into subtleties.

To me, all this stuff about the need to define "understand"
amounts to little more than saying let's not even think about
Searle's argument.  And it's possible to ask someone to define
terms forever without really getting anywhere.

>Note how much traffic was generated by raising the problem of sensory
>input.

But almost everything raises a huge amount of traffic, so I'm not
sure what we can conclude from it.

>However, how does Searle know that the person inside does not understand
>group theory? He has his friend ask the person questions to find out if
>he/she gives correct answers. If the person gave the correct answers, would
>Searle demand to open his/her head to see if there is understanding
>inside? And if he did look inside the person's head, would it help him to know
>if there is understanding there?

This is, of course, just the approach I've been calling "behaviorist".
What you're suggesting is that if the Room acts like it understands
Chinese, then we ought to say it does.  But that's begging the
question, as Searle has pointed out.

Indeed, what would you think if Searle found the person answering
questions about group theory was actually being given the answers
by someone else (via some radio link, say)?  

Moreover, we do open people up, and thereby learn more about how they
work; and we can certainly look at the workings of programs.  So
we're not confined to looking at behavior of the sort tested in
the Turing Test.

>In brief: Searle is using different criteria to determine whether the CR
>understands something (group theory, Chinese, or whatever) than the
>ones he applies to a person (inside). The whole argument is from the
>beginning stacked against the CR and hence is invalid. Only by using the
>same criteria can we validly determine whether both systems (a
>person and the CR) possess the same attribute (of understanding).

We can use different criteria to determine whether there's oxygen
in the Earth's atmosphere than we do to determine the same for
Venus.  Is that supposed to be fatally flawed?

At some point, we may well have a test for understanding that
we regard as sufficient and that we can apply to both humans and
machines.  However, the lack of such a test does not show that
any argument that machines cannot understand (merely by running
the right program) must be wrong.

-- jeff
