From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!Sirius.dfn.de!zrz.tu-berlin.de!news.netmbx.de!Germany.EU.net!mcsun!uknet!edcastle!aiai!jeff Tue May 12 15:48:35 EDT 1992
Article 5359 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!Sirius.dfn.de!zrz.tu-berlin.de!news.netmbx.de!Germany.EU.net!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <6639@skye.ed.ac.uk>
Date: 1 May 92 18:03:01 GMT
References: <1992Mar28.141316.16968@oracorp.com> <6590@skye.ed.ac.uk> <524@tdatirv.UUCP>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 78

In article <524@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <6590@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>|In article <1992Mar28.141316.16968@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>|>
>|>I can only reiterate what I have said before.  If you wish to show that
>|>computers lack something that humans possess, it seems to me that you
>|>need to show (a) that computers lack it, and (b) that humans possess
>|>it. If you only prove (a) then you have not proved your point.
>|
>|Not to your satisfaction, perhaps.  I see no reason to _prove_ that
>|humans have understanding in the sense required for the Chinese Room,
>|for instance.
>
>That is *not* what Daryl is asking for; he, and I, are asking for evidence
>(not even proof, just evidence) that humans understand in a way that
>Searle's CR does not.

I see no reason to prove that humans have understanding in the sense
required for the Chinese Room.  And the idea that this is something we
ought to doubt, that we need and don't have evidence that humans can
understand in that sense, looks to me like nothing more than an
attempt to avoid having to consider the arguments against computer
understanding.

And of course there are familiar arguments that computers do not
have such understanding (or at least not merely by running a program).

So there is your "way" in which humans understand that the CR does not.

Of course, there's another popular way to avoid considering the
arguments, namely to demand definitions of "understand".

>We are really all agreed that humans understand things, what is dividing
>us is how to determine if something else does.
>
>So Searle has shown that the CR lacks something he chooses to call
>'understanding', why should I believe that humans have this particular
>brand of 'understanding' and not some other?  Why should Searle's definition
>(or lack of it) be any better than mine? or Daryl's?

Maybe you don't care about understanding in that sense.  Maybe, for
example, you're happy with a behavioral definition.  In which case,
we're not interested in the same issue and hence have little reason to
continue disagreeing.

>|What interests me in this is whether or not computers can understand,
>|and not in whether or not I can convince a determined skeptic that
>|humans can understand.  Other people may have different interests,
>|of course.
>
>That is also my interest (in this group).  But what is understanding?
>What is it that we humans have that we are looking for in computers?
>Until we know that we cannot answer the question.

What interests me in this is whether or not computers can understand,
and not in whether or not I can explain what understanding is.
Other people may have different interests.

Of course, it may be that you have something interesting to say
about what understanding is.  If so, why not say that
instead of asking the "anti-AI" people to do it?

>|Now, if an argument against computer understanding also applied
>|to humans, I would regard that as reason to conclude the argument
>|was wrong.  But I'm certainly not going to conclude the argument
>|is wrong just because no one has yet shown it doesn't apply to humans.
>|Why should I?
>
>But what if I show reason to believe it might, or could, apply to humans?
>
>Does this not at least weaken the force of the argument?

Yes, if you can show such reasons.

But producing such reasons is quite different from saying I have
to produce reasons to the contrary!

-- jd


