From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!caen!garbo.ucc.umass.edu!dime!orourke Tue Jan 28 12:16:30 EST 1992
Article 3056 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!caen!garbo.ucc.umass.edu!dime!orourke
From: orourke@unix1.cs.umass.edu (Joseph O'Rourke)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <42143@dime.cs.umass.edu>
Date: 23 Jan 92 14:41:51 GMT
References: <11774@optima.cs.arizona.edu>
Sender: news@dime.cs.umass.edu
Reply-To: orourke@sophia.smith.edu (Joseph O'Rourke)
Organization: Smith College, Northampton, MA, US
Lines: 34

In article <11774@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>I am not saying that you can't establish the understanding of a
>machine "beyond the shadow of a doubt", I'm saying you have no reason
>at all to believe that a machine understands just because you can't
>stump it with hard questions.  A human, yes.  A machine, no.  The
>difference is that humans use understanding to answer questions and a
>machine uses syntactic manipulation. 

I thought the issue was to attempt to gain overwhelming evidence
that a subject understands, in a manner that does not assume anything
about the subject's methods.  You seem to be saying there is no
subject-independent test of understanding:  what would be convincing
evidence that a human understands is not convincing if someone reveals
that the human is really a robot.  You seem to be stacking the deck
against machines by saying that "humans use understanding to answer
questions."

>Unless you are prepared to argue
>that understanding is identical to syntactic manipulations, the test
>that proves a human understands tells you nothing about the computer
>(except that the syntactic manipulations are damn good).

Again, the test wouldn't *prove* anything, it would just provide
very strong evidence for the hypothesis that the subject understands,
i.e., grasps meanings.  I don't see why I must argue anything
about the mechanism of understanding in order to be confident in
my conclusion, irrespective of whether the subject is a human or
a machine.

>And _of course_ I define understanding to be a matter of internal
>self-awareness.

I cannot find a mention of "internal self-awareness" in any definition
of "understand" in a dictionary.
