From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!elroy.jpl.nasa.gov!ncar!noao!arizona!gudeman Tue Jan 28 12:17:36 EST 1992
Article 3136 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!swrinde!elroy.jpl.nasa.gov!ncar!noao!arizona!gudeman
From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <11884@optima.cs.arizona.edu>
Date: 25 Jan 92 00:58:09 GMT
Sender: news@cs.arizona.edu
Lines: 148

In article  <42143@dime.cs.umass.edu> Joseph O'Rourke writes:
]In article <11774@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
]>I am not saying that you can't establish the understanding of a
]>machine "beyond the shadow of a doubt", I'm saying you have no reason
]>at all to believe that a machine understands just because you can't
]>stump it with hard questions.  A human, yes.  A machine, no.  The
]>difference is that humans use understanding to answer questions and a
]>machine uses syntactic manipulation. 
]
]I thought the issue was to attempt to gain overwhelming evidence
]that a subject understands, in a manner that does not assume anything
]about the subject's methods.

But you are assuming something about the subject's methods.  You are
assuming that the subject is using understanding rather than some
trick to answer questions.  If the subject is getting answers from
another party or uses a syntactic manipulation to answer all the
questions then there is no real understanding.

]You seem to be saying there is no
]subject-independent test of understanding:  what would be convincing
]evidence that a human understands is not convincing if someone reveals
]that the human is really a robot.

Not really.  I'm saying that merely by hypothesizing that a machine is
able to answer all of the questions, you are hypothesizing that the
questions do not really test understanding.  _By hypothesis_ then, the
test you propose does not test understanding.

I hasten to point out that my assertion does not come from a prior
assumption that machines don't understand, but from my view of
"understanding" and of how machines work.  I know that machines work
by taking input, shuffling it according to some set of rules, and
spitting the result out.  So if a machine can answer the questions,
then there is a set of rules that can be followed to turn the
questions automatically into answers.  But if such a set of rules
exist, then any question can be answered simply by following the
rules.
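The rule-following picture above can be sketched as a toy program. Nothing here is from the original post; the rule table and canned answers are invented for illustration, assuming only the claim that answers can come from matching the form of a question, with no reference to meanings.

```python
# Hypothetical sketch of "a set of rules that can be followed to turn
# the questions automatically into answers."  The machine matches the
# question's surface form against a table; it never consults meanings.

RULES = [
    ("What does it mean when a dog wags its tail?",
     "A wagging tail usually signals excitement or friendliness."),
    ("Is a Pekinese dangerous?",
     "No, a Pekinese is a small, harmless lapdog."),
]

def answer(question):
    """Produce an answer by form-matching alone; no understanding involved."""
    for pattern, response in RULES:
        if question == pattern:
            return response
    return "I don't know."
```

On this sketch, a correct answer shows only that the right rule was in the table, which is the point of the argument: passing the test is evidence about the rules, not about understanding.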

Such an answer does not show understanding of the subject (or even of
the question), it only shows correct application of the rules.  So
once you assume the existence of such a set of rules, then questions
no longer test the understanding of anything, human or machine.

Suppose such a program exists for talking about dogs.  Suppose then
that a person was able to memorize the rules of the algorithm and to
apply them in real time.  Now you give your "test of understanding"
and ask all sorts of questions about dogs, dog breeds, dog behavior,
dog care, etc.  The person gives nothing but deep, penetrating
answers, and you are fully convinced that this person is a world-class
expert on dogs.  Now your pet Pekinese comes bounding into the room
and this world-class dog expert screams "What is that!?" as he leaps
for the safety of a nearby table.  You answer, in some surprise no
doubt, that it is a Pekinese.  He goes on in a panic, "Is it
dangerous?  What's that thing waving in back of it?  Does that mean
it's going to attack?"

Does this person understand about dogs?

I suspect some people will answer that he does understand about dogs,
and that he just doesn't have sensory knowledge of dogs.  But that is
disingenuous.  For when this person talks about dogs, he doesn't even
have an abstract idea of "dog" in his mind.  All he has in mind is
the sentence "What does it mean when a dog wags its tail?" and when he
answers the question he doesn't even consider the meanings of the
words, only their form and order.  He is, in essence, talking about
sentences, not dogs.  And although he arguably has great understanding
of sentences about dogs, he has no understanding of dogs themselves.

When you tell this person that your dog is a Pekinese, he doesn't even
associate that with the "dog" word he knows.  To him "Pekinese" is
just a word without meaning, a symbol without any reference except the
set of rules it affects in the syntactic transformation.  He has a
rule

  Pekinese => small, yippy, obnoxious

but even the words "small", "yippy", and "obnoxious" have no meaning
other than the further transformations they imply.
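This situation can be sketched as well: a hypothetical rule table in which every symbol rewrites only into further symbols, so chasing a word's "meaning" never leaves the table. The transformations below are invented for illustration, built around the "Pekinese => small, yippy, obnoxious" rule above.

```python
# Hypothetical illustration: each word's only "meaning" is the further
# transformations it implies.  Expanding a symbol just yields more
# symbols; nothing ever refers outside the rule system.

TRANSFORMS = {
    "Pekinese":  ["small", "yippy", "obnoxious"],
    "small":     ["little"],
    "yippy":     ["noisy"],
    "obnoxious": ["unpleasant"],
}

def expand(symbol, depth=2):
    """Rewrite a symbol by the rules; the chase never leaves the table."""
    if depth == 0 or symbol not in TRANSFORMS:
        return [symbol]
    result = []
    for s in TRANSFORMS[symbol]:
        result.extend(expand(s, depth - 1))
    return result
```

However deep the expansion runs, the output is always more words from the same table, which is the sense in which "Pekinese" here is a symbol without any reference.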

]  You seem to be stacking the deck
]against machines by saying that "humans use understanding to answer
]questions."

Only because humans are less likely to cheat by using a horrendously
complex set of syntactic rules to simulate understanding.

]>Unless you are prepared to argue
]>that understanding is identical to syntactic manipulations, the test
]>that proves a human understands tells you nothing about the computer
]>(except that the syntactic manipulations are damn good).
]
]Again, the test wouldn't *prove* anything, it would just provide
]very strong evidence for the hypothesis that the subject understands,

Let's not get pedantic about the word "prove".  As I said, the test
provides no evidence at all once you hypothesize that the questions
could be answered by strictly syntactic transformation.

]i.e., grasps meanings.  I don't see why I must argue anything
]about the mechanism of understanding in order to be confident in
]my conclusion, irrespective of whether the subject is a human or
]a machine.

You don't have to argue the mechanism of understanding, you have to
argue that the mechanism for answering questions is understanding.
Otherwise testing by question does not reveal understanding.

(1) Humans answer questions by knowledge and understanding; therefore
when a human answers a question we have evidence of knowledge and
understanding in the human.

(2) Machines answer questions by syntactic manipulation; therefore
when a machine answers a question we have evidence of good syntactic
manipulation.

Those are the points we can both agree on.  Now if you want to claim
that your test shows understanding on the part of the computer, your
options are limited (as far as I see) to the following possibilities:

(A) Show that understanding is the same as syntactic manipulation.

(B) Show that computers answer questions through understanding
regardless of any other mechanism they may have.

(C) Show why question-answering is a good test for understanding in a
computer even though computers don't answer questions by
understanding.

(D) Define "understanding" as the ability to answer questions.  (Of
course you are no longer talking about the same thing I am, your side
of the argument becomes trivially true, and sentence (1) becomes
meaningless.)

]>And _of course_ I define understanding to be a matter of internal
]>self-awareness.
]
]I cannot find a mention of "internal self-awareness" in any definition
]of "understand" in a dictionary.

Actually, the self-awareness is not really part of the argument except
to show why understanding is not identical to syntactic manipulation.
If you think understanding _is_ identical to syntactic manipulation (A)
then please say so specifically.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman


