From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!dime!orourke Tue Jan 28 12:17:51 EST 1992
Article 3153 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!dime!orourke
From: orourke@unix1.cs.umass.edu (Joseph O'Rourke)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <42277@dime.cs.umass.edu>
Date: 26 Jan 92 04:00:33 GMT
References: <11884@optima.cs.arizona.edu>
Sender: news@dime.cs.umass.edu
Reply-To: orourke@sophia.smith.edu (Joseph O'Rourke)
Organization: Smith College, Northampton, MA, US
Lines: 46

In article <11884@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>...
>You don't have to argue the mechanism of understanding, you have to
>argue that the mechanism for answering questions is understanding.
>Otherwise testing by question does not reveal understanding.

I would prefer to approach understanding as if it were a black-box
property of a subject, independent of the mechanism for answering
the questions.  After all, part of the point of this exercise is
to question whether the mechanisms are fundamentally different.
I want to approach understanding as a scientist would approach an unknown 
phenomenon.  I want to see if the results "seem" like understanding, and 
for this I do not need to assume that the mechanism for answering is
understanding.  Indeed, it is counter to the point of the questions
in the first place.

>(1) Humans answer questions by knowledge and understanding; therefore
>when a human answers a question we have evidence of knowledge and
>understanding in the human.
>
>(2) Machines answer questions by syntactic manipulation; therefore
>when a machine answers a question we have evidence of good syntactic
>manipulation.
>
>Those are the points we can both agree on.  ...

Of course I can hardly dispute this, but since I view these questions
as a scientific experiment trying to determine whether or not there is
understanding, I would prefer these:

(1) Humans answer questions by X.
(2) Machines answer questions by Y.

Might X and Y be fundamentally the same, or functionally equivalent, or
equally powerful, or both be sufficient for understanding?  It is clear
that X does not equal Y, and we sometimes label X "understanding" and
Y "syntactic manipulation."  But the point is to throw away those
labels temporarily, and to wonder where the experimental evidence points.
If the evidence is overwhelmingly in favor of "the machine understands
group theory" (using your example from another post), *except* for the
fact that you KNOW it is using syntactic manipulation to answer the
group theory questions, doesn't that make you wonder if perhaps
those syntactic manipulations, implemented on a real machine, somehow,
someway, are sufficient for understanding?  Or that the "knowledge
and understanding" of the human group theory expert are somehow, someway, 
present in the equally-competent machine?
