From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!usc!elroy.jpl.nasa.gov!ames!ncar!noao!arizona!gudeman Tue Jan 28 12:16:27 EST 1992
Article 3053 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!usc!elroy.jpl.nasa.gov!ames!ncar!noao!arizona!gudeman
From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <11774@optima.cs.arizona.edu>
Date: 23 Jan 92 10:52:37 GMT
Sender: news@cs.arizona.edu
Lines: 45

In article  <42064@dime.cs.umass.edu> Joseph O'Rourke writes:
]In article <11722@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
]>It is impossible in principle for one agent to distinguish between
]>"knowledge" and "understanding" in another agent, because the
]>difference is only sensible to the agent who has (or doesn't have)
]>understanding.
]
]It is also impossible in principle to exclude the possibility that
]we were all created a minute ago, memories intact, as Russell pointed
]out.  I was responding to the question of why conversation could be 
]strong empirical evidence for understanding.  If you define understanding 
]to require unobservable "internal self-awareness," then of course
]it is impossible to establish beyond the shadow of doubt.

I am not saying that you can't establish the understanding of a
machine "beyond the shadow of a doubt", I'm saying you have no reason
at all to believe that a machine understands just because you can't
stump it with hard questions.  A human, yes.  A machine, no.  The
difference is that a human uses understanding to answer questions
while a machine uses syntactic manipulation.  Unless you are prepared
to argue that understanding is identical to syntactic manipulation,
the test that proves a human understands tells you nothing about the
computer (except that the syntactic manipulations are damn good).

And _of course_ I define understanding to be a matter of internal
self-awareness.  If understanding were only a matter of behavior, then
the statement "An entity that behaves as though it understands can be
assumed to understand" would be trivially true and uninteresting; and
I would have to be an idiot to argue against it.  Please give me a
little more credit than that.

Furthermore, if you want to argue the behaviorist definition of
understanding then please do so openly (and prepare to be assaulted
from all sides).  I'm getting really tired of seeing statements that
implicitly assume the behaviorist definition of understanding without
any explicit reference to it.  In my view, this is either a sign that
the writer is a naive behaviorist who doesn't understand the
significance of the view (and the serious problems associated with
it), or it is a sleazy rhetorical tactic that the writer uses to get
the logical advantages of behaviorism for arguing strong AI without
committing to the problems that go with it.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman