From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sdd.hp.com!uakari.primate.wisc.edu!ames!ncar!noao!arizona!gudeman Tue Jan 28 12:15:13 EST 1992
Article 2972 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sdd.hp.com!uakari.primate.wisc.edu!ames!ncar!noao!arizona!gudeman
From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <11722@optima.cs.arizona.edu>
Date: 21 Jan 92 22:48:33 GMT
Sender: news@cs.arizona.edu
Lines: 63

In article  <42032@dime.cs.umass.edu> Joseph O'Rourke writes:
]In article <6031@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
]>
]>I don't see any necessary connection between conversation and
]>understanding. 
]
]If the conversation were so unrestricted that you could turn it
]into an interrogation of a willing subject, then I think it could
]constitute as strong an indication of the existence of understanding
]as is conceivable.  You could probe deeply on specific topics,
]and ask a series of questions that could only be answered by
]someone who truly understands the topic. 

But the computer is following a program, a set of rules, to answer all of
your questions.  By hypothesis, then, all questions that you would ask
can be answered by purely mechanical means.  So it follows directly
from the hypothesis that there _are_ no questions that (in your words)
"could only be answered by someone who truly understands the topic".
If you think that such questions exist, that only shows that you do
not know the syntactic rules that might lead to the answers.
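
To give a toy picture of what "purely mechanical means" could look like
(just one conceivable shape of such rules, nothing more): a pattern-matching
responder whose every answer is a table lookup over syntactic rules, with no
state that anyone would be tempted to call understanding.

```python
import re

# Toy syntactic question-answerer: each rule pairs a regular-expression
# pattern with a canned (or mechanically computed) response.  Every
# answer follows from the rule table alone.
RULES = [
    (r"what is the capital of france", lambda m: "Paris."),
    # Even "reasoning" can be a rule: arithmetic by pattern extraction.
    (r"what is (\d+) plus (\d+)",
     lambda m: str(int(m.group(1)) + int(m.group(2)))),
    (r".*", lambda m: "Could you rephrase the question?"),
]

def answer(question):
    # Normalize the question, then apply the first matching rule.
    q = question.lower().strip("?! .")
    for pattern, respond in RULES:
        m = re.fullmatch(pattern, q)
        if m:
            return respond(m)
    return ""

print(answer("What is 2 plus 3?"))  # prints "5"
```

Nothing in the table "knows" what addition or Paris is; the answers fall
out of string manipulation, which is the point of the argument above.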

] It seems to me this
]would be totally convincing.  In the face of such a conversation,
]you could only entertain the possibility that despite appearances,
]there is no understanding, by using the word "understand" in a 
]new sense, a sense that demands a particular mechanism of understanding 
]with no observable consequences.

That is not a new sense, that is the original sense of the word.  It
is your sense that is new.  "Understanding" implies an internal
self-awareness that is not observable outside of the entity who
understands.  The power to answer questions implies only "knowledge".
It is impossible in principle for one agent to distinguish between
"knowledge" and "understanding" in another agent, because the
difference is only sensible to the agent who has (or doesn't have)
understanding.

When you ask questions that "could only be answered by someone who
truly understands the topic", what you mean is that you ask questions
that you believe will require the formulation of new knowledge.
Presumably, humans formulate new knowledge based on _understanding_ of
the old knowledge.  But if a computer is able to formulate new
knowledge, it must do so by a set of rules, for that is how computers
work, and a set of rules can always be reduced to knowledge.*

Thus when you hypothesize that a computer can pass the Turing test,
you hypothesize that knowledge alone is sufficient for passing it.
Fine.  But once you hypothesize that knowledge alone is sufficient for
passing the Turing test, what reason could you have for suggesting
that understanding is involved?  Knowledge alone suffices.  Why invent
other things that aren't needed?

I really wish someone would answer this question because I am quite
frankly confused about it.  And I didn't get any satisfactory response
from my "Turing Test Argument" article.

* Actually, I'm being generous in ascribing "knowledge" to computers.
Computers actually work strictly with "information", a much weaker
notion than "knowledge".
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman


