From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!gatech!ncar!noao!arizona!gudeman Fri Jan 31 10:27:16 EST 1992
Article 3294 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!gatech!ncar!noao!arizona!gudeman
>From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <12067@optima.cs.arizona.edu>
Date: 30 Jan 92 12:33:52 GMT
Sender: news@cs.arizona.edu
Lines: 145

In article  <384@tdatirv.UUCP> Stanley Friesen writes:
]In article <11884@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
]|But you are assuming something about the subject's methods.  You are
]|assuming that the subject is using understanding rather than some
]|trick to answer questions.
]
]No, we are assuming that the test is sufficiently rigorous to reveal all
]likely forms of cheating.  Passing the test does suggest (not prove)
]that the subject does understand.

OK, for the 83rd time:  The hypothesis of the test is that it is
passed by a computer.  This hypothesis directly entails that the test
can be passed by purely syntactic means.  Therefore passing the test
does not suggest that the subject understands unless you are going to
assert that understanding is the same as syntax.  If you don't believe
this equivalence, then the test tells you nothing at all about
understanding.  More specifically, to make the test convincing you
have to convince me that any syntactic structure that generates the
right sentences must give rise to the experience of understanding.
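To make "purely syntactic means" concrete, here is a toy sketch (purely illustrative -- not anyone's actual proposal in this thread) of an ELIZA-style responder in Python.  It produces grammatical English answers by surface pattern substitution alone, with no representation of meaning anywhere in the program:

```python
# A toy "purely syntactic" question-answerer: it matches surface
# patterns in the input and fills response templates.  Nothing in
# the program models what the words are about.
import re

# (pattern, response-template) pairs, tried in order.
RULES = [
    (r"do you understand (.*)\?", r"Of course I understand \1."),
    (r"what is (.*)\?", r"\1 is a difficult thing to define."),
    (r"(.*)", r"Tell me more about that."),  # catch-all
]

def respond(question: str) -> str:
    """Answer by matching surface patterns; no semantics involved."""
    q = question.strip().lower()
    for pattern, template in RULES:
        m = re.fullmatch(pattern, q)
        if m:
            return m.expand(template)
    return "I see."

print(respond("Do you understand Chinese?"))
# -> Of course I understand chinese.
```

However elaborate the rule set becomes, the mechanism stays the same: symbol shuffling.  The question in dispute is whether any amount of such shuffling, scaled up until the output is indistinguishable from a person's, would thereby give rise to understanding.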

]Why does 'syntactic manipulation' not generate 'understanding', and more

What reason do I have to believe that syntactic manipulation _does_
generate understanding?  If you are going to blatantly claim the
equivalence of two quite dissimilar things, the burden of proof is on
you.

]important, how is it different than 'semantic manipulation' (or whatever
]terminology you prefer)?  What is the recognition criterion that allows me
]to say something is 'semantic' rather than 'syntactic'?  Without a way to
]tell the difference, the distinction seems useless.

OK, for the 78th time: you can tell the difference between semantics
and syntax in your own thoughts by means of introspection.  The
process of thinking "about" something is common to all people, and
everyone knows that the experience exists.  If you are going to claim
that syntax is the same as semantics, you have to show how this
experience can arise through purely syntactic means.

]|... machines work
]|by taking input, shuffling it according to some set of rules, and
]|spitting the result out...
]
]So?  How can you be sure that human minds do not operate this way also?

It is irrelevant whether human minds work this way or not.  The Turing
test claim amounts to the claim that if a machine can pass the Turing
test, then the machine experiences semantics, understanding, consciousness,
or whatever.  But even if you _could_ show that it is possible for
some syntax to give rise to semantics, that does not prove that any
syntax that produces the same set of sentences must also give rise to
semantics.

]But, if human minds also operate as machines (of whatever sort), then
]your argument results in the conclusion that there is no such thing as
]understanding.

No, you misunderstood my argument.  I was not arguing that syntax
cannot give rise to semantics.  I was arguing that you don't _know_
that syntax can give rise to semantics, and that even if you assume
that it can, you still have no reason to suppose that syntax must
always give rise to semantics just because the syntax happens to mimic
the behavior of humans.

]You are operating on the assumption that we are not machines.

No I'm not.  I'm making no assumptions about the nature of human
cognition.  It is the pro-AIers who are making assumptions.  Not only
about the nature of the mind, but also about the nature of the
relationship between syntax and semantics.  Their assumptions about
the nature of the mind have a long and distinguished history, and
there are some arguments to support the assumptions.  So although I
don't think many of the pro-AIers on this net could make those
arguments, I am not picking on that particular assumption.

I'm picking on this assumption about the relationship between syntax
and semantics.  Namely, the assumption that any syntax that generates
an "intelligent" appearing language must give rise to consciousness.
I see no motivation whatsoever for this assumption.

]But in a sense we are not.  The idea was that we would generate a test that
]tested for understanding, and carefully constructed to show up cheating.

And at the same time you hypothesise a situation in which the test can
be passed by cheating.

]To go back to your 'Pekinese' example, what if the test included actual
]interactions with dogs?  (I.e. a practicum rather than just an oral or
]written test).  At what point does the test become sufficiently strong
]that deciding it is inadequate becomes an unacceptable alternative
]compared to deciding that the machine understands?

You are assuming without logical justification that there is a way to
detect the experience of semantics by observing behavior.  Given that
all behavior could be generated by completely automatic processing
(the hypothesis of the question), why should I believe that the
questions and other tests demonstrate anything more profound than very
clever automation?

]|(1) Humans answer questions by knowledge and understanding; therefore
]|when a human answers a question we have evidence of knowledge and
]|understanding in the human.
]|
]|(2) Machines answer questions by syntactic manipulation; therefore
]|when a machine answers a question we have evidence of good syntactic
]|manipulation.
]|
]|Those are the points we can both agree on.
]
]Not entirely.  I would say that (2) begs the question.  It is what we are
]trying to determine.  (At least assuming there is a clear distinction between
]syntax and semantics).  We *don't* know all possible mechanisms by which a
]machine might attempt to pass a test, so (2) is ahead of itself.

No, (2) is made in complete absence of any assumptions about the
nature of semantics.  It only mentions syntax.  And quite the
contrary, we _do_ know that machines are restricted to syntax.  This
doesn't mean that machines cannot use semantics; it only means that
_if_ machines use semantics, then semantics is reducible to syntax.

]| Now if you want to claim
]|that your test shows understanding on the part of the computer, your
]|options are limited (as far as I see) to the following possibilities:
]|
]|(A) Show that understanding is the same as syntax manipulation.
]|
]|(B) Show that computers answer questions through understanding
]|regardless of any other mechanism they may have.
]|
]|(C) Show why question-answering is a good test for understanding in a
]|computer even though computers don't answer questions by
]|understanding.
]
]Not quite.  I would revise C slightly, since it includes as an assumption
]the thing we are trying to test.

The choices were meant to be mutually exclusive.  If you choose (C),
that implies that you did not choose (B).  So the negation of (B) is a
legitimate assumption for (C).  Actually, I suspect that (C) is the
choice of most of the more sophisticated pro-AI thinkers.  (A) is
clearly impossible and (B) entails problems with causality.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman