From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima Tue Jan 28 12:18:23 EST 1992
Article 3192 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <384@tdatirv.UUCP>
Date: 27 Jan 92 21:52:28 GMT
References: <11884@optima.cs.arizona.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 167

In article <11884@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
|In article  <42143@dime.cs.umass.edu> Joseph O'Rourke writes:
|]I thought the issue was to attempt to gain overwhelming evidence
|]that a subject understands, in a manner that does not assume anything
|]about the subject's methods.
|
|But you are assuming something about the subject's methods.  You are
|assuming that the subject is using understanding rather than some
|trick to answer questions.

No, we are assuming that the test is sufficiently rigorous to reveal all
likely forms of cheating.  Passing the test then does suggest (though not
prove) that the subject understands.

| If the subject is getting answers from
|another party or uses a syntactic manipulation to answer all the
|questions then there is no real understanding.

I agree about the first, but remain to be convinced about the second.

Why does 'syntactic manipulation' not generate 'understanding', and more
important, how does it differ from 'semantic manipulation' (or whatever
terminology you prefer)?  What recognition criterion allows me to say that
something is 'semantic' rather than 'syntactic'?  Without a way to tell the
difference, the distinction seems useless.

|I hasten to point out that my assertion does not come from a prior
|assumption that machines don't understand, but from my view of
|"understanding" and of how machines work.  I know that machines work
|by taking input, shuffling it according to some set of rules, and
|spitting the result out.  So if a machine can answer the questions,
|then there is a set of rules that can be followed to turn the
|questions automatically into answers.  But if such a set of rules
|exist, then any question can be answered simply by following the
|rules.
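
For concreteness, here is the sort of pure rule-following I take this to
mean.  (A toy sketch of my own in Python; the rule table and the sample
questions are invented for illustration, not taken from David's post.)

# A purely syntactic question-answerer: patterns in, templates out,
# with no model of what any of the words mean.
import re

RULES = [
    (re.compile(r"is a (\w+) a kind of dog\??$", re.I),
     "Yes, a {0} is a kind of dog."),
    (re.compile(r"what color is a (\w+)\??$", re.I),
     "A {0} is brown."),
]

def answer(question):
    for pattern, template in RULES:
        m = pattern.match(question.strip())
        if m:
            return template.format(*m.groups())
    return "I have no rule for that question."

print(answer("Is a Pekinese a kind of dog?"))   # right answer, by rule alone
print(answer("Is a teapot a kind of dog?"))     # confident nonsense - the
                                                # kind a stringent test catches

Whether piling on ever-richer rules could ever amount to 'understanding' is,
of course, exactly the question at issue.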

So?  How can you be sure that human minds do not operate this way also?
Our brains are composed of electro-chemical transducers that are as much
constrained by the laws of physics and chemistry as any other machine.

Or to put it differently, in what way are we *not* machines ourselves?
[I do not mean how we differ from currently producible machines; I
mean how we differ in principle from any machine.]

|Such an answer does not show understanding of the subject (or even of
|the question), it only shows correct application of the rules.  So
|once you assume the existence of such a set of rules, then questions
|no longer test the understanding of anything, human or machine.

But if human minds also operate as machines (of whatever sort), then
your argument leads to the conclusion that there is no such thing as
understanding.

You are operating on the assumption that we are not machines.  I consider
this a questionable assumption.  True, we are not *digital* *computers*,
but that is only a small subset of all machines.

|Only because humans are less likely to cheat by using a horrendously
|complex set of syntactic rules to simulate understanding.

Oh really?  How do you know that?  How do you think human minds *do* work?
And what about human minds is unavailable to machines?

|]Again, the test wouldn't *prove* anything, it would just provide
|]very strong evidence for the hypothesis that the subject understands,
|
|Let's not get pedantic about the word "prove".  As I said, the test
|provides no evidence at all once you hypothesize that the questions
|could be answered by strictly syntactic transformation.

But in a sense we are not making that hypothesis.  The idea was to devise a
test for understanding, carefully constructed to show up cheating.

Now, if the system passes the test, do we question the test or the assumption
that machines cannot understand?  Which is the weaker assumption?  I am not
sure; it would depend greatly on the nature of the test.

To go back to your 'Pekinese' example, what if the test included actual
interactions with dogs?  (I.e. a practicum rather than just an oral or
written test.)  At what point does the test become sufficiently strong
that deciding it is inadequate becomes an unacceptable alternative
compared to deciding that the machine understands?

True, this is no longer exactly the Turing Test, but I think we mostly agreed
that the TT is not really sufficient anyway.  [Hmm, in fact, as I remember
it, the current thread has already brought in academic testing methods, which
most certainly do include practicums - labs and such.]

The point is that we did *not* assume anything like you suggest.  We *might*
conclude that, but only if the test were insufficiently strong to begin with.

|]i.e., grasps meanings.  I don't see why I must argue anything
|]about the mechanism of understanding in order to be confident in
|]my conclusion, irrespective of whether the subject is a human or
|]a machine.
|
|You don't have to argue the mechanism of understanding, you have to
|argue that the mechanism for answering questions is understanding.
|Otherwise testing by question does not reveal understanding.

I guess I believe that a sufficiently stringent test can be devised, one that
leaves very little scope for answering appropriately without understanding.

If so, then an appropriate test response becomes evidence tending to support
the hypothesis of understanding.

To eliminate this support, it would have to be a practical possibility - not
just a theoretical one - to devise a method of cheating that passes the test.
[Many things that are possible in theory cannot be achieved in practice.]

In short, I take such testing as 'circumstantial' evidence, not direct
evidence.
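
To make that concrete, here is a quick Bayesian sketch (in Python, with all
the probabilities invented purely for illustration):

# How strongly a pass supports the hypothesis of understanding depends
# on how easily a cheat could pass the same test.
p_understand = 0.5              # prior: agnostic about the subject
p_pass_given_understand = 0.95  # an understanding subject almost always passes
p_pass_given_cheat = 0.01      # a stringent test: cheating rarely passes

p_pass = (p_pass_given_understand * p_understand
          + p_pass_given_cheat * (1 - p_understand))
posterior = p_pass_given_understand * p_understand / p_pass
print(round(posterior, 2))      # 0.99: strong, though still circumstantial

# If cheating were a *practical* possibility (say p_pass_given_cheat = 0.5),
# the posterior would drop to about 0.66 and a pass would tell us far less.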

|(1) Humans answer questions by knowledge and understanding; therefore
|when a human answers a question we have evidence of knowledge and
|understanding in the human.
|
|(2) Machines answer questions by syntactic manipulation; therefore
|when a machine answers a question we have evidence of good syntactic
|manipulation.
|
|Those are the points we can both agree on.

Not entirely.  I would say that (2) begs the question.  It is what we are
trying to determine.  (At least assuming there is a clear distinction between
syntax and semantics).  We *don't* know all possible mechanisms by which a
machine might attempt to pass a test, so (2) is ahead of itself.

| Now if you want to claim
|that your test shows understanding on the part of the computer, your
|options are limited (as far as I see) to the following possibilities:
|
|(A) Show that understanding is the same as syntax manipulation.
|
|(B) Show that computers answer questions through understanding
|regardless of any other mechanism they may have.
|
|(C) Show why question-answering is a good test for understanding in a
|computer even though computers don't answer questions by
|understanding.
|
|(D) Define "understanding" as the ability to answer questions.  (Of
|course you are no longer talking about the same thing I am, your side
|of the argument becomes trivially true, and sentence (1) becomes
|meaningless.)

Not quite.  I would revise C slightly, since it includes as an assumption
the thing we are trying to test.

|Actually, the self-awareness is not really part of the argument except
|to show why understanding is not identical to syntactic manipulation.
|If you think understanding _is_ identical to syntactic manipulation (A)
|then please say so specifically.

Actually, I don't really think that.  But I am not sufficiently sure about
the issue to decide categorically that syntax cannot generate understanding.

I *also* think it is premature to state categorically that machines are
limited to syntactic manipulation, since we do not yet have a clear idea
of how the other sorts of 'manipulation' work internally.  I think it entirely
likely that there is no mechanism in a human that cannot be duplicated by
some machine (not necessarily a digital computer, just some machine).
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)