From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!darwin.sura.net!Sirius.dfn.de!fauern!unido!mcsun!uknet!edcastle!aiai!jeff Tue Jan 21 09:26:52 EST 1992
Article 2855 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!darwin.sura.net!Sirius.dfn.de!fauern!unido!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <6013@skye.ed.ac.uk>
Date: 17 Jan 92 23:05:27 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk> <1992Jan15.185342.11589@aifh.ed.ac.uk> <5993@skye.ed.ac.uk> <1992Jan16.122937.23838@aifh.ed.ac.uk> <6000@skye.ed.ac.uk> <1992Jan17.161938.20312@aifh.ed.ac.uk>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 131

In article <1992Jan17.161938.20312@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>In article <6000@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>[arguing that "believing that understanding is necessary for
>conversation" is not inconsistent with "believing Searle's Chinese Room
>is a convincing argument against the Turing test being a valid way to
>test for understanding" (I hope that's a fair statement of the position?)]

Unfortunately, it's not.  In particular, I have not said that those
two beliefs, as stated above, are consistent.

Moreover, the complaint about the Turing Test is that using it
to show that "the system understands" is begging the question,
and not that the TT doesn't work.  It might turn out to work.
But we need more than that to show "the system understands".

>Okay, so what you are saying is that, though you (and, as you originally
>argued, Searle) may _now_ believe that conversation requires
>understanding, _if_ someone was to come up with a computer program that
>allows a computer to converse, you would _then_ apply Searle's Chinese
>Room argument to demonstrate that the computer couldn't understand, and
>hence modify your belief to say that _human_ conversation involves
>understanding, but conversation without understanding is also possible.

I was suggesting a possibility.  Someone who thought conversation
required understanding could change their mind.  Whether someone
actually would change their mind might well depend on how much
confidence they had in Searle's argument, what they knew about
how the program worked, and so on.

>You would consider this a far more logical course than to instead
>believe that the very complex processing that goes on in the computer
>program is, like the complex processes in the brain, able to cause
>understanding. 

Well, you've taken what I actually said and turned it into a much
stronger claim: "would consider this a far more logical course".
Indeed, you keep doing that sort of thing, as if anyone who said
what I said must also believe something stronger.

_If_ I were sufficiently convinced by Searle's argument, then
I would consider it more logical to conclude that the computer
didn't understand than to believe that complex processing could
cause understanding.  Moreover, the fact that the processing is
complex, and that complex processes occur in the brain, is not
much of an argument if it's supposed to be a reason for
concluding that the result is understanding.

>The AI researcher/Turing test supporter might prefer to
>think that what was embodied in this clever program is in some way an
>_explanation_ of what we call understanding; but you and Searle would
>assure them that in fact they had by some happy accident stumbled onto
>a means of producing conversational ability that has no relation
>whatsoever to how humans do it. 

The AI researchers/Turing test supporters can prefer whatever
they prefer.  I would not, of course, presume to suppose that
all their hard work was the equivalent of a happy accident or
even that it had "no relation whatsoever" to how humans do it.

>>The arguments for accepting the TT right now do look rather like
>>residual operationalism and behaviorism.  They often involve saying
>>(or implying) that there's no way to test for "real understanding",
>>that the question of "real understanding" is meaningless or
>>unscientific, and so on.
>
>Look, operationalism involves [Etc]

There's the shared attitude: that we can be scientific by basing
our conclusions on behavior, and that any talk of anything like
subjective experience is unscientific at best and possibly meaningless.

>Operationalism was popular in psychology (some time ago!) and was a part
>of behaviourism, which said that the external, observable, intelligent
>behaviour of a person literally _is_ the intelligence. This approach was
>overtaken by cognitivism which said that the behaviour is causally
>connected to the intelligence, which is a product of the processes of
>the brain. They hope to be able to find the explanation of this causal
>connection, but it's not yet understood; nevertheless they are working
>under the assumption that it exists. Now if someone says that a good way
>to detect intelligence is to observe the behaviour, are they being a
>behaviourist? Probably not.

You said operationalism was part of behaviorism.  Is this
cognitivism also part of behaviorism?

>In other words, arguments for the Turing Test do not involve saying or
>implying that there is nothing more to "understanding" than the
>behaviour.

Maybe so, but what I said was that they often involve saying (or
implying) that there's no way to test for "real understanding", that
the question of "real understanding" is meaningless or unscientific,
and so on.  But they may also be led to define understanding as
behavior, for which see below.

Now maybe the arguments _you_ would make don't say or imply anything
of the sort, but you are not the only person presenting arguments for
the Turing Test.

> Now I will admit that there are supporters of AI who do
>reject "understanding" or "intentionality" as unscientific or
>meaningless. 

It's nice to have one small point of agreement, at least.

>They do so because they see such concepts as incoherent, or
>as explaining nothing that cannot be explained by alternative concepts.

And they prefer the alternative concepts.

>They don't do so because they are in the grip of a "behaviourist
>fallacy". I have known a few dedicated behaviourists, and they wouldn't
>touch AI with a barge pole.

I like Searle's phrase "in the grip of an ideology".  (Ok, maybe
it's not original with Searle.)  The ideology that grips at least
some of them is somewhere in the area of the Positivist notion
that the unverifiable is meaningless and Popper's notion that
the distinguishing characteristic of a scientific claim is that it
can be falsified.  They want to be able to make a scientific
claim that a computer understands, and the TT looks like it will
let them do this.

That is, if the definition of "understands" is "passes the TT",
then "this computer understands" is a nice, meaningful, scientific
claim.

But this leads them to define understanding in terms of behavior.

-- jd
