From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aifh!bhw Tue Jan 21 09:26:39 EST 1992
Article 2830 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aifh!bhw
From: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <1992Jan17.161938.20312@aifh.ed.ac.uk>
Date: 17 Jan 92 16:19:38 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk> <1992Jan15.185342.11589@aifh.ed.ac.uk> <5993@skye.ed.ac.uk> <1992Jan16.122937.23838@aifh.ed.ac.uk> <6000@skye.ed.ac.uk>
Reply-To: bhw@aifh.ed.ac.uk (Barbara H. Webb)
Organization: Dept AI, Edinburgh University, Scotland
Lines: 88

In article <6000@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:

[arguing that "believing that understanding is necessary for
conversation" is not inconsistent with "believing Searle's Chinese Room
is a convincing argument against the Turing test being a valid way to
test for understanding" (I hope that's a fair statement of the position?)]

>Indeed, I may have realized that I'm not going to convince anyone in
>AI that there's some conversational behavior that computers can never
>produce.  That's why I like Searle's argument: it applies no matter
>how good the behavior is.  And, if it works, it works even if I'm wrong
>in thinking that they'll never get the required behavior.

[and if faced with a computer that could converse (which I am happy to
agree is currently non-existent)]

>Or, I could say this:
>
>  It turns out that I do think conversation requires understanding.
>  But it now looks like I must be wrong.  Evidently there's some
>  clever trick that does it.  And so it's a good thing I've been
>  relying on Searle's argument all these years, because it handles
>  this case.
>
>I suspect that what you're getting at is that if I think conversation
>without understanding is impossible, then I should accept the Turing
>Test, because whenever there was conversation there would (in my view)
>have to be understanding.  Well, if I could _show_ that conversation
>was impossible without understanding, then I should indeed accept
>the Turing Test.  But I can't show it's impossible, and neither can
>the people who want us to accept the TT right now.

Okay, so what you are saying is this: though you (and, as you originally
argued, Searle) may _now_ believe that conversation requires
understanding, _if_ someone were to come up with a computer program that
allows a computer to converse, you would _then_ apply Searle's Chinese
Room argument to demonstrate that the computer couldn't understand. You
would hence modify your belief to say that _human_ conversation involves
understanding, but that conversation without understanding is also
possible.
You would consider this a far more logical course than to instead
believe that the very complex processing that goes on in the computer
program is, like the complex processes in the brain, able to cause
understanding. The AI researcher/Turing test supporter might prefer to
think that what was embodied in this clever program is in some way an
_explanation_ of what we call understanding; but you and Searle would
assure them that in fact they had by some happy accident stumbled onto
a means of producing conversational ability that has no relation
whatsoever to how humans do it. 

>The arguments for accepting the TT right now do look rather like
>residual operationalism and behaviorism.  They often involve saying
>(or implying) that there's no way to test for "real understanding",
>that the question of "real understanding" is meaningless or
>unscientific, and so on.

Look, operationalism involves taking the 'operation of measurement' of
some attribute to literally _be_ that attribute: the classic example is
saying that the height of mercury in a tube _is_ the temperature of the
air. This is not a popular philosophical approach any more: people would
prefer to say that the height of mercury in the tube is _causally
connected_ to the attribute of temperature (the rate of movement of
molecules in the air); what's more, they can explain this causal
connection. Now if someone says that a good way to discover the
temperature is to look at the height of mercury in a tube, are they
being an operationalist? Probably not.

Operationalism was popular in psychology (some time ago!) and was a part
of behaviourism, which said that the external, observable, intelligent
behaviour of a person literally _is_ the intelligence. This approach was
overtaken by cognitivism, which says that the behaviour is causally
connected to the intelligence, which in turn is a product of the
processes of the brain. Cognitivists hope to find an explanation of this
causal connection; it is not yet understood, but they are working under
the assumption that it exists. Now if someone says that a good way
to detect intelligence is to observe the behaviour, are they being a
behaviourist? Probably not.

In other words, arguments for the Turing Test need not involve saying or
implying that there is nothing more to "understanding" than the
behaviour. Now I will admit that there are supporters of AI who do
reject "understanding" or "intentionality" as unscientific or
meaningless. They do so because they see such concepts as incoherent, or
as explaining nothing that cannot be explained by alternative concepts.
They don't do so because they are in the grip of a "behaviourist
fallacy". I have known a few dedicated behaviourists, and they wouldn't
touch AI with a barge pole.

BW


