From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Jan 21 09:27:36 EST 1992
Article 2938 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence testing
Message-ID: <6024@skye.ed.ac.uk>
Date: 20 Jan 92 23:33:29 GMT
References: <1992Jan14.015806.23985@oracorp.com> <5982@skye.ed.ac.uk> <1992Jan15.185342.11589@aifh.ed.ac.uk> <5993@skye.ed.ac.uk> <1992Jan16.122937.23838@aifh.ed.ac.uk> <6000@skye.ed.ac.uk> <1992Jan17.161938.20312@aifh.ed.ac.uk> <6013@skye.ed.ac.uk> <1992Jan20.143839.4757@aifh.ed.ac.uk>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 286

In article <1992Jan20.143839.4757@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>In article <6013@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>>In article <1992Jan17.161938.20312@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
>
>I suggested what I thought Jeff was arguing in this thread:
>>>[arguing that "believing that understanding is necessary for
>>>conversation" is not inconsistent with "believing Searle's Chinese Room
>>>is a convincing argument against the Turing test being a valid way to
>>>test for understanding" (I hope that's a fair statement of the position?)]
>
>Jeff replied
>>Unfortunately, it's not.  In particular, I have not said that those
>>two beliefs, as stated above, are consistent.
>
>Well it appears I am having trouble understanding what your point is.

No kidding.

>I thought this thread got started because you disputed my implication
>that Searle believes that conversation without 'understanding'
>(intentionality) is possible. 

That's right.

>I had said that his Chinese
>Room argument _requires_ that something _could_ exist that had human
>behaviour but didn't have the corresponding 'mentality'. 

Even if we supposed that his argument does require it, that wouldn't
mean that Searle had to think it was possible.

But to say "requires" here seems somewhat strange.  

>You said
>that I was incorrect to say that Searle believed the behaviour was
>possible without 'understanding', that in fact he could well believe 
>the opposite, that the behaviour was not possible without understanding
>(Belief 1).

Ok.

> I took it as fair to say that _Searle_
>believes his Chinese Room argument to be a convincing way to reject the
>Turing test as a test for understanding (Belief 2).

I'm not sure exactly what it means to think X is a "convincing way
to reject" Y.

Perhaps Searle just thinks it's a good way to convince supporters
of Strong AI (and the general public) that computers, no matter
how good their performance, even if they could pass the TT, would
not be capable of understanding.  This point can be made independently
of whether or not such performance is possible.

For instance, one could argue that even if horses could run at 200
miles an hour they wouldn't be able to get from here to London in 10
minutes.  But you'd say that anyone who makes this argument must think
it is possible for horses to run at 200 mph!

>In my statement above I was careful to avoid ascribing either belief to
>you (as you have shown a tendency to resent this) and indeed I didn't
>even ascribe them to Searle (to save further distraction on that point
>too).

Resent it?  I don't think so.  In any case, I thought the issue
was what Searle (or someone else making the argument) could believe.
So why would it be a distraction?

>But I thought it was clear in your previous posts that you were
>trying to show that it would not be inconsistent for you (or Searle)
>to hold both beliefs (even if you were only holding them hypothetically).

I was trying to show that someone could have (Belief 1) and
nonetheless make Searle's argument about the Chinese Room.

You seem to think that anyone who makes an argument has to believe
certain things that are consistent with various suppositions used in
the argument.  But they don't.  They don't even have to think the
argument is a valid one.  Perhaps they are just arguing in the terms
their opponents accept.  (See various things by Feyerabend, for
example.)

This seems such an obvious point to me that I'm amazed we've spent
so much time on it.  

>If this was not what you were talking about I apologise, and look
>forward to your clarification of what you were talking about. 

Well, I've tried, again, above.

>Do you then agree that Searle's argument against the Turing Test
>_does_ require that something could behave intelligently yet lack
>intelligence?

I'm not sure what you mean.  I might agree, for some senses of
"require".  But you seem to think it implies various things I don't
agree with, so maybe I shouldn't agree here either.

>>Moreover, the complaint about the Turing Test is that using it
>>to show that "the system understands" is begging the question,
>>and not that the TT doesn't work.  It might turn out to work.
>>But we need more than that to show "the system understands".
>
>Again, I must have been mistaken in thinking we were discussing
>_Searle's_ complaint about the Turing Test, which, embodied in his
>Chinese Room example, is an argument that _no_ program would be sufficient
>for mentality no matter how well it could produce intelligent behaviour
>in a computer. 

The Chinese Room is to show that it isn't enough to implement
(sometimes he says "instantiate") a program.  Strong AI says implement
the program, get understanding.  The Room implements but doesn't
understand.

The Chinese Room is an argument against the idea that all you need for
understanding is to have the right program.  However, it also serves
as an example of something that has the right behavior but nonetheless
doesn't understand.  So when someone comes along and says "the system
understands", Searle says (among other things) something to the effect
that the only reason for supposing the Room understands is that it
behaves like it does; but, Searle says, that's begging the question
because one of the things at issue in the CR is the adequacy of the
Turing Test.

So you're right that Searle presents the CR as a counterexample to
the Turing Test.  However, I still think we should bear in mind that
something can be question-begging because it fails to demonstrate
its conclusion -- even if that conclusion later turns out to be true.

>>Well, you've taken what I actually said and turned it into a much
>>stronger claim: "would consider this a far more logical course".
>>Indeed, you keep doing that sort of thing, as if anyone who said
>>what I said must also believe something stronger.
>
>>_If_ I were sufficiently convinced by Searle's argument, then
>>I would consider it more logical to conclude that the computer
>>didn't understand than to believe that complex processing could
>>cause understanding.
>
>So, _are_ you convinced by Searle's argument? 

Here I was just using myself as an example: if I thought this, I could
think that, etc.  Hence the "_if_".  Evidently this was more confusing
than helpful.  The point here is that someone could indeed think it
more logical.

I'm not sure it matters what my actual views are on this question, but
if you want to know, here's what I wrote way back in November, before
we even started this exchange:

   In my opinion, the debate stands as follows:

   1. We don't know enough about brains, humans, programs, or what
      machines are capable of to say that machine intelligence is
      definitely possible, much less how it would work.

   2. We might nonetheless be able to show that Searle has failed
      to prove his case.  I'm inclined to think he has failed, though
      I don't think I could state, right now, just what arguments
      convinced me of this.

   3. Searle might nonetheless have refuted or damaged some of the
      arguments common in the AI community.  I think he has at least
      seriously damaged the Turing Test.  We might some day be able
      to show that the right behavior cannot be accomplished without
      real understanding, but we cannot do that now.

And that's still more or less what I think.

>When you first started
>posting on this topic you certainly gave the impression that you thought
>Searle's argument was an important point (if not the only point) against
>the Turing test.

No, I think there are independent arguments against the Turing Test.
However, the Chinese Room may make it easier for people to see these
problems.  When the debate was about whether computers could ever
equal human performance in, say, Chess, or in solving practical
problems in the world, the question of whether such performance
would amount to understanding tended not to come up at all.

> I don't see much purpose in arguing against someone who
>counters my arguments with "well, you might have rebutted that point, but I
>only proposed it hypothetically so you haven't succeeded in attacking my
>_real_ position".   

I think you've just misunderstood my use of the hypothetical.  I
don't think there's much hope of straightening it out through more
net articles, given how little we've been able to understand each
other.  But I guess I have to say this.  In many cases, the positions
you've attacked haven't even been ones I've made hypothetically.
Eg, I haven't argued that an "idea should be abandoned entirely
because we are not 100 percent dead-certain".

>You said:
>>>>The arguments for accepting the TT right now do look rather like
>>>>residual operationalism and behaviorism.  They often involve saying
>>>>(or implying) that there's no way to test for "real understanding",
>>>>that the question of "real understanding" is meaningless or
>>>>unscientific, and so on.
>
>I attempted to explain (in admittedly very vague terms) what
>operationalism is about, and why accepting the Turing test in no way
>requires adopting such a radical philosophical position:

Yes, I know you did.  Nonetheless, people do make those arguments.
Indeed, I think that many (on the net at least) support the Turing
Test just because talk of "understanding" and the like seems too
ill-defined and untestable.

>>>In other words, arguments for the Turing Test do not involve saying or
>>>implying that there is nothing more to "understanding" than the
>>>behaviour.
>
>And you replied:
>>Maybe so, but what I said was that they often involve saying (or
>>implying) that there's no way to test for "real understanding", that
>>the question of "real understanding" is meaningless or unscientific,
>>and so on. 
>
>A bit repetitive but I take it you have then accepted my original point
>that the Turing Test is not _inherently_ behaviourist, and that to
>dismiss the _test_ as "behaviourist and operationalist" is a mistake.

I don't think what Searle says about residual behaviorism and
operationalism is a mistake, unless it becomes confusing.  I
think some of the reasons why some people thought behaviorism
might be a good idea are at least very similar to some of the
reasons why some people think the Turing Test might be a good
idea.  But I don't think the Turing Test is something Skinner
might have thought of, for example.

>Rather, _some_ supporters of the test are being behaviourist when they
>argue "it is valid because the behaviour _is_ the intelligence", and you
>object to such arguments. Is this a fair statement of your position?

Of part of it, perhaps.  Some people have argued that we should
define "understanding" in terms of behavior and indeed take the
TT as virtually a definition of understanding.  Others complain
that "understanding" is hopelessly ill-defined and want to know
what "test" will show whether it is there or not (the implication
being that there isn't one).  Whether they'd actually say the
behavior "_is_" the intelligence, I don't know.  But then I'm
not trying to say they are behaviorist in the sense in which (I
think) you would use the word.

>(Why do I get the feeling you are going to say "no"?) If it is, which
>particular supporters of the Turing test do you have in mind as people
>who subscribe to this argument to support it?

Is this a request for information, or just a way to say "I bet
there aren't any"?

>BW
>
>P.S. You asked:
>>You said operationalism was part of behaviorism.  Is this
>>cognitivism also part of behaviorism?
>
>Are you joking? 

No.  It just wasn't clear to me what you were saying.  You said
that operationalism was part of behaviorism and that operationalism
was overtaken by cognitivism.  Well, where did that leave behaviorism?
Or maybe it was behaviorism that was overtaken by cognitivism.

>If you don't know enough about behaviourism to know that
>'cognitivism'  (considering cognitive and mental processes
>as valid causal factors that must be considered in explaining human
>psychology) was the rejection of it, then you really shouldn't be using
>the term.

It's been clear from the start, even with less against me, that you
don't think I should be using the term as I have.  Will it help if I
don't use it any more?  And maybe complain from time to time about
how Searle uses it?

BTW,

  In article <1991Dec9.140719.28708@aifh.ed.ac.uk> bhw@aifh.ed.ac.uk (Barbara H. Webb) writes:
  >One thing I find odd in discussions of the Turing test is that people
  >accuse it of being behaviourist. For example, Jeff Dalton, who I think
  >was responsible for the subject line; 

I was not responsible for the subject line.

-- jd
