From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff Tue Jan 28 12:16:09 EST 1992
Article 3032 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan22.222002.7137@aisb.ed.ac.uk>
Date: 22 Jan 92 22:20:02 GMT
References: <1992Jan21.174159.29963@oracorp.com>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 88

In article <1992Jan21.174159.29963@oracorp.com> daryl@oracorp.com writes:
>Jeff Dalton writes:
>
>>>Why does anyone have to _show_ that it is impossible? The Turing Test
>>>isn't a proof of intelligence, it is just supposed to constitute
>>>empirical evidence. 
>
>> Why is it such good evidence?  Because it works for people?
>> So what?
>
>You missed my point. I don't understand why you (or your hypothetical
>person) is willing to believe that "correct behavior without
>understanding" is impossible, while he is not willing to believe that
>"correct behavior is sufficient to indicate understanding". Since they
>are logically equivalent, why would you require more evidence in the
>latter case than in the former case?

I didn't miss your point.  However, you failed to quote any of
the parts of my message in which I attempted to clear up your
confusion, and it seems to me that you're just ignoring them,
along with all the other messages in which I tried to explain
this.

Let me restore the context and try explaining once more:

You wrote:

  I just don't get it. You believe that "conversation without
  understanding is impossible", but you don't believe "conversation
  implies the existence of understanding"? I don't understand the
  distinction.

I replied:

  The distinction is between believing X and being willing to
  rely on X as a test of something.

and I also wrote:

  Thinking something is true and thinking one has good reasons
  for thinking it true are two different things.  There's really
  no mystery about this.

I wrote at greater length in other messages, which I thought you
might have read, so I didn't explain at length again.  However,
it's really quite simple.

You say: look there are these two beliefs, and they're equivalent!
Or: How can anyone believe this and not believe that?  They're
equivalent!

But I am not comparing two beliefs.  I am comparing a belief
with a test, namely the Turing Test.  We can even apply your
equivalence between the beliefs, as you state them, and get
one belief.  Call it X.  To repeat: believing X and being willing
to rely on X as a test of something are not the same.  Or:
thinking something is true and thinking one has good reasons
for thinking it true are two different things.

But that whole approach to explanation may be too abstract,
so let's go back closer to the original dispute.

Someone could believe "conversation without understanding is
impossible".  They could also regard it as equivalent to
"conversation implies the existence of understanding".
But they might not think they have very good reasons for
holding such beliefs.  Maybe they're just intuitions.

On the other hand, maybe they think they have an ironclad argument to
the effect that computers can't understand.  When they put this
together with their belief, and a computer that can converse comes
along, they have to decide whether to go with their belief that
"conversation without understanding is impossible" (or that
"conversation implies the existence of understanding") or whether
to go with their argument (or, if you really want, their belief
that the argument is correct).

Now you might say how can anyone believe those things about 
conversation and understanding and also believe the argument
is correct.  Easy: they also believe computers cannot converse.
So why do they need their argument?  To convince other people
-- who think computers might be able to converse, perhaps --
that computers cannot understand.  Or to convince themselves,
when they have doubts about the connection between conversation
and understanding, that computers couldn't understand even if
they could converse.

-- jd


