From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!Sirius.dfn.de!fauern!unido!mcsun!uknet!edcastle!aisb!jeff Fri Jan 31 10:26:44 EST 1992
Article 3237 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!Sirius.dfn.de!fauern!unido!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan28.224548.9172@aisb.ed.ac.uk>
Date: 28 Jan 92 22:45:48 GMT
References: <1992Jan23.215711.6793@gpu.utcs.utoronto.ca> <1992Jan24.175613.7947@aisb.ed.ac.uk> <1992Jan28.163046.13482@gpu.utcs.utoronto.ca>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 122

In article <1992Jan28.163046.13482@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <1992Jan24.175613.7947@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <1992Jan23.215711.6793@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>>discussion may be futile. However let me ask what it would take to convince
>>>you that a machine understands?
>>
>>You could start by showing me the machine and telling me something
>>about how it works.
>>
>You answer my question with another question! 

Life's like that sometimes.

>What is the point of telling you how the machine works if you do
>not know what you are looking for? 

In my view, whether or not it "understands" will depend on how it
works.  If it works like humans do in the relevant ways (which
I, but not Searle, think may be functional rather than physical),
then I'd say it understands.  If it works in some different way,
it would depend on just what that way was.  If it's table lookup,
for instance, I'd say it doesn't understand.
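[A modern aside: the table-lookup machine mentioned above can be sketched in a few lines of Python. The table entries here are invented purely for illustration; the point is that every reply is a literal retrieval from a fixed table, with no processing of meaning at all — which is why one might deny that such a machine understands.]

```python
# A "table lookup" conversation machine: every reply is a canned
# string retrieved from a fixed table of anticipated inputs.
LOOKUP_TABLE = {
    "hello": "Hello there!",
    "do you understand me?": "Of course I understand you.",
    "what is two plus two?": "Four.",
}

def reply(utterance: str) -> str:
    """Return the canned response for an utterance, or a stock
    evasion when the utterance is not in the table."""
    return LOOKUP_TABLE.get(
        utterance.strip().lower(),
        "How interesting. Tell me more.",
    )
```

However fluent its canned answers look, the machine's behaviour is exhausted by the contents of the table.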

I don't know enough about humans or these non-existent machines
to tell you exactly how it will work out in all cases.  Fortunately,
since the machines don't exist, there isn't much urgency about deciding
whether they understand or not.

Nonetheless, I think we can answer the question of understanding
for some hypothetical machines, such as the table lookup one.
Maybe I can do this for your machine too, if you tell me how it
works.

>Since you
>do not know how understanding arises in the human brain (the only system you
>are convinced has an ability to understand), how are you going to tell, by looking
>inside the machine, whether it understands or not? 

You may recall that I said we don't yet know enough.

>In addition we may, and probably do, have different notions of what
>understanding is. 

Maybe so, but I may not care one way or the other about your notion,
just as you may not care about mine.  If you think mine is too
ill-defined to care about, then we'd probably be better off not
arguing about it.

>>Until we have such machines or at least know more about them, I
>>don't think this is a question that can be answered.
>>
>In other words: until we have a machine which understands or know how to build
>one, we can't say how to recognize if a machine understands. Is this what
>you are trying to say?

Why do you think we can answer this question for machines that don't
exist and whose programming we know almost nothing about?  Is it
because you think it doesn't matter how they're programmed (ie,
how the programs work)?  Since I think it does matter, I think we
need to know something about it.

On the other hand, if some of these machines were wandering around
we might have to do the best we can even if we didn't know about the
programming.  At least then we'd have some concrete evidence to go
on.

>It looks to me like an error in elementary logic.

Which error, exactly?

>>>But please give me a practical answer, and
>>>not some vague statements which have no practical value.
>>
>>Please note that there isn't much in the way of practical reasons to
>>want to answer the question.  We don't have any machines that we want
>>to test for understanding, so it's not like we'd better solve some
>>important problems about, say, their status as persons so that we
>>can draft appropriate legislation.
>>
>Didn't you point out that this is a philosophy group? So why is it important
>that there are no practical reasons to want to answer such a question? 

If you want something of practical value, I suggest you wait until
we know more about machines, understanding, and the rest.  For now,
you'll have to put up with impractical philosophy.

>I am submitting a 'thought experiment': say we have a machine which
>seems to understand (CR for instance). You say that to be convinced
>you have to know how it works.

That's right.

>Fine, I say, but tell me first what you have to find out to
>accept that the machine understands. 

How it works.  For more, see above.

>My question is only fair if I suspect you of bias (and I do)

Just what bias do you suspect me of?

>and think that _whatever_ you find you will not be happy, since you
>do not know how understanding arises (of course I do not know either
>and that is why I, grudgingly, accept operational test; that's the
>best I can do and that's what everybody does). Your refusal to specify
>what you are looking for only confirms my suspicion that even before
>looking inside you 'know' that the machine does not understand. I
>don't find this a fair discussion.

Well, then I guess I don't have the bias you suspect me of at all.
I think machines might be able to understand even if they don't pass
the Turing Test.  (I also think they might not understand even if they
do pass it.)

>>Let no one say there has been no demand for objective tests.
>>
>Right and in spite of this demand, no such tests have been devised.
>Doesn't this tell you that perhaps we may have to settle for the
>'operational test'?

Maybe we will have to settle for it, if some such machines appear.
But we don't have to settle for it now.


