From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Fri Jan 31 10:26:56 EST 1992
Article 3259 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Intelligence Testing
Message-ID: <1992Jan29.184739.28091@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Jan23.215711.6793@gpu.utcs.utoronto.ca> <1992Jan24.175613.7947@aisb.ed.ac.uk> <1992Jan28.163046.13482@gpu.utcs.utoronto.ca> <1992Jan28.224548.9172@aisb.ed.ac.uk>
Date: Wed, 29 Jan 1992 18:47:39 GMT

In article <1992Jan28.224548.9172@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <1992Jan28.163046.13482@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>
>>Since you
>>do not know how understanding arises in the human brain (the only system you
>are convinced has an ability to understand), how are you going to tell, by looking
>>inside the machine, whether it understands or not? 
>
>You may recall that I said we don't yet know enough.
>
So you admit that looking inside may be useless, since you do not know how to
recognize understanding? And that there is no chance that, by looking inside,
you will decide that a machine understands? In other words, after looking
inside you will either declare 'it does not understand (e.g. because it works
by table look-up)' or 'I do not know'. Hence, by demanding to see what's inside,
you already exclude the possibility of a positive verdict - hardly a fair
proposition.

>>>Until we have such machines or at least know more about them, I
>>>don't think this is a question that can be answered.
>>>
>>In other words: until we have a machine which understands, or know how to
>>build one, we can't say how to recognize whether a machine understands. Is
>>this what you are trying to say?
>
>Why do you think we can answer this question for machines that don't
>exist and whose programming we know almost nothing about?  Is it
>because you think it doesn't matter how they're programmed (ie,
>how the programs work)?  Since I think it does matter, I think we
>need to know something about it.
>
You do not answer my question (or rather you answer it with yet another
question - don't you think that looks like avoiding an answer?), assuming I
have correctly paraphrased your opinion (see below).

>On the other hand, if some of these machines were wandering around
>we might have to do the best we can even if we didn't know about the
>programming.  At least then we'd have some concrete evidence to go
>on.
>
Which machines (see below)?
>>It looks to me like an error in elementary logic.
>
>Which error, exactly?
>
Let me repeat it: you seem to be saying that we can't say how to recognize
whether a machine understands until we have a machine which understands. Isn't
that a contradiction in itself? Please look again at the exchange above if you
claim that I misrepresent your opinion, and show me where this is the case.
Now, if we do not know how to recognize whether a machine understands, we can't
have 'some of these machines wandering about', since we can't know that they
are 'these' machines. Am I missing something here? What is your resolution of
this obvious contradiction?

>>I am submitting a 'thought experiment' : say we have a machine which
>>seems to understand (CR for instance). You say that to be convinced
>>you have to know how it works.
>
>That's right.
>
>>Fine, I say, but tell me first what do you have to find out to
>>accept that the machine understands. 
>
>How it works.  For more, see above.
>
That's not good enough. I want to know if there is _any_ chance that you will
give a positive answer, so I want to know what, upon seeing it, would make you
conclude that a machine understands. Since you avoid specifying this, I have to
conclude that _whatever_ you see, you will not utter the words 'OK, it
understands'. Please state clearly under what circumstances (i.e. upon seeing
what) you would be convinced (even partly) that a machine understands. Another
question won't do. Alternatively, admit that there are no such circumstances
and that your answer, _whatever_ you see, can only be negative, or at best
'I do not know'.

>>My question is only fair if I suspect you of bias (and I do)
>
>Just what bias do you suspect me of?  
>
>>and think that _whatever_ you find you will not be happy, since you
>>do not know how understanding arises (of course I do not know either
>>and that is why I, grudgingly, accept operational test; that's the
>>best I can do and that's what everybody does). Your refusal to specify
>>what you are looking for only confirms my suspicion that even before
>>looking inside you 'know' that the machine does not understand. I
>>don't find this a fair discussion.
>
>Well, then I guess I don't have the bias you suspect me of at all.
>I think machines might be able to understand even if they don't pass
>the Turing Test.  (I also think they might not understand even if they
>do pass it.)
>
I am happy (:-)) to see that you have no anti-machine bias, but you are biased
in approaching the thought experiment above - you know in advance that under NO
circumstances (as things stand at present) can this experiment produce
a positive answer on your part. So there is no point in doing the experiment
in the first place. I do not think you can prove that humans do not do a very
clever look-up. I am sure one can point to many situations in which humans
reply to a situation (be it a conversation or not) using a table look-up
- like 'what am I supposed to say in this situation?' or 'when the car skids
to the left, which way do I turn the steering wheel?' etc. Maybe understanding
is just an ingenious way of indexing the information and doing the search in
the subconscious. Do we know enough to be sure that this is not the case?
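The look-up picture above can be sketched in a few lines of (modern) Python.
This is purely my own illustration, with made-up situations and canned replies,
and of course no claim that the program understands anything - it only shows
how indexed table look-up with a fallback reply works:

```python
# Sketch of a table look-up responder (illustrative only).
# Situations are indexed by a crude key: the set of their lowercased words.

def build_index(table):
    """Index canned responses by the set of words in each situation."""
    index = {}
    for situation, response in table:
        key = frozenset(situation.lower().split())
        index[key] = response
    return index

def respond(index, situation):
    """Look the situation up; fall back to a stock reply if it is not indexed."""
    key = frozenset(situation.lower().split())
    return index.get(key, "I do not know")

# Hypothetical table of situations and responses.
table = [
    ("the car skids to the left", "steer into the skid"),
    ("someone says hello", "say hello back"),
]
index = build_index(table)
print(respond(index, "The car skids to the left"))  # steer into the skid
print(respond(index, "What is understanding?"))     # I do not know
```

The point of the fallback is exactly the one at issue: a pure look-up machine
either finds an indexed answer or it does not, and no amount of inspecting the
table tells you whether the indexing scheme amounts to understanding.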
>>>Let no one say there has been no demand for objective tests.
>>>
>>Right and in spite of this demand, no such tests have been devised.
>>Doesn't this tell you that perhaps we may have to settle for the
>>'operational test'?
>
>Maybe we will have to settle for it, if some such machines appear.
>But we don't have to settle for it now.

And how would we know that they are 'some such machines'? (see above).
-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca