From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor Tue Jan 28 12:18:41 EST 1992
Article 3214 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Intelligence Testing
Message-ID: <1992Jan28.163046.13482@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <11775@optima.cs.arizona.edu> <1992Jan23.215711.6793@gpu.utcs.utoronto.ca> <1992Jan24.175613.7947@aisb.ed.ac.uk>
Date: Tue, 28 Jan 1992 16:30:46 GMT

In article <1992Jan24.175613.7947@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <1992Jan23.215711.6793@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>discussion may be futile. However, let me ask what it would take to convince
>>you that a machine understands?
>
>You could start by showing me the machine and telling me something
>about how it works.
>
You answer my question with another question! What is the point of telling you
how the machine works if you do not know what you are looking for? Since you
do not know how understanding arises in the human brain (the only system you
are convinced has the ability to understand), how are you going to tell, by
looking inside the machine, whether it understands or not? In addition, we may,
and probably do, have different notions of what understanding is.

>Until we have such machines or at least know more about them, I
>don't think this is a question that can be answered.
>
In other words: until we have a machine which understands, or know how to build
one, we can't say how to recognize whether a machine understands. Is this what
you are trying to say? It looks to me like an error in elementary logic.

>>But please give me a practical answer, and
>>not some vague statements which have no practical value.
>
>Please note that there isn't much in the way of practical reasons to
>want to answer the question.  We don't have any machines that we want
>to test for understanding, so it's not like we'd better solve some
>important problems about, say, their status as persons so that we
>can draft appropriate legislation.
>
Didn't you point out that this is a philosophy group? So why is it important
that there are no practical reasons to want to answer such a question?
I am proposing a 'thought experiment': say we have a machine which seems to
understand (the Chinese Room, for instance). You say that to be convinced you
have to know how it works. Fine, I say, but first tell me what you would have
to find out in order to accept that the machine understands. My question is only
fair if I suspect you of bias (and I do) and think that _whatever_ you find, you
will not be happy, since you do not know how understanding arises (of course, I
do not know either, and that is why I, grudgingly, accept an operational test;
that's the best I can do and that's what everybody does). Your refusal to
specify what you are looking for only confirms my suspicion that even before
looking inside you 'know' that the machine does not understand. I don't find
this a fair discussion.

>>I am not a hard-core AI supporter, but I have yet to see a convincing anti-AI
>>argument (i.e. that a machine cannot be made to duplicate all functions of
>>a human mind). All this talk about self-awareness, feelings, pain etc. etc.
>>is a waste of time till we have _objective_ ways of detecting them.
>
>Let no one say there has been no demand for objective tests.
>
Right, and in spite of this demand, no such tests have been devised. Doesn't
this tell you that perhaps we have to settle for the 'operational test'?
>-- jd


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca
