Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sdd.hp.com!elroy.jpl.nasa.gov!ames!olivea!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan24.175613.7947@aisb.ed.ac.uk>
Date: 24 Jan 92 17:56:13 GMT
References: <11775@optima.cs.arizona.edu> <1992Jan23.215711.6793@gpu.utcs.utoronto.ca>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 68

In article <1992Jan23.215711.6793@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <11775@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>>In article  <1992Jan22.203042.453@gpu.utcs.utoronto.ca> Andrzej Pindor writes:
>>
>>]On a more serious note - has it ever happened to you that you had this
>>]self-awareness of understanding and then decided that you really did not
>>]understand the problem in question? I am sure it has. Try to remember
>>]now: what made you realise that you did not understand?
>>
>>This is irrelevant.  I never said that the ability to answer questions
>>in humans cannot be used to judge understanding.  In fact, I
>>specifically said that it could.
>
>I did not realise that you insist on applying different criteria to establish
>understanding in a human and in a machine (which you have stated clearly
>in another posting). In the case of such a severe anti-machine bias (:-))

I don't think there's anything _necessarily_ wrong with using
different criteria.

Advocates of the Turing Test often make it sound as if we won't
know anything at all about how the programs in the machines go
about passing the test.  We won't know whether they have some
immense table of conversational situations, or whether they
generate responses rather than looking them up, or whatever.
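
To make that contrast concrete, here is a toy sketch, in Python, of
the two kinds of program.  It is entirely hypothetical: the names and
canned replies are mine, and nobody's actual system looks like this;
it only illustrates the lookup/generate distinction.

  # A "big table" responder: every conversational situation is a key,
  # and the reply is looked up verbatim.  Whatever understanding there
  # is lives in whoever filled in the table.
  CANNED_REPLIES = {
      "hello": "Hi there.",
      "do you understand me?": "Of course I do.",
  }

  def table_responder(utterance):
      return CANNED_REPLIES.get(utterance.lower(),
                                "I'm not sure what you mean.")

  # A "generative" responder: the reply is computed from the input,
  # however crudely.  Here the "computation" is a trivial rewrite in
  # the manner of ELIZA.
  def generative_responder(utterance):
      words = utterance.rstrip("?.!").split()
      if not words:
          return "Go on."
      if utterance.endswith("?"):
          return "Why do you ask whether " + " ".join(words) + "?"
      return "Tell me more about " + words[-1] + "."

From the outside, over a short exchange, the two may be hard to tell
apart; the point is that knowing which one is running changes how
much weight the answers deserve.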

But it seems likely that we will know quite a bit about what the
programs are like.  Maybe not: maybe they were trained up as
neural nets and we really don't know how they work in detail.
But it is at least a possibility that we'll know a lot about them.

Maybe it's not "fair" to use all our knowledge, but, if we want to
get the best answer we can, we shouldn't pretend that some of our
knowledge doesn't exist.

A common rejoinder at this point is to suggest that maybe humans
work in the same way as the machines do.  But either we can tell
that they don't, or else this is just the fairness point again: to
be fair, we have to know the same amount (so to speak) about both
humans and machines.  And we don't have to be fair by pretending
we don't know some of what we know.

>discussion may be futile. However, let me ask: what would it take to
>convince you that a machine understands?

You could start by showing me the machine and telling me something
about how it works.

Until we have such machines, or at least know more about what they
would be like, I don't think this is a question that can be answered.

>But please give me a practical answer, and
>not some vague statements which have no practical value.

Please note that there isn't much in the way of practical reason to
want to answer the question.  We don't have any machines that we want
to test for understanding, so it's not as if we'd better solve some
important problems about, say, their status as persons so that we
can draft appropriate legislation.

>I am not a hard-core AI supporter, but I have yet to see a convincing anti-AI
>argument (i.e. that a machine cannot be made to duplicate all the functions of
>a human mind). All this talk about self-awareness, feelings, pain etc. etc.
>is a waste of time till we have _objective_ ways of detecting them.

Let no one say there has been no demand for objective tests.

-- jd


