From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!tdatirv!sarima Fri Jan 31 10:27:14 EST 1992
Article 3290 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <390@tdatirv.UUCP>
Date: 29 Jan 92 21:13:50 GMT
References: <11775@optima.cs.arizona.edu> <1992Jan23.215711.6793@gpu.utcs.utoronto.ca> <1992Jan24.175613.7947@aisb.ed.ac.uk> <1992Jan28.163046.13482@gpu.utcs.utoronto.ca>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 52

In article <1992Jan28.163046.13482@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
|In article <1992Jan24.175613.7947@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
|>Until we have such machines or at least know more about them, I
|>don't think this is a question that can be answered.
|>
|In other words: until we have a machine which understands, or know how to
|build one, we can't say how to recognize whether a machine understands. Is
|this what you are trying to say? It looks to me like an error in elementary
|logic.

Fairly close.  I would say that until we have a testable model of conscious
behavior (read "machine that appears to understand"), we have no basis for
making any real judgements about what can and cannot understand.

I call this the scientific method.  Always, always, *verify* everything with
independently observable evidence.  Right now we have no way of verifying
either position: we have no model to evaluate, and no usable reference point
other than our own minds.

The AI research program is *one* approach that may, in time, provide the
required evidence to resolve the question.  Neurology and psychology are
two others.  The most likely course is for all of the above to contribute
various pieces to the ultimate model.

|I am submitting a 'thought experiment': say we have a machine which seems to
|understand (CR, for instance). You say that to be convinced you have to know
|how it works. Fine, I say, but tell me first what you have to find out to
|accept that the machine understands.

Personally, for now at least, I would be satisfied with ruling out simple
table look-up and preset answers.  I might also want to check that the answers
are in fact derived from internal models of some sort.
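
To make the distinction concrete, here is a minimal sketch of the two kinds
of machine I want to tell apart (the toy questions, function names, and the
little arithmetic "model" are all hypothetical illustrations, nothing more):

# 1. Preset answers: a pure lookup table.  Nothing is derived; every
#    response was written down in advance.
LOOKUP_TABLE = {
    "what is two plus three": "five",
    "what is four plus five": "nine",
}

def lookup_responder(question):
    # Anything outside the preset table draws a blank.
    return LOOKUP_TABLE.get(question.lower().strip("?"), "I don't know")

# 2. Model-based: the answer is computed from an internal representation
#    of number, so novel questions of the same form cost nothing extra.
NUMBERS = {"two": 2, "three": 3, "four": 4, "five": 5,
           "six": 6, "seven": 7, "eight": 8, "nine": 9}

def model_responder(question):
    words = question.lower().strip("?").split()
    if "plus" in words:
        i = words.index("plus")
        total = NUMBERS[words[i - 1]] + NUMBERS[words[i + 1]]
        for word, value in NUMBERS.items():
            if value == total:
                return word
    return "I don't know"

# The probe: a question the table has never seen.
print(lookup_responder("What is three plus two?"))   # -> I don't know
print(model_responder("What is three plus two?"))    # -> five

Ruling out the first design in a real machine is of course far harder than
this, but the behavioural signature is the same: systematic generalization
to inputs that could not all have been enumerated in advance.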


But, since I believe that in practice the cheating approaches to AI mentioned
above are unachievable, I would tend to accept the appearance of understanding
as tentative confirmation (not final, just provisional, pending further
analysis to see whether the machine uses mechanisms congruent with those
used by biological systems).

| Your refusal to specify what you are looking for
|only confirms my suspicion that, even before looking inside, you 'know' that
|the machine does not understand. I don't find this a fair discussion.

No, it is just that at the present state of knowledge it is not possible to
formulate a coherent, reliable set of criteria.

That is why I take the one existing criterion as a good first approximation.

-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)