From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!cs.utexas.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Apr  7 23:22:10 EDT 1992
Article 4708 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!cs.utexas.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: What comes after the Systems Reply?
Message-ID: <6518@skye.ed.ac.uk>
Date: 24 Mar 92 20:31:55 GMT
References: <1992Mar18.035719.3394@psych.toronto.edu> <6428@skye.ed.ac.uk> <1992Mar18.221543.6924@gpu.utcs.utoronto.ca>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 42

In article <1992Mar18.221543.6924@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <6428@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>.....
>>but on the whole the pro-AI argument seems to be stuck at the Systems
>>Reply and the Turing Test.  I don't think there's much to be said
>>about the Systems Reply beyond "Searle is just the CPU, so how would
>>he know?", and the Turing Test is just a way to avoid facing the
>>question of what mechanisms are required (because -- they say --
>>anything that generates the right behavior will do).
>>
>I can't speak for others, but I am certainly not the only one who falls back
>on TT not because it 'is just a way to avoid facing the question of what 
>mechanisms are required (because -- they say -- anything that generates
>the right behavior will do)', but because it is the way in which we judge
>understanding in other people and because there is nothing else available.

I discussed this at length with Daryl McCullough.  I don't think it's
true that that's the way we judge other people; but even if it were,
we can have reasons that don't apply to machines for thinking the TT
works for people (see my exchange with Daryl), and we can have reasons
for thinking that computers following programs produce the behavior
in ways that do not involve understanding (see articles from Gudeman
and others).

>You yourself also avoid the question what other 
>mechanisms are required 

I think you may be mistaking the nature of the argument.  The
argument is not: mechanism M is required, and computers lack M.
The only argument for the existence of a mechanism computers lack
is: computers can't understand; humans can; therefore humans must
have something computers lack.

Note that nothing in this argument requires that we know what it
is that computers lack.  But perhaps we might find out, once we
know more about (a) how humans work and (b) how programs that
pass the Turing Test (if there are such programs) work.

Indeed, my view is that it is still an open question whether or not
computers can understand (just by having the right program).

-- jd
