From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Tue Apr  7 23:22:15 EDT 1992
Article 4717 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: What comes after the Systems Reply?
Message-ID: <1992Mar25.165051.21026@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Mar18.035719.3394@psych.toronto.edu> <6428@skye.ed.ac.uk> <1992Mar18.221543.6924@gpu.utcs.utoronto.ca> <6518@skye.ed.ac.uk>
Date: Wed, 25 Mar 1992 16:50:51 GMT

In article <6518@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <1992Mar18.221543.6924@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>>I can't speak for others, but I am certainly not the only one who falls back
>>on TT not because it 'is just a way to avoid facing the question of what 
>>mechanisms are required (because -- they say -- anything that generates
>>the right behavior will do)', but because it is the way in which we judge
>>understanding in other people and because there is nothing else available.
>
>I discussed this at length with Daryl McCullough.  I don't think it's
>true that that's the way we judge other people; but even if it were,
>we can have reasons that don't apply to machines for thinking the TT
>works for people (see my exchange with Daryl), and we can have reasons
>for thinking that computers following programs produce the behavior
>in ways that do not involve understanding (see articles from Gudeman
>and others).
>
Unless you are referring to a private discussion with Daryl McCullough, I do
not recall you specifying any other way you use to judge whether other people
understand (except of course that 'they are like me', but that would restrict
understanding to humans only by definition). Could you perhaps be so kind as
to briefly reiterate the point?
My recollection is also that Gudeman did not reply to this question (what
other way, except behaviour, does he use to judge that other people
understand). I might of course have overlooked his reply, but I am genuinely
curious what other way of judging understanding there might be.

>>You yourself also avoid the question what other 
>>mechanisms are required 
>
>I think you may be mistaking the nature of the argument.  The
>argument is not: mechanism M is required, and computers lack M.
>The only argument for the existence of a mechanism computers lack
>is: computers can't understand; humans can; therefore humans must
>have something computers lack.
>
It looks like you are right - I thought we were discussing WHETHER computers
can possibly understand (isn't that what Searle's CR construction is about?).
If we take as a premise that 'computers can't understand', what is the
discussion about?
Now, if we are discussing whether computers can understand and I say: 'the
computer gives correct answers, therefore it understands', and you reply: 'not
necessarily, it depends how it works', then aren't you implying that some
mechanism M is required for understanding (although you do not seem to know
even vaguely what it is), and that if computers do not have it they do not
understand, right responses notwithstanding?
In other words, how can you determine that a computer cannot understand,
even though it gives correct answers, unless you require it to have
a special mechanism which it lacks?

>is that computers lack.  But perhaps we might find out, once we
>know more about (a) how humans work and (b) how programs that
>pass the Turing Test (if there are such programs) work.
>
>Indeed, my view is that it is still an open question whether or not
>computers can understand (just by having the right program).
>
Glad to hear this, but what criteria are you going to use to answer the
question?

>-- jd


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


