From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!psinntp!psinntp!scylla!daryl Thu Apr 16 11:34:45 EDT 1992
Article 5120 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: The Systems Reply I
Message-ID: <1992Apr14.012458.7058@oracorp.com>
Organization: ORA Corporation
Date: Tue, 14 Apr 1992 01:24:58 GMT

jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>>I can only reiterate what I have said before.  If you wish to show that
>>computers lack something that humans possess, it seems to me that you
>>need to show (a) that computers lack it, and (b) that humans possess
>>it. If you only prove (a) then you have not proved your point.
>
>Not to your satisfaction, perhaps.  I see no reason to _prove_ that
>humans have understanding in the sense required for the Chinese Room,
>for instance. I see no more need to prove this than to prove that
>my coffee cup is not the most intelligent being in the universe.

I sincerely promise never to ask you to prove that your coffee cup is
not intelligent. As a matter of fact, I would prefer it if you never
talked about your coffee cup again.

I don't doubt human understanding. What I find dubious is when Searle
(or whoever) says "Conscious minds have such and such property, which
computers lack". There is no reason for me to believe that all
conscious minds have some property unless I at least have reason to
believe that *human* minds have the property.

You keep accusing me of being a verificationist, but I am not asking
for proof; I am only asking for reasons to believe. Introspection is
perfectly fine as a means of demonstrating properties of the mind.
However, my introspection does *not* show me that my brain is not
working like a computer.

> What interests me in this is whether or not computers can understand,
> and not in whether or not I can convince a determined skeptic that
> humans can understand.  Other people may have different interests, of
> course.

Nobody is disputing that humans understand, just like nobody is
disputing that your you-know-what is not the most intelligent being in
the universe. However, if you rephrased the question "Can computers
understand?" to be "Can computers do what we call 'understanding' when
done by humans?" then it becomes clearer that the answer must involve
comparing what humans do with what computers do.

> Now, if an argument against computer understanding also applied to
> humans, I would regard that as reason to conclude the argument was
> wrong. But I'm certainly not going to conclude the argument is wrong
> just because no one has yet shown it doesn't apply to humans. Why
> should I?

Why should you conclude that it is right? It is an incomplete
argument, an argument with steps missing. It doesn't show anything
until those steps are filled in. And the missing steps are not in
showing that humans are capable of understanding; they are in showing
that humans have whatever it is that is claimed to be necessary for
intelligence.

Daryl McCullough
ORA Corp.
Ithaca, NY
